Photon motion in Kerr–de Sitter spacetimes

Daniel Charbulák, Zdeněk Stuchlík
daniel.charbulak@fpf.slu.cz, zdenek.stuchlik@fpf.slu.cz
Institute of Physics and Research Centre of Theoretical Physics and Astrophysics, Faculty of Philosophy and Science, Silesian University in Opava, Bezručovo nám. 13, CZ-746 01 Opava, Czech Republic

December 30, 2023
=============================

We study the general motion of photons in the Kerr–de Sitter (KdS) black hole and naked singularity spacetimes. The motion is governed by the impact parameter X, related to the axial symmetry of the spacetime, and q, related to its hidden symmetry. Appropriate 'effective potentials' governing the latitudinal and radial motion are introduced and their behaviour is examined by the 'Chinese boxes' technique, giving the regions allowed for the motion in terms of the impact parameters. Restrictions on the impact parameters X and q are established in dependence on the spacetime parameters M, Λ, a. The motion can be of orbital type (crossing the equatorial plane, q>0) and vortical type (tied above or below the equatorial plane, q<0). It is shown that for negative values of q, the reality conditions imposed on the latitudinal motion yield stronger constraints on the parameter X than those following from the reality condition of the radial motion, excluding the existence of vortical motion of constant radius. Properties of the spherical photon orbits of the orbital type are determined and used, along with properties of the effective potentials, as criteria for the classification of the KdS spacetimes according to the properties of the photon motion.

§ INTRODUCTION

In the framework of the inflationary paradigm <cit.>, recent cosmological observations indicate that a very small relict vacuum energy (equivalently, a repulsive cosmological constant Λ > 0), or, generally, a dark energy demonstrating repulsive gravitational effect, has to be introduced to explain the dynamics of the recent Universe <cit.>. These conclusions are strongly supported by the observations of distant type Ia supernova explosions indicating that, starting at the cosmological redshift z ≈ 1, the expansion of the Universe is accelerated <cit.>. The total energy density of the Universe is very close to the critical energy density ρ_crit corresponding to the almost flat universe predicted by the inflationary scenario <cit.>, and the dark energy represents about 70% of the energy content of the observable universe <cit.>. These conclusions are confirmed by recent measurements of the cosmic microwave background anisotropies by the space satellite observatory PLANCK <cit.>. The dark energy equation of state is very close to that corresponding to the vacuum energy <cit.>. Therefore, it is relevant to study the astrophysical consequences of the effect of the observed cosmological constant implied by the cosmological tests to be Λ ≈ 1.3 × 10^-56 cm^-2, and the related vacuum energy ρ_vac ∼ 10^-29 g/cm^3, close to the critical density of the universe. The repulsive cosmological constant changes significantly the asymptotic structure of black-hole, naked singularity, or any compact-body backgrounds, as such backgrounds become asymptotically de Sitter spacetimes, and an event horizon (the cosmological horizon) always exists, behind which the geometry is dynamic.
Substantial influence of the repulsive cosmological constant has been demonstrated for astrophysical situations related to active galactic nuclei and their central supermassive black holes <cit.>. The black hole spacetimes with the Λ term are described in the spherically symmetric case by the vacuum Schwarzschild–de Sitter (SdS) geometry <cit.>, while the internal, uniform density SdS spacetimes are given in <cit.>. The axially symmetric, rotating black holes are determined by the Kerr–de Sitter (KdS) geometry <cit.>. In the spacetimes with the repulsive cosmological term, the motion of photons was extensively investigated in many papers <cit.>. The motion of massive test particles was studied in <cit.>. The KdS geometry can be relevant also for the so-called Kerr superspinars representing an alternative to black holes <cit.>, breaking the black hole bound on the dimensionless spin and exhibiting a variety of unusual physical phenomena <cit.>. It is worth noting that the SdS and KdS spacetimes are equivalent to some solutions of the f(R) gravity representing black holes and naked singularities <cit.>. The role of the cosmological constant can be significant for both the geometrically thin Keplerian accretion discs <cit.> and the toroidal accretion discs <cit.> orbiting supermassive black holes (Kerr superspinars) in the central parts of giant galaxies. Both high-frequency quasiperiodic oscillations and jets originating at the accretion discs can be reflected by current-carrying string loops in the SdS and KdS spacetimes <cit.>. In the spherically symmetric spacetimes, the Keplerian and toroidal disc structures can be precisely described by the pseudo-Newtonian potential of Paczynski type <cit.>, which appears to be useful also in studies of the motion of interacting galaxies <cit.>, demonstrating the relation of the gravitationally bound galactic systems to the so-called static radius of the SdS or KdS spacetimes <cit.>. This idea has been confirmed by the recent study of general relativistic static polytropic spheres in spacetimes with the repulsive cosmological constant <cit.>.

The present paper is devoted to a detailed study of the properties of the photon motion in the KdS black hole and naked singularity spacetimes. We concentrate our attention on the behaviour of the effective potentials determining the regions allowed for the photon motion. Such a study is necessary for full understanding of the optical phenomena occurring in the black hole or naked singularity spacetimes with the repulsive cosmological constant. We generalize the previous work concentrated on the properties of the photon motion in the equatorial plane <cit.>, discussing properties of the effective potential of the latitudinal motion in terms of the motion constant related to the equatorial plane, and then continuing with a study of the effective potential of the radial motion. We concentrate our study on the spherical photon orbits representing a natural generalization of the photon circular geodesics that enables a natural classification of the KdS spacetimes according to the properties of the null geodesics representing the photon motion.

§ KERR–DE SITTER SPACETIME AND CARTER'S EQUATIONS OF GEODESIC MOTION

§.§ Kerr–de Sitter geometry

The line element describing the KdS geometry is in the standard Boyer–Lindquist coordinates, using the geometric system of units (c = G = 1), given by

ds^2 = -Δ_r/(I^2ρ^2) (dt - a sin^2θ dϕ)^2 + Δ_θ sin^2θ/(I^2ρ^2) [a dt - (r^2 + a^2) dϕ]^2 + ρ^2/Δ_r dr^2 + ρ^2/Δ_θ dθ^2,

where

Δ_r = (1 - 1/3 Λr^2)(r^2 + a^2) - 2Mr, Δ_θ = 1 + 1/3 Λa^2 cos^2θ, I = 1 + 1/3 Λa^2, ρ^2 = r^2 + a^2cos^2θ.
Here, as usual, we denoted by M the mass of the central gravitating body, by a its specific angular momentum (a = J/M), and by Λ the cosmological constant. In order to simplify the discussion of the following equations, it is convenient to introduce a new cosmological parameter y = 1/3 ΛM^2, and to use dimensionless quantities, redefining them such that s/M → s, t/M → t, r/M → r, a/M → a, which is equivalent to putting M = 1. The above expressions then read

Δ_r = (1 - y r^2)(r^2 + a^2) - 2r, Δ_θ = 1 + a^2y cos^2θ, I = 1 + a^2y,

with equation (<ref>) being left unchanged. The physical singularity is located, as in the Kerr geometry, at the ring r = 0, θ = π/2. The black hole horizons are determined by the condition Δ_r = 0 and their loci can be determined by the relation

y = y_h(r; a^2) ≡ (r^2 - 2r + a^2)/(r^2(r^2 + a^2)).

The zeros of y_h(r; a^2), determining the loci of the black hole horizons in pure Kerr spacetimes, are given by the relation a^2 = a^2_z(h)(r) ≡ 2r - r^2, and the loci of its extrema are given by the functions a^2 = a^2_ex(h)±(r) ≡ r(1 - 2r ± √(1+8r))/2, where the function a^2_ex(h)-(r) < 0 in its whole definition range, hence it is irrelevant. The functions y_h(r; a^2), a^2_z(h)(r) and a^2_ex(h)±(r) will be needed in the section devoted to the discussion of the radial motion. Three event horizons, two black hole horizons r_-, r_+, and the cosmological horizon r_c (r_- < r_+ < r_c), exist for y_min(h)(a^2) < y < y_max(h)(a^2), where the limits y_min/max(h)(a^2) correspond to the local minimum or local maximum of the function y_h(r; a^2), respectively, for given rotational parameter a. For 0 < y < y_min(h)(a^2) or y > y_max(h)(a^2), naked singularity spacetimes exist. The limit case y = y_min(h)(a^2) corresponds to an extreme black hole spacetime, when the two black hole horizons coalesce. If y = y_max(h)(a^2), the outer black hole and cosmological horizons merge. There exists a critical value of the rotational parameter a^2_crit = 1.212 02, for which the two local extrema of the function y_h(r; a^2) coalesce in an inflection point at r_crit = 1.616 03 with the critical value y_crit = 0.0592. Thus, for a^2 > a^2_crit only naked singularity spacetimes can exist for any y > 0. Properties of the event horizons for the more general case of the Kerr–Newman–de Sitter spacetimes can be found in <cit.>.

§.§ Carter's equations of geodesic motion

The motion of test particles and photons following the geodesics of the KdS spacetime is described by the well-known Carter equations <cit.>

ρ^2 dθ/dλ = ±√(W(θ; E, Φ, 𝒦, y, a)),
ρ^2 dr/dλ = ±√(R(r; E, Φ, 𝒦, y, a)),
ρ^2 dφ/dλ = aI^2[E(r^2+a^2) - aΦ]/Δ_r - I^2[aE sin^2θ - Φ]/(Δ_θ sin^2θ),
ρ^2 dt/dλ = I^2(r^2+a^2)[E(r^2+a^2) - aΦ]/Δ_r - aI^2[aE sin^2θ - Φ]/Δ_θ,

where

W(θ; E, Φ, 𝒦, y, a) = 𝒦Δ_θ - I^2(aE sin^2θ - Φ)^2/sin^2θ

and

R(r; E, Φ, 𝒦, y, a) = [IE(r^2 + a^2) - IaΦ]^2 - Δ_r 𝒦.

Here E and Φ are the constants of motion connected respectively with the time and axial symmetry of the KdS geometry, and 𝒦 is the fourth Carter constant of motion connected with the hidden symmetry of the KdS geometry. Another constant of motion is the rest mass m (energy) of the test particle; for photons m = 0. Recall that E and Φ cannot be interpreted as energy and axial component of the angular momentum of the test particle at infinity, because, due to the presence of the cosmological Λ term, the geometry is not asymptotically flat, but de Sitter <cit.>. A detailed discussion of the equatorial motion of photons in the KdS spacetimes has been published in <cit.>. The circular motion of test particles in the KdS spacetimes has been presented in <cit.>. Here we restrict our attention to the general motion of photons in the KdS spacetimes.
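Throughout the following discussion the loci of the event horizons for given (a^2, y) are needed repeatedly. As a purely illustrative aid – not part of the original analysis – a minimal numerical sketch, assuming Python with NumPy, that finds the horizons as the positive roots of the quartic Δ_r(r) = 0 in the dimensionless units introduced above:

```python
import numpy as np

def horizons(a2, y):
    """Positive real roots of Delta_r = -y r^4 + (1 - a2 y) r^2 - 2 r + a2 = 0,
    i.e. the event horizons r_- < r_+ < r_c of a KdS spacetime (units M = 1)."""
    roots = np.roots([-y, 0.0, 1.0 - a2 * y, -2.0, a2])
    real = roots[np.abs(roots.imag) < 1e-10].real
    return np.sort(real[real > 0.0])

# Three positive roots signal a black hole spacetime, one a naked singularity:
print(horizons(0.9, 1e-4))   # black hole: roughly [0.68, 1.32, 99.0]
print(horizons(1.3, 1e-4))   # naked singularity: only the cosmological horizon
```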
In fact, the motion of photons is independent of the constant of motion E and depends only on the ratio Φ/E (E ≠ 0), usually referred to as the impact parameter ℓ, and on the parameter 𝒦/E^2. For our general discussion it is convenient to use Q = 𝒦 - I^2(Φ - aE)^2, which vanishes for the equatorial motion. For our purposes it is, however, following the paper <cit.>, convenient to introduce a new constant of motion X ≡ ℓ - a. Further, the constant q ≡ Q/(I^2E^2) will be applied. Then the relations (<ref>) and (<ref>) simplify to the form

W(θ; X, q, y, a) ≡ I^2E^2[(X^2+q)Δ_θ - (a cos^2θ + X)^2/sin^2θ],
R(r; X, q, y, a) ≡ I^2E^2[(r^2 - aX)^2 - Δ_r(X^2 + q)].

Following the work <cit.> we study the general photon motion in terms of the parameter X. However, since we consider the non-equatorial motion here, it is also necessary to find out the restrictions to be imposed on the parameter X that follow from the reality conditions of the latitudinal motion. The latitudinal motion in the KdS spacetimes has already been investigated <cit.>; however, the discussion has been related to the motion constant 𝒦. Here we give the discussion of the effective potential of the latitudinal motion related to the motion constant Q, as it is convenient for the purposes of our study.

§ LATITUDINAL MOTION

Because it is more convenient to work with algebraic functions instead of trigonometric ones, we introduce a new variable

m = cos^2θ, dm = 2 sign(θ - π/2) √(m(1-m)) dθ.

This implies replacing the equation (<ref>) by

ρ^2 dm/dλ = ±2√(M(m; X, q, y, a)),

where

M(m; X, q, y, a) ≡ I^2E^2 m[(1-m)(X^2+q)Δ_m - (am+X)^2]

with the notation Δ_m = 1 + a^2ym. Note that dm/dλ = 0 does not necessarily imply dθ/dλ = 0, since it can mean just a transit through the equatorial plane or the polar axis. Therefore, in some cases, in order to avoid any doubts, we rather discuss the behaviour of the function (<ref>). The reality condition M(m; X, q, y, a) ≥ 0 can be expressed by the relations

X^θ_-(m; q, y, a) ≤ X ≤ X^θ_+(m; q, y, a)

in regions where Δ_m - a^2y > 0, i.e., equivalently, m > m_d, where m_d = 1 - 1/(a^2y) is the solution of the equation Δ_m - a^2y = 0, and by the relations

X ≤ X^θ_+(m; q, y, a), X^θ_-(m; q, y, a) ≤ X,

in regions where Δ_m - a^2y < 0, i.e., m < m_d, which requires y > 1/a^2. The functions X^θ_±(m; q, y, a), regarded as 'effective potentials' governing the latitudinal motion, are defined by

X^θ_±(m; q, y, a) ≡ [-am ± √(m(1-m)Δ_m [a^2m + q(Δ_m - a^2y)])]/[m(Δ_m - a^2y)].

The functions X^θ_±(m; q, y, a) thus determine the regions allowed for the latitudinal motion, the conditions X = X^θ_±(m; q, y, a) giving the turning points. In order to understand the behaviour of the functions X^θ_±(m; q, y, a), it is necessary to find the reality regions, and the loci of their local extrema and divergences. Following <cit.>, we shall perform this analysis using the well-known procedure called the 'Chinese boxes' technique, adopting the labelling of the appropriate characteristic functions in a similar way. The parameters are of various significance – q is a constant of motion, whereas a, y govern the geometry. The natural choice is therefore to describe the properties of the potentials X^θ_±(m; q, y, a) by a family of functions q(m; y, a), and the properties of these functions by further families of functions of the variable m with the number of parameters lowered by one, with the spacetime parameters excluded last. In the following analysis the relevant range of the variable m is, of course, 0 ≤ m ≤ 1, but in some places, in order to better understand the behaviour of the characteristic functions, we formally permit m ∈ ℝ. First we shall determine the reality region of X^θ_±(m).
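For illustration, the latitudinal potentials defined above can be evaluated directly; in the following sketch (Python/NumPy assumed, with arbitrary sample parameters of our choice) the NaN values mark points where X^θ_± is not defined, so a plot of X^θ_±(m) against a line X = const. immediately displays the turning points of the latitudinal motion:

```python
import numpy as np

def x_theta(m, q, a, y, sign=+1):
    """Latitudinal effective potential X^theta_(sign) as a function of m = cos^2(theta).
    Returns NaN where the square root is imaginary or the expression diverges."""
    a2 = a * a
    dm = 1.0 + a2 * y * m                                   # Delta_m
    disc = m * (1.0 - m) * dm * (a2 * m + q * (dm - a2 * y))
    with np.errstate(invalid="ignore", divide="ignore"):
        return (-a * m + sign * np.sqrt(disc)) / (m * (dm - a2 * y))

# Sample: y < 1/a^2 and -a^2 < q < 0 (vortical regime, cf. Fig. 2a, 2b)
m = np.linspace(1e-6, 1.0, 501)
x_up, x_down = x_theta(m, -0.1, 1.0, 0.02, +1), x_theta(m, -0.1, 1.0, 0.02, -1)
```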
The reality region is given by

q ≥ q^θ_r(m; y, a^2) for Δ_m - a^2y > 0,
q ≤ q^θ_r(m; y, a^2) for Δ_m - a^2y < 0,

where

q^θ_r(m; y, a^2) ≡ a^2m/(a^2y - Δ_m).

Of course, this function also determines the common points of the potentials X^θ_-(m; q, y, a) and X^θ_+(m; q, y, a), whose values are then

X^θ_c = X_(±)(m; q=q^θ_r, y, a) = X_(±)(m; y, a) ≡ a/(a^2y - Δ_m).

Of particular importance, if defined, is the value X_(±)(1; y, a) = -a (see below). The divergence points of the functions q^θ_r(m; y, a^2), X_(±)(m; y, a) and X^θ_-(m; q, y, a) are determined by

y = y^θ_d(m; a^2) ≡ 1/(a^2(1-m)).

Both the functions X^θ_±(m; q, y, a) can diverge, if well defined, for m = 0; further divergences are given by the function y^θ_d(m; a^2) for the potential X^θ_-(m; q, y, a), but there are no other divergences for the potential X^θ_+(m; q, y, a), as can be seen if we rewrite the definition (<ref>) in the alternative form

X^θ_±(m; q, y, a) = [a^2m^2 - q(1-m)Δ_m]/[-am ∓ √(m(1-m)Δ_m[a^2m + q(Δ_m - a^2y)])].

The function y^θ_d(m; a^2) → ∞ for m → 1 from the left. There are no local extrema of this function and for 0 ≤ m < 1 it is increasing. For m = 0 we get y^θ_d(0; a^2) = 1/a^2. The point m_d given by the definition (<ref>) determines the loci where the functions q^θ_r(m; y, a^2), X_(±)(m; y, a) and X^θ_-(m; q, y, a) diverge; it occurs in the relevant interval (0;1) for y > 1/a^2, and m_d → 1 for a^2y → ∞. In such a case, q^θ_r(m; y, a^2) → +∞ (-∞) for m → m_d from the left (right). From the equality

∂q^θ_r/∂m = a^2(a^2y - 1)/(Δ_m - a^2y)^2

one can see that the function q^θ_r(m; y, a^2) has no local extrema and is decreasing for y < 1/a^2, or piecewise increasing with the discontinuity point m_d for y > 1/a^2, i.e., q^θ_r(m; y, a^2) → +∞ (-∞) for m → m_d from the left (right). It always holds that q^θ_r(m=0; y, a^2) = 0 and q^θ_r(m=1; y, a^2) = -a^2. In the special case y = 1/a^2 we get q^θ_r(m; y=1/a^2, a^2) = const. = -a^2 for m ≠ 0, with lim_m→0 q^θ_r(m; y=1/a^2, a^2) = -a^2. Based on the conditions (<ref>), (<ref>) and the above characteristic functions, we can complete setting the definition range of the potentials X^θ_±(m; q, y, a), which we leave to the end of this section.

Now we shall determine the loci of the local extrema of the effective potentials X^θ_±(m; q, y, a). They can be derived from the condition ∂X^θ_±/∂m = 0, which implies the equation

(a^2m^2 + q)(Δ_m - a^2y)[a^2m^2I^2 + q(1 - a^2y + 2a^2my)^2] = 0.

It can be verified that the function X^θ_+(m; q, y, a) has local extrema given by the relation

q = q^θ_ex(+)(m; a^2) ≡ -a^2m^2.

A discussion of this function is trivial, so we only note that it is independent of the cosmological parameter y and renders the loci of extrema only for -a^2 ≤ q ≤ 0, while, as we shall see below, they can exist even for q < -a^2. The character of these extrema is revealed by inserting this expression into the second derivative, which yields

∂^2X^θ_+/∂m^2 (m; q=q^θ_ex(+), y, a) = -a/[m(1-m)Δ_m];

clearly they must be maxima. From the equation (<ref>) we find that further extrema of the potentials X^θ_±(m; q, y, a) are determined by the condition

q = q^θ_ex(±)(m; y, a^2) ≡ -a^2m^2I^2/[Δ_m - a^2y(1-m)]^2.

The divergences of the functions q^θ_ex(±)(m; y, a^2) are determined by the relation

y = y^θ_d(ex±)(m; a^2) ≡ 1/(a^2(1-2m)).

The function y^θ_d(ex±)(m; a^2) is positive valued at 0 ≤ m < 0.5, where y^θ_d(ex±)(m; a^2) → +∞ for m → 0.5 from the left. For m = 0, there is y^θ_d(ex±)(0; a^2) = y^θ_d(0; a^2) = 1/a^2. From the properties of the function y^θ_d(ex±)(m; a^2) we deduce that the function q^θ_ex(±)(m; y, a^2) can diverge only if y > 1/a^2, at m = m_d(ex) ≡ 0.5(1 - 1/(a^2y)) = 0.5 m_d, located such that 0 ≤ m_d(ex) < 0.5.
Obviously q^θ_ex(±)(m; y, a^2) → -∞ as m → m_d(ex). In the following we shall determine the monotonicity and the possible existence of local extrema of the function q^θ_ex(±). From

∂q^θ_ex(±)/∂m = 2m(a^2y - 1)a^2I^2/[Δ_m - a^2y(1-m)]^3

it is clear that there are no local extrema of q^θ_ex(±)(m; y, a^2) in the interval m ∈ (0;1). The derivative changes its sign at the divergence point m_d(ex), which reflects the behaviour given by (<ref>). For y < 1/a^2, there is ∂q^θ_ex(±)/∂m < 0 for m ∈ ⟨0;1⟩, that is, q^θ_ex(±)(m; y, a^2) is decreasing. In the limit case y = 1/a^2, we get q^θ_ex(±)(m; y=1/a^2, a^2) = -a^2 = q^θ_r(m; y=1/a^2, a^2). Comparing both the functions q^θ_ex+(m; a^2) and q^θ_ex(±)(m; y, a^2), we find that q^θ_ex(±)(m; y, a^2) ≤ q^θ_ex+(m; a^2) ≤ 0 and that they have common points at m = 0, 1, with q^θ_ex+(0; a^2) = q^θ_ex(±)(0; y, a^2) = 0 and q^θ_ex+(1; a^2) = q^θ_ex(±)(1; y, a^2) = -a^2.

In the next step we shall characterize the extrema given by q^θ_ex(±)(m; y, a^2). First we find that

∂X^θ_±/∂m (m; q=q^θ_ex(±), y, a) = a^3y{[1 - a^2y(1-2m)] ± (1 - a^2y)}/[1 - a^2y(m-1)]^3.

If we now require ∂X^θ_+/∂m (m; q=q^θ_ex(±), y, a) = 0 somewhere at 0 < m < 1, we obtain the conditions m > m_d(ex) and y > 1/a^2, which ensure

∂^2X^θ_+/∂m^2 (m; q=q^θ_ex(±), y, a) = aI^2/{m(1-m)Δ_m(1 - a^2y)[Δ_m - a^2y(1-m)]} < 0.

Considering the previous results, we can conclude that the functions q^θ_ex(±)(m; y, a^2) determine local maxima of the potential X^θ_+(m; q, y, a) for q < -a^2, which occur on this curve in the case y > 1/a^2 in the interval m ∈ (m_d(ex);1). Proceeding in the same way with the function X^θ_-(m; q, y, a), we first find that the equation ∂X^θ_-/∂m (m; q=q^θ_ex(±), y, a) = 0 always has a solution for some m ∈ (0;1) in the case y < 1/a^2, but for y ≥ 1/a^2 this solution must fulfil m < m_d(ex). Substituting q = q^θ_ex(±) into the second derivative of X^θ_-(m; q, y, a) yields the same expression as that in (<ref>), but now with the above conditions we have ∂^2X^θ_-/∂m^2 (m; q=q^θ_ex(±), y, a) > 0, indicating local minima. Therefore, the function q^θ_ex(±) gives local minima of X^θ_-(m; q, y, a) for y < 1/a^2 on the whole interval m ∈ (0;1), and for y > 1/a^2 at m ∈ (0;m_d(ex)). In the special case y = 1/a^2, the function q^θ_ex(±) reduces to the form

q^θ_ex(±)(m; y=1/a^2, a^2) = -a^2;

we can easily convince ourselves that the function X^θ_- has no local extremum in such a case, and the extrema of X^θ_+ are given by the function q^θ_ex+(m; y, a). The conditions (<ref>), (<ref>) ensuring the allowance of the latitudinal motion must be complemented by the case when the functions X^θ_± are not defined. Their definition range is given by the relations (<ref>), (<ref>), but it can be shown that the violation of the latter one implies M(m; X, q, y, a) > 0. In such a case, the latitudinal motion is allowed for any impact parameter X (see the details in the discussion below). All characteristic functions are depicted in Fig. 1, and the graphs of the potentials in Fig. 2, for selected representative values of the parameters. Now we are able to discuss the behaviour of the potentials X^θ_±(m; q, y, a) for various representative values of their parameters. The intersections of a line X = const. with the curves X^θ_±(m; q, y, a) represent the turning points in the variable m. From the knowledge of these functions we can thus get qualitative insight into the character of the latitudinal motion. This entitles us to the following classification of the KdS spacetimes and a brief description of the latitudinal motion.
The basic division evidently consists of the cases y < 1/a^2, y = 1/a^2, y > 1/a^2:

*Case y < 1/a^2

* q < -a^2 – the definition range of the potentials is an empty set; the latitudinal motion is not possible;

* q = -a^2 – the potentials X^θ_±(m; q, y, a) are defined only for m = 1, where X^θ_+(1; q, y, a) = X^θ_-(1; q, y, a) = -a; photons with such values of the parameters are a special case of the so-called PNC photons 'radially' moving along the spin axis <cit.>;

* -a^2 < q < 0 (Fig. 2a, 2b)
– both the potentials are defined for m ∈ ⟨m_l;1⟩, where the lower limit

m_l = q(a^2y - 1)/[a^2(qy + 1)] > 0

is the solution of the equation q = q^θ_r(m; y, a^2); the limits of the interval are the common points of the potentials, where

X^θ_±(m=m_l; q, y, a) = X^θ_(±)(m_l) = a(1 + qy)/(a^2y - 1) < 0;

– the latitudinal motion is allowed for values of the parameter X between some local minimum X^θ_min(-) = X^θ_-(m_min(-); q, y, a) and maximum X^θ_max(+) = X^θ_+(m_max(+); q, y, a), for which

X^θ_min(-) < -a < X^θ_max(+) < 0;

the locus m_min(-) of the minimum X^θ_min(-) is given by the equation (<ref>), the locus m_max(+) of the maximum X^θ_max(+) is determined by the relation (<ref>);
– if X takes one of these extremal values, then the trajectory of such a photon lies entirely on the cones θ = arccos√(m_ex), θ = π - arccos√(m_ex), where m_ex ∈ {m_min(-), m_max(+)}; such photons are called PNC photons <cit.>;
– for X^θ_min(-) < X < X^θ_max(+) there are two solutions m_1 < m_2 of each of the two equations X = X^θ_±(m; q, y, a), implying that the photon executes so-called vortical motion, which is restricted between two pairs of cones, symmetrically placed relative to the equatorial plane:

0 < arccos√(m_2) ≤ θ ≤ arccos√(m_1) < π/2

and

π/2 < π - arccos√(m_1) ≤ θ ≤ π - arccos√(m_2) < π;

– in the special case X = -a one of the turning points is m_2 = 1, which represents a transit through the spin axis; such a photon therefore oscillates above one of the poles in the cone delimited by the angle θ = arccos√(m_1);
– from the preceding discussion it follows that we can expect that the case X = -a represents a change in the azimuthal direction with respect to some privileged family of observers;

* q = 0 (Fig. 2c)
– the expression in the definition (<ref>) can be reduced to

X^θ_±(m; y, a) = -a[1 ∓ √((1-m)Δ_m)]/(Δ_m - a^2y),

whose validity can be extended, without any repercussion on the correctness of the analysis, even to m = 0; the definition range of the potentials is thus ⟨0;1⟩;
– from the equality W(θ=π/2; X, q, y, a) = q it follows that at least in the equatorial plane the (radial) motion always exists for q = 0, where it can be either stable or unstable (see below); for q > 0 the equatorial plane is crossed, for q < 0 it cannot be reached;
– there are no extrema of the potentials – X^θ_+(m; q, y, a) is decreasing, X^θ_-(m; q, y, a) is increasing; the permissible values of X for which the latitudinal motion exists are still confined to an interval with the limits

X^θ_min(-) = X^θ_-(m=0; q=0, y, a) = 2a/(a^2y - 1),
X^θ_max(+) = X^θ_+(m=0; q=0, y, a) = 0,

where X^θ_min(-) < X^θ_max(+);
– if X ≤ X^θ_min(-) or X ≥ X^θ_max(+), then the requirement W(θ) ≥ 0 is fulfilled only if θ = π/2, and in such a case dθ/dλ = 0, thus the motion is stably confined to the equatorial plane;
– for X^θ_min(-) < X < X^θ_max(+) a photon initially released in a direction off the equatorial plane is once reflected at θ = arccos√(m_0) or θ = π - arccos√(m_0), respectively, where m_0 denotes the only solution of X = X^θ_±(m; q, y, a); another point where dθ/dλ = 0 is now in the equatorial plane; however, the equality d^2θ/dλ^2 = 0 implies halting in the latitudinal direction; the function W(θ) has a local minimum at θ = π/2, which indicates, as follows from a perturbation analysis, instability in the equatorial plane;
– if specially X = -a, then m_0 = 1, thus a photon initially directed off the equatorial plane crosses the spin axis and is finally captured in the equatorial plane;

* q > 0 (Fig. 2d)
– the potentials are defined for m ∈ (0,1⟩; they are monotonic in the same manner as in the case q = 0, but X^θ_+(m; q, a, y) → +∞ and X^θ_-(m; q, y, a) → -∞ as m → 0;
– from the behaviour of the potentials it follows that for X ≠ -a a photon is forced to oscillate in the θ-direction through the equatorial plane between two cones governed by arccos√(m_0) ≤ θ ≤ π - arccos√(m_0), with m_0 of the same meaning as above;
– the case X = -a represents the motion above both poles;
– the foregoing conclusion gives us reason to suspect that the cases X < -a and X > -a differ in the azimuthal direction relative to some family of stationary observers, corresponding to ℓ < 0 and ℓ > 0, respectively;

*Case y = 1/a^2
– the potentials simplify into the form

X^θ_±(m; q, a) = [-a ± √((1-m^2)(q+a^2))]/m;

* q < -a^2
– the potentials are not defined, thus the latitudinal motion is not allowed;

* q = -a^2
– the curves X = X^θ_±(m; q=-a^2, y=1/a^2, a) coalesce, since

X^θ_+(m; q=-a^2, y=1/a^2, a) = X^θ_-(m; q=-a^2, y=1/a^2, a) = X^θ_(±)(m; a) ≡ -a/m;

– for X ≤ -a there is one solution of the equation X = X^θ_(±)(m; a), which gives m = m_(±) ≡ -a/X; this corresponds to PNC photons moving along the cones θ = arccos√(m_(±)), θ = π - arccos√(m_(±));
– for X → -∞ the cones approach the equatorial plane;
– if specially X = -a, the cones degenerate to the spin axis, therefore such PNC photons move along the spin axis;
– for X > -a there is no motion allowed;

* -a^2 < q < 0 (Fig. 2e, 2f)
– the potentials are both defined for m ∈ (0;1⟩; there is one local maximum X^θ_max(+), given by (<ref>), of the function X^θ_+(m; q, y, a) and no extremum of X^θ_-(m; q, y, a); it holds that X^θ_-(m; q, y, a) < X^θ_+(m; q, y, a) < 0 and X^θ_-(m; q, y, a), X^θ_+(m; q, y, a) → -∞ as m → 0 from the right;
– if X < -a or -a < X < X^θ_max(+), the vortical motion exists;
– for X = -a the 'inner' cones coalesce with the spin axis, thus the vortical motion involves crossing the poles;
– for X = X^θ_max(+) both the 'inner' and 'outer' cones coalesce, giving thus rise to PNC photons;
– if X > X^θ_max(+), no motion is allowed;

* q = 0 (Fig. 2g)
– the same discussion holds as in the case y < 1/a^2, except that the motion exists for X arbitrarily small;

* q > 0 (Fig. 2h)
– the same conclusions hold as in the case y < 1/a^2;

*Case y > 1/a^2

* q < -a^2 (Fig. 2i)
– the definition range of both potentials is an interval (0;m_u⟩ (see the purple curve in Fig. 1d), where the upper limit m_u < 1 is given, as m_l in the previous case, by (<ref>);
– there is X^θ_+(m; q, y, a) → -∞ and X^θ_-(m; q, y, a) → +∞ as m → 0; moreover, X^θ_-(m; q, y, a) now diverges at m = m_d, which is the solution of (<ref>), and X^θ_-(m; q, y, a) → +∞ (-∞) as m → m_d from the left (right);
– there are thus two regions of permissible values of X in the (m,X)-plane for which the motion can exist: the lower one bounded by the graph of X^θ_+ and the lower branch of X^θ_-, which at m = m_u join into a continuous curve, and the upper region given by the upper branch of X^θ_-; the motion is therefore allowed for X ≤ X^θ_max(+) < -a or X ≥ X^θ_min(-) > 0, where the loci of the local extrema X^θ_max(+), X^θ_min(-) are given by (<ref>) (see the blue curve in Fig. 1d);
– if X < X^θ_max(+) or X > X^θ_min(-), the photon executes vortical motion; the cases X = X^θ_max(+), X = X^θ_min(-) correspond to PNC photons;
– for X = X^θ_-(m_u) = X^θ_+(m_u) = a(1+qy)/(a^2y-1), the inner cones delimiting the vortical motion are the narrowest;
– for X → -∞ or X → +∞ the outer cones given by the angles

θ = arccos√(m_1), θ = π - arccos√(m_1)

approach the equatorial plane since m_1 → 0; for the inner cones

θ = arccos√(m_2), θ = π - arccos√(m_2),

there is m_2 → m_d = 1 - 1/(a^2y);

* q = -a^2 (Fig. 2j)
– there is no local extremum of the function X^θ_+(m; q, y, a), which is now increasing; it holds that m_u = 1, X^θ_-(m_u) = X^θ_+(m_u) = X^θ_+(max) = -a, hence for X = -a both the inner and outer cones coalesce with the spin axis, which again corresponds to an 'axial' PNC photon;
– other PNC photons exist for X = X^θ_min(-) > 0;
– there are no other qualitative differences from the case q < -a^2;

* -a^2 < q < 0 (Fig. 2k)
– the definition range is an interval (0;1⟩ and the divergences of the potentials are the same as above;
– the function X^θ_+(m; q, y, a) now has a local maximum X^θ_max(+), -a < X^θ_max(+) < 0, with X^θ_max(+) → 0 for q → 0, determined by the equation (<ref>);
– the case X = -a now corresponds to vortical motion above the poles – the inner cones have coalesced with the spin axis, the outer ones stay open;
– the vortical motion exists as in the previous cases and, moreover, for -a < X < X^θ_max(+);

* q = 0 (Fig. 2l)
– the definition (<ref>) holds, the functions X^θ_+(-)(m; q, y, a) are defined at ⟨0,1⟩ (⟨0,1⟩∖{m_d}); the values for m = 0 are given by (<ref>), but now X^θ_max(+) < X^θ_min(-);
– the potential X^θ_+(m; y, a) is decreasing in its whole definition range, X^θ_-(m; y, a) is piecewise increasing because of the divergence point m_d;
– if X ≤ X^θ_max(+) = 0 or X ≥ X^θ_min(-), then the same conclusions can be made as in the case y < 1/a^2 for X^θ_min(-) ≤ X ≤ X^θ_max(+);
– for X^θ_max(+) < X < X^θ_min(-) it holds that W(θ=π/2; X, q=0, y, a) = 0 again, otherwise W(θ; X, q=0, y, a) < 0, therefore photons can move radially in the equatorial plane;

* q > 0 (Figs. 2m, 2n)
– the function X^θ_+(m; q, y, a) is defined at ⟨m_l,1⟩, the function X^θ_-(m; q, y, a) at ⟨m_l,1⟩∖{m_d}, where m_l is given by (<ref>), with the difference that now X^θ_(±)(m_l) > 0; the graphs of both functions now form a single open curve, which intersects a line X = const. at a single point;
– in the interval m ∈ ⟨0; m_l⟩ the latitudinal motion is allowed for arbitrarily large or small values of the motion constant X;
– for arbitrary X ≠ -a there exists oscillatory motion through the equatorial plane as described in the case y < 1/a^2, q > 0;
– if X = X_(±)(m_l), the boundary cones are closest to the equatorial plane; they are given by the angles

θ = arccos√(m_l), θ = π - arccos√(m_l);

– the case X = -a corresponds to orbits above both poles crossing also the equatorial plane;
– there is no vortical motion and there are no PNC photons.

We finish this section by setting the allowed region in the (X,q)-plane delimiting such combinations of the motion constants for which the latitudinal motion is possible, in dependence on the spacetime parameters a, y. From the requirement that the function M(m; X, q, y, a) defined in the relation (<ref>) has to be non-negative somewhere in the interval m ∈ ⟨0;1⟩, one can derive that the allowed region of the (X-q)-plane is determined by the condition q ≥ q_min(X, y, a), where q_min(X, y, a) is defined using the functions

q_1(X) ≡ -X^2 and q_2(X; y, a) ≡ -I^-2[(1 - a^2y)X + 2a]^2

as follows (see Fig. 3 and Fig. 4):

*Case y < 1/a^2 (Fig. 3a)

q_min(X, y, a) ≡ 0 for X < 2a/(a^2y - 1) or X > 0;
q_min(X, y, a) ≡ q_2(X; y, a) for 2a/(a^2y - 1) ≤ X < -a;
q_min(X, y, a) ≡ q_1(X) for -a ≤ X ≤ 0;

*Case y = 1/a^2 (Fig. 3b)

q_min(X, y, a) ≡ -a^2 for X ≤ -a;
q_min(X, y, a) ≡ q_1(X) for -a ≤ X ≤ 0;
q_min(X, y, a) ≡ 0 for X ≥ 0;

*Case y > 1/a^2 (Fig. 3c)

q_min(X, y, a) ≡ q_2(X; y, a) for X < -a or X > 2a/(a^2y - 1);
q_min(X, y, a) ≡ q_1(X) for -a ≤ X ≤ 0;
q_min(X, y, a) ≡ 0 for 0 < X ≤ 2a/(a^2y - 1).

The case y < 1/a^2 qualitatively corresponds to both black hole and naked singularity spacetimes, the other two cases y = 1/a^2 and y > 1/a^2 describe the naked singularity spacetimes (see Fig. <ref> in the next section). The q = const. slices of the function q_min(X, y, a) give, for q < 0, the extremal values X^θ_min(-), X^θ_max(+) of the potentials X^θ_±(m; q, y, a) discussed in the text.

§ RADIAL MOTION

From the equation (<ref>) it is clear that the radial motion can exist if R(r) ≥ 0, where the equality gives the turning points of the radial motion. This condition can be rewritten in terms of an 'effective potential' X_± in the form

X ≤ X_- or X ≥ X_+ if a^2 - Δ_r > 0 (and X_- < X_+), or
X_+ ≤ X ≤ X_- if a^2 - Δ_r < 0,

where

X_±(r; q, y, a) ≡ [ar^2 ± √(Δ_r[r^4 + q(a^2 - Δ_r)])]/(a^2 - Δ_r).

We start the analysis by determining the reality region of the effective potential X_±.
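Before analysing the reality region, we note that the radial potentials can be sketched numerically in the same manner as the latitudinal ones; a minimal illustration (again Python/NumPy with sample parameters of our choice, not a definitive implementation):

```python
import numpy as np

def delta_r(r, a, y):
    """Delta_r in dimensionless units (M = 1)."""
    return (1.0 - y * r**2) * (r**2 + a * a) - 2.0 * r

def x_radial(r, q, a, y, sign=+1):
    """Radial effective potential X_(sign)(r; q, y, a); NaN where undefined."""
    dr = delta_r(r, a, y)
    disc = dr * (r**4 + q * (a * a - dr))
    with np.errstate(invalid="ignore", divide="ignore"):
        return (a * r**2 + sign * np.sqrt(disc)) / (a * a - dr)

# Turning points of the radial motion are the intersections X = X_(+-)(r):
r = np.linspace(1e-3, 60.0, 3000)
x_p, x_m = x_radial(r, 5.0, 1.0, 1e-3, +1), x_radial(r, 5.0, 1.0, 1e-3, -1)
```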
From the expression (<ref>) it follows that this function is well defined for

q ≤ q_r(r; y, a^2) if a^2 < Δ_r or Δ_r ≤ 0,
q ≥ q_r(r; y, a^2) if 0 ≤ Δ_r < a^2,

where we have introduced the reality function

q_r(r; y, a^2) ≡ r^4/(Δ_r - a^2) = r^3/[r - 2 - yr(r^2 + a^2)].

There are thus two different types of boundary points of the definition range of X_±(r; q, y, a). The points of the first type are, for given spacetime parameters, fixed at the borders of the static regions determined by the relation (<ref>), i.e., at the event horizons (r = r_h). At these horizons, if they exist, for arbitrary parameter q, the functions X_± have the common values

X_+(r_h) = X_-(r_h) = r_h^2/a

(cf. <cit.>). The points of the second type, which are also common points of X_±, depend on the value of the parameter q and are given by the equality q = q_r(r; y, a^2). If we denote them r = r_q, then it holds that

X_+(r_q) = X_-(r_q) = -aq/r_q^2.

The divergences of the function q_r(r; y, a^2), which coincide with the divergence points of X_+(r; q, y, a), are located at radii where Δ_r = a^2, which one can express by the relation

y = y_d(r; a^2) ≡ (r - 2)/(r(r^2 + a^2)).

The function X_-(r; q, y, a) cannot diverge at the radii r_d given by (<ref>), since using the alternative expression

X_±(r; q, y, a) ≡ (r^4 - qΔ_r)/(ar^2 ∓ √(Δ_r[r^4 + q(a^2 - Δ_r)]))

it can be shown that it has the finite value X_-(r_d; q, y, a) = (r_d^4 - qa^2)/(2ar_d^2). Another point where the functions X_±(r; q, y, a) diverge is r = 0, with X_±(r; q, y, a) → ±∞ as r → 0 for q > 0, but for q = 0 it holds that X_±(r; q=0, y, a) → 0. The character of the function y_d(r; a^2) has been discussed in <cit.>, therefore we only briefly repeat that the only zero of y_d(r; a^2) is at r = 2, and its extrema, which for a^2 > 0 must be maxima, yield the relation

a^2 = a^2_max(d)(r) ≡ r^2(r - 3).

The only zero of q_r(r; y, a^2) is at r = 0. For r → ∞ it holds that q_r(r; y, a) → -1/y. Its extrema are determined by

y = y_ex(r)(r; a^2) ≡ (r - 3)/(a^2r).

The divergence of y_ex(r)(r; a^2) is at r = 0 and y_ex(r)(r; a^2) → -∞ for r → 0. For r → ∞ it approaches the line 1/a^2 from below. The zero is at r = 3 and its extrema do not exist; the function is purely increasing.

Now we shall specify the local extrema of the effective potential, which determine the radii of the spherical photon orbits. They are given by the condition ∂X_±/∂r = 0, which implies

r^4 + a^2q = 0,

or

qa^2[2yr^3 + (ya^2 - 1)r + 1]^2 + r^3[y^2a^4r^3 + 2ya^2r^2(r+3) + r(r-3)^2 - 4a^2] = 0.

This can be rewritten in terms of the parameter q as

q = q_ex1(r; a^2) ≡ -r^4/a^2,

and

q = q_ex(r; y, a^2) ≡ -r^3/a^2 · [y^2a^4r^3 + 2ya^2r^2(r+3) + r(r-3)^2 - 4a^2]/[2yr^3 + (ya^2 - 1)r + 1]^2.

Note that the function q_ex1(r; a^2) is independent of the cosmological parameter. Both the functions q_ex1(r; a^2), q_ex(r; y, a^2) have common points determined by

y = (r^2 - 2r + a^2)/(r^2(r^2 + a^2)) = y_h(r; a^2),

and

y = 1/r^3,

i.e., they are located at the event horizons and at the so-called static radius r_s = 1/∛y, where the gravitational attraction is just compensated by the cosmological repulsion <cit.>. The function q_ex1(r; a^2) is negative valued and hence, as we shall see below, the extrema of the potentials X_± determined by this function lie in regions forbidden by the conditions for the reality of the latitudinal motion. The divergences of q_ex(r; y, a^2) are determined by the relation

y = y_d(ex)(r; a^2) ≡ (r - 1)/(r(2r^2 + a^2));

its asymptotic behaviour is given by q_ex(r; y, a^2) → -(I/2ay)^2 as r → ∞. The function y_d(ex)(r; a^2) diverges for r = 0 and y_d(ex)(r; a^2) → -∞ as r → 0. For r → ∞ it holds that y_d(ex)(r; a^2) → 0.
The zero of this function is at r = 1 and its local extrema are determined by the relation

a^2 = a^2_max(d(ex))(r) ≡ 2r^2(2r - 3),

where the label 'max' indicates that in the relevant range r ≥ 3/2, these extrema must be maxima. The zero point of the function q_ex(r; y, a^2) is at r = 0; further zeros determine the loci of the circular equatorial photon orbits. They are given by the relation

y = y_z(ex)±(r; a^2) ≡ [-r(r+3) ± 2√(r(3r^2 + a^2))]/(a^2r^2).

Since the function y_z(ex)-(r; a^2) < 0 for r > 0, it is irrelevant in our discussion. The function y_z(ex)+(r; a^2) is real valued for all r > 0 and it diverges at r = 0, with y_z(ex)+(r; a^2) → ∞ as r → 0. For r → ∞, we find y_z(ex)+(r; a^2) → -1/a^2. Its zeros represent the equatorial circular photon orbits in the Kerr spacetimes, being determined by the relation <cit.>

a^2 = a^2_z(z(ex)+)(r) ≡ r(r-3)^2/4.

The extrema of the function y_z(ex)+(r; a^2) are determined by the equation

a^2 = a^2_ex(z(ex)+)±(r) ≡ r(1 - 2r ± √(1+8r))/2 = a^2_ex(h)±(r),

hence the loci of the extrema of the functions y_z(ex)+(r; a^2) and y_h(r; a^2) coalesce. The function a^2_ex(z(ex)+)-(r) should be excluded from further analysis, since for r > 0 there is a^2_ex(z(ex)+)-(r) < 0. It remains to determine the loci of the local extrema of the function q_ex(r; y, a^2). Proceeding in the usual way, we find that their occurrence is governed by the relations

y = y_ex(ex)(r; a^2) ≡ (r - 3)/(a^2r) = y_ex(r)(r; a^2)

and

y = y_ex(ex)±(r; a^2) ≡ [3r^2√(r) - a^2√(r)(3 + 2r) ± √((4a^2 - 3r)(a^4 + 6a^2r^2 - 3r^4))]/(2a^4√(r^3)).

Using the relation (<ref>), one can show that the extrema of both the functions q_ex(r; y, a^2), q_r(r; y, a^2) coalesce. At this point let us add that other common points of the functions q_ex(r; y, a^2), q_r(r; y, a^2), as well as of the functions q_ex1(r; y, a^2), q_r(r; y, a^2), are also given by

y = y_h(r; a^2),

i.e., they are located at the event horizons. The reality conditions of the functions y_ex(ex)±(r; a^2) read

a^2 ≤ a^2_r(ex(ex)±)+(r) or a^2_r(ex(ex)±)(r) ≤ a^2 for 0 < r ≤ r̂,

and

a^2 ≤ a^2_r(ex(ex)±)(r) or a^2_r(ex(ex)±)+(r) ≤ a^2 for r̂ ≤ r,

where

a^2_r(ex(ex)±)+(r) ≡ (2√(3) - 3)r^2 and a^2_r(ex(ex)±)(r) ≡ (3/4)r.

The marginal radius r̂ has the value r̂ = (2√(3) + 3)/4 = 1.61603 = r_crit and it holds that

a^2_r(ex(ex)±)+(r̂) = a^2_r(ex(ex)±)(r̂) = a^2_crit = 1.21202,

where a^2_crit corresponds to the local maximum of the function a^2_ex(h)+(r) (see, e.g., <cit.> for details). The functions y_ex(ex)±(r; a^2) have the divergence point at r = 0 and y_ex(ex)±(r; a^2) → ±∞ for r → 0. For r → ∞ we find that y_ex(ex)+(r; a^2) → ∞ and y_ex(ex)-(r; a^2) → 0 from above. The zero point of y_ex(ex)(r; a^2) is at r = 3 and the function is increasing for all r > 0. Zeros of the functions y_ex(ex)±(r; a^2) are given by

a^2 = a^2_z(ex(ex)±)(r) ≡ r(r^2 - 3r + 3).

The condition for the stationary points, ∂y_ex(ex)±(r; a^2)/∂r = 0, leads to

a^4 + a^2r(2r - 1) + r^3(r - 3) = 0,

which can be solved with respect to a^2 with the same result as given by (<ref>).
However, substitution into the second derivative, concurrently with the requirement y_ex(ex)±(r; a^2) > 0, implies

∂^2 y_ex(ex)±/∂r^2 (r; a^2 = a^2_ex(h)+(r)) = 0;

therefore the function

a^2_inf(ex(ex)±)+(r) ≡ a^2_ex(h)+(r)

determines the loci of the inflection points of the functions y_ex(ex)±(r; a^2). If we compare the asymptotic behaviour of all the characteristic functions y(r; a^2), we find that the following inequality is satisfied as r → ∞:

1/a^2 > y_ex(ex)(r; a^2) > y_ex(ex)-(r; a^2) > y_h(r; a^2) > y_d(r; a^2) > y_d(ex)(r; a^2) > 0 > y_z(ex)(r; a^2) > -1/a^2.

In Fig. 5 we present all the characteristic functions related to the spin parameter, governing the effective potential on the lowest level:
* a^2_z(h)(r)
* a^2_ex(h)+(r) = a^2_ex(z(ex))+(r) = a^2_inf(ex(ex)±)+(r)
* a^2_max(d)(r)
* a^2_max(d(ex))(r)
* a^2_z(z(ex)+)(r)
* a^2_r(ex(ex)±)+(r)
* a^2_r(ex(ex)±)(r)
* a^2_z(ex(ex)±)(r).
These functions determine the behaviour of the characteristic functions related to the cosmological parameter, characterizing the functions q(r; y, a^2) and then the effective potentials on the higher level:
* y_h(r; a^2)
* y_d(r; a^2)
* y_ex(r)(r; a^2) = y_ex(ex)(r; a^2)
* y_d(ex)(r; a^2)
* y_z(ex)+(r; a^2)
* y_ex(ex)±(r; a^2).
From the significance of the individual characteristic functions a^2(r) depicted in Fig. 5, one can infer that there are just two values of a^2 of particular importance, leading to qualitatively different behaviour of the functions y(r; a^2):
* a^2 = 1 – the common local maximum of the functions a^2_z(h)(r) and a^2_z(z(ex)+)(r) at r = 1, which coincides with the inflection point of the function a^2_z(ex(ex)±)(r) and with the intersection with the curve a^2_ex(h)+(r);
* a^2 = a^2_crit = 1.21202 – the local maximum of a^2_ex(h)+(r), which is the intersection of the curves a^2_r(ex(ex)±)+(r), a^2_r(ex(ex)±)(r) and a^2_max(d(ex))(r).
The graphs of the characteristic functions y(r; a^2), depicted for some values of the spin parameter a representing the cases 0 < a^2 < 1, 1 < a^2 < a^2_crit and a^2_crit < a^2, are presented in Fig. 6. In general, the behaviour of the characteristic functions q_r(r; y, a^2) and q_ex(r; y, a^2) will be qualitatively different if, for the parameter a being fixed, we take the y-values from different intervals, which are limited by intersections and/or extrema of the characteristic functions y(r; a^2) demonstrated in Fig. 6. We therefore need to determine the curves y(a^2) that separate the (a^2-y)-plane into regions corresponding to this different behaviour of the characteristic functions q_r(r; y, a^2) and q_ex(r; y, a^2). The number of these functions is substantially lowered by the fact that all the local extrema are multiple intersections with other curves and coincide with other extrema. Moreover, as explained below, the behaviour of the characteristic functions q_r(r; y, a^2) and q_ex(r; y, a^2) at their negative values can be omitted as irrelevant for the character of the photon motion. The functions we need are the following:
* y_max(h)(a^2) = y_max(z(ex)+)(a^2) = y_inf(ex(ex)-)(a^2) = y_d(ex)h(z(ex)+)ex(ex)-(a^2)
* y_min(h)(a^2) = y_min(z(ex)+)(a^2) = y_inf(ex(ex)+)(a^2) = y_d(ex)h(z(ex)+)ex(ex)+(a^2)
* y_max(d)(a^2) = y_dd(ex)(z(ex)+)ex(ex)(a^2)
* y_d(z(ex)+)(a^2)
* y_max(d(ex))(a^2) = y_d(ex)ex(ex)-(a^2)
* y_ex(ex)(ex(ex)+)(a^2)
Here the composite labels denote affiliation to the intersection of the appropriate functions (it can be proved that there are no intersections of these functions other than those shown in Fig. 6).
These functions are projections of the extremal values or intersections of the characteristic functions y(r; a^2) into the (a^2-y)-plane and they are demonstrated in Fig. 7. The functions y_ex(h)(a^2) divide the parameter plane (a^2-y) into regions describing black hole and naked singularity spacetimes; the curve y_max(d)(a^2) divides spacetimes with the so-called divergent and restricted repulsive barrier of the photon motion. A detailed discussion of these functions has been performed, e.g., in <cit.> and will not be repeated here. The significance of the remaining functions can be understood from the depiction of the characteristic functions q_r(r; y, a^2), q_ex(r; y, a^2) in Fig. 8. They are given parametrically by appropriate functions a^2(r), y(r; a^2(r)), with r being the parameter:
* the functions y_max(h)(a^2) and y_min(h)(a^2) are both determined by a^2_ex(h)+(r) and y_h(r; a^2 = a^2_ex(h)+(r));
* y_max(d)(a^2) we obtain from a^2_max(d)(r) with y_d(r; a^2 = a^2_max(d)(r));
* the curve y_max(d(ex))(a^2) is given by the functions a^2_max(d(ex))(r) and y_d(ex)(r; a^2 = a^2_max(d(ex))(r));
* y_dz(ex)+(a^2) is determined by a^2_dz(ex)+(r) ≡ (r/8)(1 - 4r + √(40r + 1)) and y_d(r; a^2 = a^2_dz(ex)+(r)), where the function a^2_dz(ex)+(r) is a solution of y_d(r; a^2) = y_z(ex)+(r; a^2) with respect to the parameter a^2; all such functions are obtained in an analogous manner;
* y_ex(ex)ex(ex)+(a^2) is constructed from a^2_ex(ex)ex(ex)±(r) ≡ (r/2)(4r^2 - 12r + 3 ± √(16r^4 - 96r^3 + 156r^2 - 36r + 9)) and y_ex(ex)(r; a^2 = a^2_ex(ex)ex(ex)+(r)).
There exist other functions y(a^2), corresponding to intersections of the characteristic functions y(r; a^2), which are not displayed in Fig. 7. The reason is that all these functions y(a^2) lie under the curve y = 1/a^2, and thus we have to take into account the restriction q ≥ -a^2 (see Section 1). Therefore, the changes of the characteristic functions q_r(r; y, a^2), q_ex(r; y, a^2) in values under this limit can be omitted as irrelevant. Moreover, we can easily show that in the case q < 0, the restrictions (<ref>) imposed on the latitudinal motion yield stronger constraints on the value of X than those given by the relations (<ref>), (<ref>), (<ref>) conditioning the reality of the radial motion. Indeed, for any triad (q < 0, y, a^2) there is no intersection of the curves X = X_±(r; q, y, a) with the lines X^θ_min(-), X^θ_max(+), where X^θ_min(-), X^θ_max(+) are the extrema of the functions X^θ_±(m; q, y, a) introduced in Sec. 1 (see Fig. 9e-g). To verify this, it is convenient to regard the curves X = X_±(r; q, y, a) as q = const. slices of the surface q = q_max(r; X, y, a), where

q ≤ q_max(r; X, y, a) ≡ (r^2 - aX)^2/Δ_r - X^2

is an alternative expression of the reality condition R(r; X, q, y, a) ≥ 0, and search instead for intersections of the surfaces q = q_max(r; X, y, a) and q = q_min(X, y, a), defined by the relations (<ref>)-(<ref>). We therefore solve two equations – q_max(r; X, y, a^2) = q_1(X), with the result

X = r^2/a,

and q_max(r; X, y, a) = q_2(X; y, a), which gives

X_1,2 = [I^2r^2 + 2(a^2y - 1)Δ_r ∓ 2I√(-2rΔ_r)]/[a(I^2 - 4yΔ_r)].

The solution (<ref>) yields X > 0, which, however, does not apply to the case q < 0 for y < 1/a^2. Moreover, this solution represents the touching points of the surface q_r(r; X, y, a^2) with the parabolic surface q_1(X) at X = +√(-q), and hence can be omitted even in the case y ≥ 1/a^2, since these values lie in the region forbidden by the relations (<ref>)-(<ref>).
The solutions (<ref>) are evidently irrelevant, since in the stationary regions Δ_r > 0 they are imaginary. The above analysis shows that in the case q < 0 the 'potentials' X_±(r; q, y, a) have values in regions forbidden by the reality conditions of the latitudinal motion and hence play no role at all. The limits on the impact parameter X of photons with q < 0 are thus given by the relation (<ref>); photons satisfying the relation (<ref>) thus have no turning points of the radial motion. In the rest of this treatise we can therefore focus on the behaviour of the characteristic functions for q ≥ 0.

In Fig. 8 we present all possible variants of the behaviour of the characteristic functions q(r; y, a^2). These variants involve the cases:

I: y ≤ y_dz(ex)+(a^2) for a^2 ≤ 0.5;
II: y_dz(ex)+(a^2) ≤ y ≤ y_max(d)(a^2) for a^2 ≤ 0.5, or y ≤ y_max(d)(a^2) for 0.5 ≤ a^2 ≤ 1, or y_min(h)(a^2) ≤ y ≤ y_max(d)(a^2) for 1 ≤ a^2 ≤ 1.08316;
III: y_max(d)(a^2) ≤ y ≤ y_max(h)(a^2) for a^2 ≤ 1.08316, or y_min(h)(a^2) ≤ y ≤ y_max(h)(a^2) for 1.08316 ≤ a^2 ≤ 1.21202 = a^2_crit;
IVa: y ≤ y_min(h)(a^2) for 1 ≤ a^2 ≤ 1.08316, or y ≤ y_max(d)(a^2) for 1.08316 ≤ a^2 ≤ 1.28282, or y ≤ y_(ex(ex)+)(ex(ex)-)2(a^2) for 1.28282 ≤ a^2 ≤ 6√(3) - 9 = 1.3923;
IVb: y_(ex(ex)+)(ex(ex)-)2(a^2) ≤ y ≤ y_max(d)(a^2) for 1.28282 ≤ a^2 ≤ 1.3923, or y ≤ y_max(d)(a^2) for 1.3923 ≤ a^2 ≤ 9, or y_ex(ex)ex(ex)+(a^2) ≤ y ≤ y_max(d)(a^2) for a^2 ≥ 9;
V: y ≤ y_ex(ex)ex(ex)+(a^2) for a^2 ≥ 9;
VIa: y_max(d)(a^2) ≤ y ≤ y_min(h)(a^2) for 1.08316 ≤ a^2 ≤ 1.21202, or y_max(d)(a^2) ≤ y ≤ y_(ex(ex)+)(ex(ex)-)2(a^2) for 1.21202 ≤ a^2 ≤ 1.28282;
VIb: y_(ex(ex)+)(ex(ex)-)2(a^2) ≤ y ≤ y_max(d(ex))(a^2) for 1.21202 ≤ a^2 ≤ 1.28282, or y_max(d)(a^2) ≤ y ≤ y_max(d(ex))(a^2) for a^2 ≥ 1.28282;
VII: y_max(h)(a^2) ≤ y ≤ 1/a^2 for a^2 ≤ 1.21202, or y_max(d(ex))(a^2) ≤ y ≤ 1/a^2 for a^2 ≥ 1.21202;
VIII: y ≥ 1/a^2.

Now it remains to assign to each region of the (a^2-y)-plane the functions q(y, a^2), which by themselves represent marginal values of the parameter q corresponding to some qualitative shift in the behaviour of the potentials X_±(r; q, y, a). In the regions I, II, which describe black hole spacetimes with the divergent repulsive barrier, we have to compare the two local maxima q_max(ex+)(y, a^2), located under the inner horizon, and q_max(ex)(y, a^2) = q_min(r)(y, a^2), located between the outer and cosmological horizons, respectively (see Figs. 8a, b). The function q_max(ex+)(y, a^2) is given parametrically by the functions y_ex(ex)+(r; a^2) and q_ex(r; y = y_ex(ex)+(r; a^2), a^2), with r being the parameter; similarly, q_max(ex)(y, a^2) is given by y_ex(ex)(r; a^2) and q_ex(r; y = y_ex(ex)(r; a^2), a^2). The extrema function q_max(ex)(y, a^2) diverges at the curve y_max(d)(a^2), which forms the boundary between the regions II-III and IV-VI, i.e., q_max(ex)(y = y_max(d)(a^2), a^2) → +∞ (cf. Figs. 8b, 8c and 8d, 8f). In the region III, corresponding to black hole spacetimes with the restricted repulsive barrier, only the local maximum q_max(ex+)(y, a^2) located under the inner horizon remains. As can be seen from the behaviour of the characteristic functions in Fig. 6b, for y → y_min(h)(a^2) from above, the 'inner' local maximum of q_ex(r; y, a^2), determined by y_ex(ex)+(r; a^2), approaches from the left its divergence point given by y = y_d(ex)(r; a^2), where q_ex → -∞ (Fig. 8b), so that q_ex(r; y = y_min(h)(a^2), a^2) becomes continuous. For y ≤ y_min(h)(a^2), the divergence of the function q_ex(r; y, a^2) appears again, with q_ex → +∞, and a local minimum has formed on the right (cf. Figs. 8b, 8d).
Hence the curve y_min(h)(a^2) forms a boundary on which the local maxima q_max(ex+)(y, a^2) convert into local minima. We denote them q_min(ex±)(y, a^2), since, as follows from the relations (<ref>)-(<ref>), for 1.125 ≤ a^2 ≤ a^2_crit and

y ≤ y_(ex(ex)+)(ex(ex)-)1(a^2) ≡ (8a^2 - 9)/(8a^4),

or a^2_crit ≤ a^2 ≤ 1.3923 and

y ≤ y_(ex(ex)+)(ex(ex)-)2(a^2) ≡ √(3(2√(3) - 3)/a^6) - 1/a^2,

they are given by y_ex(ex)-(r; a^2). The function y_(ex(ex)+)(ex(ex)-)1(a^2) is given parametrically by a^2_r(ex(ex)±)(r) and, e.g., by y_ex(ex)-(r; a^2 = a^2_r(ex(ex)±)(r)), and the function y_(ex(ex)+)(ex(ex)-)2(a^2) by a^2_r(ex(ex)±)+(r) and y_ex(ex)-(r; a^2 = a^2_r(ex(ex)±)+(r)). The analytical expressions in (<ref>), (<ref>) can then be derived by eliminating the radius r. Both these functions have their relevant parts entirely in the regions IV, VI. The function y_(ex(ex)+)(ex(ex)-)2(a^2) corresponds to the local minima of the potential X_-(r; q, y, a) reaching the value X = -a, i.e., ℓ = 0. The photons corresponding to these minima, and having the appropriate motion constant q, persist on 'spherical' orbits with r = const, crossing the spacetime rotation axis alternately above both poles. In what follows we shall call them 'polar' spherical orbits – in the following section we shall see that such polar spherical orbits form a border surface between the prograde and retrograde spherical photon orbits, as related to the locally non-rotating observers. The function y_(ex(ex)+)(ex(ex)-)2(a^2) has therefore an important meaning, since it represents a boundary between regions of qualitatively different KdS spacetimes in the (a^2-y)-plane. From this point of view, the function y_(ex(ex)+)(ex(ex)-)2(a^2) creates another qualitative shift in the parameter plane (a^2-y) with regard to the character of the photon motion, however, no qualitative shift in the mathematical properties of the characteristic functions q_r(r; y, a^2) and q_ex(r; y, a^2) at their relevant values q ≥ 0. The parts of the (a^2-y)-plane corresponding to different behaviour of the characteristic functions are in Fig. 7 distinguished by Roman numerals; the curve y_(ex(ex)+)(ex(ex)-)2(a^2) then induces an additional division a/b.

Further we have to relate the minima q_min(ex±)(y, a^2) to the maxima q_max(ex)(y, a^2) = q_min(r)(y, a^2). In the region V, the minima of q_ex(r; y, a^2) coalesce with the minima of q_r(r; y, a^2) (Fig. 8e). We therefore have to compare the minima function q_min(ex)(y, a^2) = q_min(r)(y, a^2), determined by y_ex(ex)(r; a^2) and, e.g., q_ex(r; y = y_ex(ex)(r; a^2), a^2), with the maxima function q_max(ex+)(y, a^2) parametrized by y_ex(ex)+(r; a^2) and q_ex(r; y = y_ex(ex)+(r; a^2), a^2). The boundary curve y_ex(ex)ex(ex)+(a^2) then represents such combinations of the parameters a^2, y for which the local extrema of q_ex(r; y, a^2) have coalesced into an inflection point. For parameters from the region VI, corresponding to naked singularity spacetimes with the restricted repulsive barrier (as well as from the remaining regions VII, VIII), the function q_ex(r; y, a^2) has one local minimum (Fig. 8f), and we therefore construct a function q_min(ex±)(y, a^2) determined by the functions y_ex(ex)±(r; a^2) and q_ex(r; y = y_ex(ex)±(r; a^2), a^2), where the minus sign has to be chosen for 1.17007 ≤ a^2 ≤ a^2_crit and y_max(d)(a^2) ≤ y ≤ y_(ex(ex)+)(ex(ex)-)1(a^2), or a^2_crit ≤ a^2 ≤ 1.2828 and y_max(d)(a^2) ≤ y ≤ y_(ex(ex)+)(ex(ex)-)2(a^2) (see Fig. 7). For y → y_max(d(ex))(a^2) and a^2 ≥ a^2_crit there is q_min(ex+)(y, a^2) → +∞, and for y > y_max(d(ex))(a^2), i.e.
in the region VII, it converts into a local maximum (cf. Figs. 8f, 8g). The transition into the region VII from the region III can be inferred from a comparison of Fig. 8c with Fig. 8g. Therefore, in the region VII we have to follow the values of the function q_max(ex+)(y, a^2) determined by the functions y_ex(ex)+(r; a^2) and q_ex(r; y = y_ex(ex)+(r; a^2), a^2). The functions q(y, a^2) are demonstrated in Fig. 9. With the knowledge of the behaviour of the extremal values q_min/max(ex)(y, a^2) in each region of the (a^2-y)-plane, we can finally construct all qualitatively different types of the behaviour of the effective potentials X_±(r; q, y, a). They are presented in Figure 10 for appropriately chosen representative combinations (q, y, a^2).

§ SPHERICAL PHOTON ORBITS AND CLASSIFICATION OF THE KDS SPACETIMES DUE TO PROPERTIES OF THE PHOTON MOTION

In the following section we demonstrate, by using the behaviour of the effective potentials X_±(r; q, y, a), that the null geodesics create qualitatively different structures in the various cases of KdS spacetimes with the spacetime parameters chosen from different parts of the (a^2-y)-plane labelled by the numerals I-VIII. Hence the regions of the spacetime parameter space carrying these labels can be considered as representatives of the classification of the KdS spacetimes due to the photon motion (null geodesics). Similarly as in <cit.>, there are three (four) criteria used – the main criterion for the classification is the existence (number) of the event horizons. The other differentiating factors follow from the nature of the photon motion. First, there is some kind of repulsive barrier preventing light from reaching the ring singularity, which is always created in its vicinity for photons with q > 0. However, a similar barrier can emerge between the outer black hole horizon and the cosmological horizon in the black hole and naked singularity spacetimes, repelling photons towards one of these horizons. In the naked singularity spacetimes, the occurrence of an additional barrier, which reflects photons towards the ring singularity, leads to the occurrence of the phenomenon of bound photon orbits. Such bound photon orbits are not present in the case of the black hole spacetimes. We take the presence and character of this barrier as another criterion in the following classification. The other aspect that authorizes us to make such a distinction between the KdS spacetimes will be the existence and character of the spherical photon orbits. In the KdS naked singularity spacetimes the bound orbits are concentrated around the stable spherical photon orbits.

§.§ Spherical photon orbits

The spherical photon orbits are determined by the conditions R(r) = 0 and dR/dr = 0 that have to be solved simultaneously. The physically acceptable solution is governed by the relations for the photon motion constants X and q that are expressed as functions of the radius r and the spacetime parameters a, y, and take the form

X = X_sph(r) ≡ r[(1 - a^2y)r^2 - 3r + 2a^2]/{a[yr(2r^2 + a^2) - r + 1]},
q = q_sph(r) ≡ q_ex(r; y, a^2),

where the function q_ex(r; y, a^2) is defined by the relation (<ref>). These solutions governing the spherical photon orbits are allowed in the interval of radii limited by the equatorial photon circular orbits. The stability of the spherical photon orbits relative to radial perturbations is determined by the sign of the expression

d^2R/dr^2 = 12r^2[1 + y(X^2 + q)] + 2(X^2 + q)(a^2y - 1) - 4aX,

evaluated at the appropriate radii.
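The relations above are straightforward to evaluate; the following minimal sketch (Python, an illustration under the formulas just stated rather than the original computation) returns the impact parameters of the spherical orbit at a radius r together with the stability indicator d^2R/dr^2, negative values marking stable orbits (R has a local maximum there) and positive values unstable ones:

```python
import numpy as np

def x_sph(r, a, y):
    """Impact parameter X of the spherical photon orbit at radius r."""
    a2 = a * a
    return r * ((1.0 - a2 * y) * r**2 - 3.0 * r + 2.0 * a2) \
        / (a * (y * r * (2.0 * r**2 + a2) - r + 1.0))

def q_sph(r, a, y):
    """Motion constant q = q_ex(r; y, a^2) of the spherical photon orbit."""
    a2 = a * a
    num = y**2 * a2**2 * r**3 + 2.0 * y * a2 * r**2 * (r + 3.0) \
        + r * (r - 3.0)**2 - 4.0 * a2
    den = (2.0 * y * r**3 + (y * a2 - 1.0) * r + 1.0)**2
    return -(r**3 / a2) * num / den

def d2R_dr2(r, a, y):
    """Stability indicator of the spherical orbit at r: < 0 stable, > 0 unstable."""
    X, q = x_sph(r, a, y), q_sph(r, a, y)
    s = X * X + q
    return 12.0 * r**2 * (1.0 + y * s) + 2.0 * s * (a * a * y - 1.0) - 4.0 * a * X

# Sample black hole parameters; radii chosen inside the photon region:
for r0 in (2.0, 3.5):
    print(r0, x_sph(r0, 0.5, 1e-5), q_sph(r0, 0.5, 1e-5), d2R_dr2(r0, 0.5, 1e-5))
```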
It can be shown that local maxima of the potential X_+ and local minima of X_- correspond to unstable orbits in the black hole spacetimes. However, local minima of X_+ and local maxima of X_- represent stable orbits, which occur in the naked singularity spacetimes. It is useful to relate the parameters (wave vector components) of photons orbiting along the spherical null geodesics to the locally non-rotating frames (LNRFs) that are the most convenient frames for the description of physical processes in the Kerr and KdS spacetimes <cit.>. The LNRF tetrad of differential one-forms is given by the relations

ω^(t) = √(Δ_rΔ_θρ^2/(I^2 A)) dt,
ω^(r) = √(ρ^2/Δ_r) dr,
ω^(θ) = √(ρ^2/Δ_θ) dθ,
ω^(ϕ) = √(A sin^2θ/(I^2ρ^2)) (dϕ - Ω_LNRF dt),

the corresponding tetrad of dual vectors reads

e_(t) = √(I^2A/(Δ_rΔ_θρ^2)) (∂/∂t + Ω_LNRF ∂/∂ϕ),
e_(r) = √(Δ_r/ρ^2) ∂/∂r,
e_(θ) = √(Δ_θ/ρ^2) ∂/∂θ,
e_(ϕ) = √(I^2ρ^2/(A sin^2θ)) ∂/∂ϕ.

The wave-vector components related to the LNRFs are then determined by the relations

k^(a) = ω^(a)_μ k^μ, k_(b) = k_ν e_(b)^ν,

hence

k^(t) = IE√(A/(Δ_rΔ_θρ^2)) (1 - Ω_LNRF(X + a)),
k^(r) = ±IE/√(Δ_rρ^2) × √((r^2 - aX)^2 - Δ_r(X^2 + q)),
k^(ϕ) = IE√(ρ^2/(A sin^2θ)) (X + a),
k^(θ) = ±IE/√(Δ_θρ^2) × √((X^2 + q)Δ_θ - (a cos^2θ + X)^2/sin^2θ),

where

A = (r^2 + a^2)^2 - a^2Δ_r sin^2θ,

and

Ω_LNRF = a[(r^2 + a^2)Δ_θ - Δ_r]/A

is the angular velocity of the LNRFs related to distant static observers. In order to determine the orientation of the spherical orbits, we have chosen as the azimuthal direction indicator the sign of the ratio k^(ϕ)/k^(t). If we define the directional angle Ψ in such a way that Ψ = 0 for motion in the direction of the latitudinal tetrad vector e_(θ), while Ψ = π/2 for motion in the direction of the azimuthal tetrad vector e_(ϕ), then k^(ϕ)/k^(t) = sinΨ and we find the relation

sinΨ = [ρ^2√(Δ_rΔ_θ)/(A sinθ)] (X + a)/(1 - Ω_LNRF(X + a)).

If the sign of sinΨ is positive, we call the spherical orbit prograde; if it is negative, we call the spherical orbit retrograde. The special case of limiting spherical orbits corresponds to the equatorial circular orbits that are again co-rotating (prograde), respectively counter-rotating (retrograde). It can be shown that the sign of the directional angle remains fixed at any latitude of any particular spherical orbit, i.e., the locally non-rotating observers see the photon motion in a fixed azimuthal direction. [However, we have to note that, similarly to the case of Kerr black holes <cit.>, the sign of the variation of the azimuthal coordinate can be changed at some latitude, if related to distant observers.] Since the functions A, Ω_LNRF are positive <cit.>, it is clear that all photons with X < -a (ℓ < 0) are retrograde. However, photons with X > 1/Ω_LNRF - a (ℓ > 1/Ω_LNRF > 0) can be retrograde as well. Considering in such a case the relation for the LNRF tetrad component (<ref>), we can see that in order to keep for k^(t) the standard physical meaning, i.e., k^(t) > 0, we have to put E < 0. [Keeping E > 0 means k^(t) < 0, i.e., a photon in the negative-root state with time evolution directed to the past – for details see <cit.>.] In order to find the conditions under which such a situation occurs, it is convenient to express from the alternate relation

sinΨ = [ρ^2√(Δ_rΔ_θ)/(A sinθ)] ℓ/(1 - Ω_LNRFℓ)

the impact parameter ℓ in the form

ℓ = A sinΨ sinθ/(√(Δ_rΔ_θ)ρ^2 + AΩ_LNRF sinΨ sinθ)

and reverse the problem by searching for the conditions under which there is ℓ > 0. Such a relation is evidently fulfilled if sinΨ > 0, i.e., the positive impact parameters pertain to prograde photons.
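For orientation, a minimal sketch (Python; the sample values of r, θ and X below are ours, not tied to a particular orbit) evaluating sinΨ from the relation above; its sign distinguishes prograde from retrograde motion in the LNRF:

```python
import numpy as np

def sin_psi(r, theta, X, a, y):
    """Directional angle sin(Psi) of a photon in the LNRF."""
    a2, c2 = a * a, np.cos(theta)**2
    rho2 = r**2 + a2 * c2
    d_r = (1.0 - y * r**2) * (r**2 + a2) - 2.0 * r          # Delta_r
    d_th = 1.0 + a2 * y * c2                                # Delta_theta
    A = (r**2 + a2)**2 - a2 * d_r * np.sin(theta)**2
    omega = a * ((r**2 + a2) * d_th - d_r) / A              # Omega_LNRF
    return rho2 * np.sqrt(d_r * d_th) / (A * np.sin(theta)) \
        * (X + a) / (1.0 - omega * (X + a))

# A photon with X < -a (l < 0) is retrograde: sin(Psi) < 0
print(sin_psi(4.0, np.pi / 2, -2.0, 1.0, 1e-4))
```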
However, there is another possibility: to have sinΨ<0 together with -1 ≤ sinΨ < -ρ^2√(Δ_rΔ_θ)/(AΩ sinθ), from which it follows that AΩ sinθ - ρ^2√(Δ_rΔ_θ) ≥ 0. However, the last inequality can be written in the form A I^2 ρ^2 g_tt ≥ 0, which implies g_tt ≥ 0. Hence, such a situation can occur only in the ergosphere. Of course, the impact parameter of such photons must fulfil the condition (<ref>). The function 1/Ω - a = [r^2(r^2+a^2)Δ_θ+a^2Δ_r cos^2θ]/a[(r^2+a^2)Δ_θ-Δ_r] has, for Δ_r=0, common points with the potentials X_± given by the relation (<ref>). There are no other intersections with the potentials; hence, the reality condition of the radial motion, together with (<ref>), implies X>X_+>0. Therefore, the motion of photons with negative energy E, which appear to be retrograde in the LNRFs, is governed by the effective potentials X_+(r; q, a, y) with positive values. Using the properties of the effective potentials X_±(r; q, a, y), we can identify the radii r=r_0 of the spherical photon orbits as loci of the local extrema of the effective potentials and determine their stability and orientation as described above. At each allowed radius r_0, located between the radii of the equatorial photon circular orbits, we can assign corresponding limits θ_min, θ_max on the latitudinal motion by solving the equation M(m; X_sph(r_0), q_sph(r_0), y, a)=0, which, due to the results of Section 3, has one real positive root m_0, since q_sph(r_0) ≥ 0 (q_sph(r_0)=0 for r_0=r_ph±, i.e., the equatorial circular co-rotating or counter-rotating photon orbit). The marginal latitudes (turning points of the latitudinal motion) then read θ_min=arccos√(m_0), θ_max=π-arccos√(m_0); for details see the discussion of the latitudinal motion in Section 3. We can thus easily determine for a spherical orbit at an allowed radius r_0 the impact parameters of the orbit, the extension of the latitudinal motion, and the orientation of the azimuthal motion. §.§ Classification In the following classification we introduce ten classes of the KdS spacetimes and demonstrate properties of the photon motion using the spherical photon orbits that serve as a crucial characteristic for the classification. We give the loci of the spherical photon orbits and their extension in latitude, stability against radial perturbations, and orientation of their azimuthal motion. The classification is represented by a family of characteristic figures corresponding to the separated classes of the KdS spacetimes. For easy interpretation of the family of the figures representing the classification, we introduce an auxiliary Fig. <ref> commented with detailed explanatory notes. In order to fully and clearly characterize the KdS spacetimes and their horizon and ergosphere structure, and to demonstrate the spheroidal character of the applied coordinate system, we now use the so-called Kerr–Schild coordinates x, y, z that are connected to the Boyer-Lindquist coordinates r, θ by the relations x^2 + y^2 = (r^2 + a^2)sin^2θ, z^2 = r^2cos^2θ. In the figures we, of course, use the meridional sections y=0. The characteristics of the classes of the KdS spacetimes according to the photon orbits are presented as follows. Class I: Black hole spacetimes with the divergent repulsive barrier of the radial photon motion, having one equatorial counter-rotating circular unstable orbit with negative energy located under the inner black hole horizon (0<r<r_-), which limits the range of the spherical photon orbits with negative energy.
There exist stable orbits, corresponding to local minima X_min(+) of the effective potential X_+ at 0<r<r_max(ex)1, and unstable orbits, corresponding to local maxima X_max(+) of X_+ at r_max(ex)1<r<r_z(ex)1 for 0<q<q_max(ex+)(y, a^2) (Fig. <ref>a). Hereafter, we denote as r_min/max(ex) the local extrema, and as r_z(ex) the zero points, of the function q_ex(r; y, a^2). Such a structure is present under the inner horizon of any KdS black hole spacetime. Outside the ergosphere, one unstable co-rotating equatorial circular orbit, located at r=r_ph+=r_z(ex)2, and the polar spherical orbit with r=r_pol, r_ph+<r_pol, limit the range of unstable prograde spherical orbits given by the local minima X_min(-) of the effective potential X_-, for which X_min(-)>-a. The radius of the polar spherical orbit is found by solving X_-(r_pol; q_ex(r_pol))=-a. The counter-rotating equatorial circular orbit at r=r_ph-=r_z(ex)3 gives the limit of the region of unstable retrograde spherical orbits, given by the local minima X_min(-)<-a, and maxima X_max(+)<-a for 0<q<q_max(ex)(y, a^2), such that r_pol<r_ph-. Class II: Black hole spacetimes with the same features as in class I, but now the ergosphere enters the region of the spherical photon orbits (Fig. <ref>b). No spherical orbit is fully immersed in the ergosphere and photons at all the spherical orbits have positive energy. The presence of the ergosphere in the region of the spherical photon orbits influences the character of the light escape cones <cit.>. Class III: Black hole spacetimes with the restricted repulsive barrier of the radial photon motion. The ergosphere spreads over all radii. The prograde spherical orbits are given by the local minima X_min(-)>-a at r_ph+<r<r_pol, while the retrograde spherical orbits with E>0 are given by the minima X_min(-)<-a at r_pol<r<r_d(ex). The spherical orbits given by the local maxima X_max(+) (see Figs 10k-n) at r_d(ex)<r<r_ph-, where r_d(ex) denotes the divergence point of q_ex(r; y, a^2) (see Fig. 8c), are fully immersed in the ergosphere. Such areas are drawn in Figs <ref> in green and the spheres with r=r_d(ex) as full/dashed green ellipses. Photons in such regions have E<0. Class IVa: Naked singularity spacetimes with the divergent repulsive barrier of the photon motion. At radii 0<r<r_d(ex) (Fig. 8d can be used for illustration), there are local minima of the potential X_+ (see Figs 10p-r for illustration) corresponding to the stable retrograde spherical orbits with negative energy (E<0) (Fig. <ref>d). The stable retrograde orbits with positive energy corresponding to the local maxima of X_- are at r_d(ex)<r<r_pol1. These maxima exceed the value X_max(-)=-a at r_pol1<r<r_min(ex), where they yield stable prograde spherical orbits. At radii r_min(ex)<r<r_pol2, there are the local minima of X_- with values X_min(-)>-a; these radii are thus occupied by the unstable prograde orbits. The local minima of X_-, and the local maxima of X_+ at r>r_pol2, correspond to the unstable retrograde orbits. There are thus two polar spherical orbits enclosing the region of prograde orbits – the inner one at the radius r=r_pol1 being stable, the outer one at the radius r=r_pol2 being unstable. Class IVb: Naked singularity spacetimes with the same features as in class IVa, but the two polar orbits have coalesced; therefore, there are no prograde spherical orbits (Fig. <ref>e). Class V: Naked singularity spacetimes having the structure of the spherical orbits corresponding to the previous case (Fig.
<ref>f), but with a small region of bound orbits for photons with motion constants q_min(ex)(y, a^2)<q<q_max(ex+)(y, a^2) and X between the appropriate local extrema of X_+ (cf. Figs 10q, u), which is not present in the other cases. Class VIa: Naked singularity spacetimes with the restricted repulsive barrier of the radial photon motion. For 0<r<r_d(ex)1 (the function q_ex(r; y, a^2) has two divergence points r_d(ex)1, r_d(ex)2 – see Fig. 8f for a preview), the minima of X_+ correspond to stable retrograde orbits with E<0; for r_d(ex)1<r<r_pol1, there are the local maxima of X_- with values X_max(-)<-a giving retrograde orbits with E>0. The local minima of X_- at r_pol1<r<r_min(ex) give the stable prograde orbits. At the radius r=r_pol1 the stable polar orbit is located. For r_min(ex)<r<r_pol2, the function X_- has minima with values X_min(-)>-a giving the unstable prograde orbits. For r_pol2<r<r_d(ex)2, they correspond to the unstable retrograde orbits. At the radius r=r_pol2, the unstable polar orbit exists. The local maxima of the function X_+ at r_d(ex)2<r<r_ph- correspond to the retrograde unstable spherical orbits with E<0. Class VIb: The structure of the spherical orbits corresponds to class VIa, with the exception that the local extrema of the potential X_- have values X<-a, implying that there are neither polar spherical orbits nor prograde spherical orbits (Fig. <ref>h). Class VII: Naked singularity spacetimes with the restricted repulsive barrier of the radial photon motion having stable retrograde spherical orbits at 0<r<r_max(ex) corresponding to local minima of X_+ (Fig. 10α), and unstable retrograde spherical orbits at r_max(ex)<r<r_ph- corresponding to local maxima of X_+. All these spherical orbits, including the counter-rotating equatorial circular orbit at r=r_ph-, correspond to photons with E<0. Class VIII: A special class of the naked singularity spacetimes demonstrating the same features of the radial motion of photons with q≥ 0 as the class VII, but differing from all previous cases by the existence of null geodesics for arbitrary q<0. The allowed values of the impact parameter X are for q<0 confined to the intervals X<X^θ_max(+)<0 or X>X^θ_min(-)>0 (see Section 3). The potentials governing the radial photon motion are fully immersed in the forbidden region (Figs 10(γ), (δ)); thus, in the radial direction, photons with such parameters move freely in the whole range between the ring singularity and the cosmological horizon. § CONCLUSIONS We can summarize our results by the following concluding remarks. *In any kind of the black hole spacetimes, there are no radially bound null geodesics in the stationary region, i.e., the trajectory of a photon has at most one turning point in the radial direction between the outer and cosmological horizon, or the photons can move freely between the outer black hole and the cosmological horizons. However, such bound photon orbits exist in each naked singularity spacetime for photons with parameters q>0 and X chosen appropriately. *No photons with q>0 can reach the ring singularity at r=0 in any of the KdS spacetimes. *In the KdS spacetimes of classes I-VII, i.e., with the spacetime parameters satisfying the condition y<1/a^2, there is a lower limit q=-a^2 of the parameter q<0, for which the photon motion is allowed. The range of the allowed values of the impact parameter X is then an interval given by the relations (<ref>) - (<ref>).
Photons with such tuned parameters have no turning point in the radial direction, since the effective potential lies entirely in the forbidden region (Figs 10(e)-(g)). Further, by the results of Section 3, only such photons execute the vortical motion, or their trajectory lies completely on the cones of θ = constant. We can therefore reject the possibility of the existence of vortical photon motion of constant radius, or off-equatorial circular photon orbits. *In the KdS spacetimes of class VIII (y>1/a^2), the photon motion is allowed for any q<0. The permissible values of the parameter X are then two disjoint unbounded intervals determined by the relation (<ref>). In the extreme case y=1/a^2, again it must be q≥ -a^2, and for negative q the parameter X can take only values less than a certain negative value given by (<ref>). The consequences for the photon motion are then the same as in the previous note (Figs 10(γ)-(δ)). *In the KdS spacetimes with the divergent repulsive barrier of the radial photon motion, there exists a critical value q_max(ex)(y, a^2), above which this barrier becomes impermeable between the outer black hole horizon and the cosmological horizon, or, in the naked singularity spacetimes, between the ring singularity and the cosmological horizon, for photons with any impact parameter X. In the spacetimes with the restricted repulsive barrier of the radial photon motion, the height of this barrier slowly grows with increasing parameter q, but stays finite for any q>0 (Figs 10(n), 10(y)). *In the KdS spacetimes of classes I-III, IVa, VIa, there exist spherical photon orbits, which can be either prograde or retrograde as seen by the family of locally non-rotating observers. Additionally, each of the two types can be stable or unstable with respect to radial perturbations. The regions of spherical orbits of different orientations are separated by the so-called polar spherical orbit, at which photons cross the spacetime rotation axis alternately above both poles. In the naked singularity spacetimes of classes IVa, VIa, there are two polar spherical orbits, the inner one being stable, the outer one being unstable. *In the spacetimes of classes IVb, V, VIb, VII, VIII there are no prograde or polar spherical orbits. *In each class of the KdS spacetimes, there exists a region where the effective potential X_+ has positive values. Photons with impact parameter X exceeding these values appear to move in the retrograde direction as seen in the LNRFs. This region must be located inside the ergosphere, and photons with such impact parameters must have negative energy, E<0. In the black hole spacetimes with the divergent barrier of the radial photon motion, the ergosphere has two parts above the black hole outer event horizon: the inner one, which is limited to the outer vicinity of the outer event horizon, and the outer one, limited to the inner vicinity of the cosmological horizon. In the black hole spacetimes with the restricted repulsive barrier of the radial photon motion, the two regions of the ergosphere merge in the equatorial plane, and the ergosphere spreads over all radii except for a certain region in the vicinity of the rotation axis. *In the LNRFs, trajectories of photons moving along any spherical orbit have no turning point of the azimuthal motion. We have thus demonstrated a variety of very extraordinary phenomena related to the photon motion in the KdS spacetimes, of both black hole and naked singularity types. Especially relevant effects are found in the case of spherical photon orbits that can be directly related to the observational phenomena.
It is worth noting that we could expect other interesting phenomena related to the charged Kerr-Newman or Kerr-Newman-de Sitter naked singularity spacetimes (with both the standard electric charge and the tidal charge of the braneworld models), especially in the case of the so-called mining Kerr-Newman spacetimes <cit.>, containing a special type of equatorial stable photon orbits.§ ACKNOWLEDGMENTS Z.S. acknowledges the Albert Einstein Centre for Gravitation and Astrophysics supported by the Czech Science Foundation Grant No. 14-37086G. D.Ch. acknowledges the Silesian University in Opava Grant No. SGS/14/2016. Ada-etal:2013:ASTRA:DifLiYoClG C. Adami, F. Durret, L. Guennou, and C. Da Rocha. Diffuse light in the young cluster of galaxies CL J1449+0856 at z=2.07. Astronomy and Astrophysics, 551:A20 (7 pages), Mar. 2013. Ali:2007:PHYSR4:EMPropKadS A. N. Aliev. Electromagnetic properties of Kerr–anti-de Sitter black holes. Phys. Rev. D, 75(8):084041, Apr. 2007. ArP-Muk-Ste:2000:PHYRL: C. Armendariz-Picon, V. Mukhanov, and P. J. Steinhardt. Dynamical solution to the problem of a small cosmological constant and late-time cosmic acceleration. Phys. Rev. Lett., 85(21):4438, 2000. Arra:2014:PHYSR4: I. Arraut. Komar mass function in the de Rham-Gabadadze-Tolley nonlinear theory of massive gravity. Phys. Rev. D, 90:124082, Dec 2014. Arra:2017:Universe: I. Arraut. The astrophysical scales set by the cosmological constant, black-hole thermodynamics and non-linear massive gravity. Universe, 3(2), 2017. Asc:2008:CHIAA:MassSpinBHQPO B. Aschenbach. Measurement of Mass and Spin of Black Holes with QPOs. Chinese Journal of Astronomy and Astrophysics Supplement, 8:291–296, Oct. 2008. Bah-etal:1999:SCIEN: N. Bahcall, J. P. Ostriker, S. Perlmutter, and P. J. Steinhardt. The cosmic triangle: Revealing the state of the universe. Science, 284:1481–1488, 1999. Bak-etal:2007:CEURJP: P. Bakala, P. Čermák, S. Hledík, Z. Stuchlík, and K. Truparová. Extreme gravitational lensing in vicinity of Schwarzschild–de Sitter black holes. Central European J. Phys., 5(4):599–610, Dec. 2007. Bardeen:1973 J. M. Bardeen. Timelike and null geodesics in the Kerr metric. In C. Dewitt and B. S. Dewitt, editors, Black Holes (Les Astres Occlus), pages 215–239, 1973. BS76: J. Bičák and Z. Stuchlík. On the latitudinal and radial motion in the field of a rotating black hole. Bull. Astronom. Inst. Czechoslovakia, 27(3):129–133, 1976. Bic-Stu-Bal:1989:BAC: J. Bičák, Z. Stuchlík, and V. Balek. The motion of charged particles in the field of rotating charged black holes and naked singularities. Bulletin of the Astronomical Institutes of Czechoslovakia, 40:65–92, Mar. 1989. Bla-Stu:2016:PHYSR4: M. Blaschke and Z. Stuchlík. Efficiency of the Keplerian accretion in braneworld Kerr-Newman spacetimes and mining instability of some naked singularity spacetimes. Phys. Rev. D, 94:086006, Oct 2016. Boh:2004:GENRG2: C. G. Böhmer. Eleven Spherically Symmetric Constant Density Solutions with Cosmological Constant. Gen. Relativity Gravitation, 36:1039–1054, May 2004. Boy-etal:2003:PHYSR4:HoloProtChron E. K. Boyda, S. Ganguli, P. Hořava, and U. Varadarajan. Holographic Protection of Chronology in Universes of the Gödel Type. Phys. Rev. D, 67:106003, 2003. Cal-Kam:2009:NATURE:CosDarkMat R. Caldwell and M. Kamionkowski. Cosmology: Dark matter and dark energy. Nature, 458(7238):587–589, Apr. 2009. Cal-Dav-Ste:1998:PHYRL: R. R. Caldwell, R. Dave, and P. J. Steinhardt. Cosmological imprint of an energy component with general equation of state. Phys.
Rev. Lett., 80(8):1582, 1998. Car:1973:BlaHol: B. Carter. Black hole equilibrium states. In C. Dewitt and B. S. Dewitt, editors,Black Holes (Les Astres Occlus), pages 57–214, 1973. l_e_cones: D. Charbulák and Z. Stuchlík. Light escape cones in local reference frames of Kerr- de Sitter black hole spacetimes and related black hole shadows,(to be published in The European Physical Journal C). Che:2008:CHINPB:DkEnGeoMorSchw J.-H. Chen and Y.-J. Wang. Influence of dark energy on time-like geodesic motion in Schwarzschild spacetime. Chinese Physics B, 17(4):1184, 2008. Cru-Oli-Vil:2005:CLAQG:GeoSdSBH N. Cruz, M. Olivares, and J. R. Villanueva. The geodesic structure of the Schwarzschild anti-de Sitter black hole. Classical Quantum Gravity, 22(6):1167–1190, Mar. 2005. deFel:1974:ASTRA: F. de Felice. Repulsive phenomena and energy emission in the field of a naked singularity. Astronomy and Astrophysics, 34:15–19, 1974. deFel:1978:NATURE:InstabNS F. de Felice. Classical instability of a naked singularity. Nature, 273:429–431, June 1978. Far:2016:PDU: V. Faraoni. Turnaround radius in modified gravity. Physics of the Dark Universe, 11:11–15, Mar. 2016. Far-Lap-Pra:2015:JCAP: V. Faraoni, M. Lapierre-Léonard, and A. Prain. Turnaround radius in an accelerated universe with quasi-local mass. Journal of Cosmology and Astroparticle Physics, 2015(10):013–013, Oct. 2015. Gib-Haw:1977:PHYSR4: G. W. Gibbons and S. W. Hawking. Cosmological event horizons, thermodynamics, and particle creation. Phys. Rev. D, 15:2738–2751, May 1977. Gim-Hor:2004:hep-th0405019:GodHolo E. G. Gimon and P. Hořava. Over-Rotating Black Holes, Gödel Holography and the Hypertube, 2004. Gim-Hor:2009:PHYLB:AstVioSignStr E. G. Gimon and P. Hořava. Astrophysical Violations of the Kerr Bound as a Possible Signature of String Theory. Phys. Lett. B, 672:299, 2009. Gu-Cheng:2007:GENRG2:CircLoopKdS Z. Gu and H. Cheng. The circular loop equation of a cosmic string in Kerr–de Sitter spacetimes. Gen. Relativity Gravitation, 39(1):1–7, Jan. 2007. Hac-etal:2010:PHYSR4:KerrBHCoStr: E. Hackmann, B. Hartmann, C. Lämmerzahl, and P. Sirimachan. Test particle motion in the space-time of a Kerr black hole pierced by a cosmic string. Phys. Rev. D, 82(4):044024, Aug. 2010. Hio-Mae:2009:PHYSR4:KerrSpinMeas K. Hioki and K.-i. Maeda. Measurement of the Kerr spin parameter by observation of a compact object's shadow. Phys. Rev. D, 80(2):024042 (9 pages), July 2009. Ior:2009:NEWASTR:CCDGPGrav L. Iorio. Constraining the cosmological constant and the DGP gravity with the double pulsar PSR J0737-3039. New Astronomy, 14(2):196–199, Feb. 2009. Kag-Kun-Lam:2006:PHYLB:SolarSdS V. Kagramanova, J. Kunz, and C. Lammerzahl. Solar system effects in Schwarzschild–de Sitter space-time. Phys. Lett. B, 634(5–6):465–470, Mar. 2006. Kol-Stu:2010:PHYSR4:CurCarStrLoops M. Kološ and Z. Stuchlík. Current-carrying string loops in black-hole spacetimes with a repulsive cosmological constant. Phys. Rev. D, 82(12):125012 (21 pages), Dec. 2010. Kot:1918:ANNPH2:PhyBasEinsGr F. Kottler. Über die physikalischen Grundlagen der Einsteinschen Gravitationstheorie. Annalen der Physik, 361(14):401–462, 1918. Kra:2005:DARK:CCPerPrec G. V. Kraniotis. Precise theory of orbits in general relativity, the cosmological constant and the perihelion precession of Mercury. pages 469–479. Kra:2004:CLAQG: G. V. Kraniotis. Precise relativistic orbits in Kerr and Kerr–(anti-)de Sitter spacetimes. Classical Quantum Gravity, 21:4743–4769, 2004. Kra:2007:CLAQG:Periapsis G. V. Kraniotis. 
Periapsis and gravitomagnetic precessions of stellar orbits in Kerr and Kerr–de Sitter black hole spacetimes. Classical Quantum Gravity, 24:1775–1808, 2007. Kra:2011:CLAQG: G. V. Kraniotis. Precise analytic treatment of Kerr and Kerr-(anti) de Sitter black holes as gravitational lenses. Class. Quant. Grav., 28:085021, 2011. Kra:2014:GRG: G. V. Kraniotis. Gravitational lensing and frame dragging of light in the Kerr-Newman and the Kerr-Newman-(anti) de Sitter black hole spacetimes. Gen. Rel. Grav., 46(11):1818, 2014. Kra:1998:ASTRJ2: L. M. Krauss. The end of the age problem, and the case for a cosmological constant revisited. Astrophys. J., 501(2):461–466, 1998. Kra-Tur:1995:GENRG2: L. M. Krauss and M. S. Turner. The cosmological constant is back. Gen. Relativity Gravitation, 27(11):1137–1144, Nov. 1995. Kuc-Sla-Stu:2011:JCAP:ToroPerFlRNadS: H. Kučáková, P. Slaný, and Stuchlík. Toroidal configurations of perfect fluid in the reissner-nordström-(anti-)de sitter spacetimes. Journal of Cosmology and Astroparticle Physics, 2011(01):033, 2011. Lak:2002:PHYSR4:BendLiCC K. Lake. Bending of light and the cosmological constant. Phys. Rev. D, 65(8, B):087301, Apr. 2002. Lak-Zan:2016:PHYSR4: K. Lake and T. Zannias. Global structure of Kerr -de Sitter spacetimes. Phys. Rev. D, 92:084003, Oct 2015. Lin:1990:InfCos: A. D. Linde. Particle Physics and Inflationary Cosmology. Gordon and Breach, New York, 1990. Mul-Asch:2007:CLAQG:NonMonoVel A. Müller and B. Aschenbach. Non-monotonic orbital velocity profiles around rapidly rotating Kerr (anti-)de Sitter black holes. Classical and Quantum Gravity, 24:2637–2644, May 2007. Mul:2008:GENRG2:FallSchBH T. Müller. Falling into a Schwarzschild black hole. Gen. Relativity Gravitation, pages 56–+, Feb. 2008. Oli-etal:2011:MODPLA:ChaParRNadS: M. Olivares, J. Saavedra, C. Leiva, and J. R. Villanueva. Motion of charged particles on the Reissner–Nordström (anti)–de Sitter black hole spacetime. Modern Phys. Lett. A, 26(39):2923–2950, Dec. 2011. Ost-Ste:1995:NATURE: J. P. Ostriker and P. J. Steinhardt. The observational case for a low-density universe with a nonzero cosmological constant. Nature, 377(6550):600–602, Oct. 1995. Per-Rom-PeB:2013:ASTRA:AccDiBHModGra: D. Pérez, G. E. Romero, and S. E. Bergliaffa. Accretion discs around black holes in modified strong gravity. Astronomy and Astrophysics, 551:A4(15 pages), Mar. 2013. Cha-Har:2012:PHYSR4:BEConGRStar C. Pierre-Henri and T. Harko. Bose–Einstein Condensate general relativistic star. Phys. Rev. D, 86(6):064011, Sept. 2012. Ade-etal:2014:ASTRA:Planck Collaboration, P. A. R. Ade, N. Aghanim, C. Armitage-Caplan, M. Arnaud, M. Ashdown, F. Atrio-Barandela, J. Aumont, C. Baccigalupi, A. J. Banday, and et al. Planck 2013 results. XII. Diffuse component separation. Astronomy and Astrophysics, 571:A12, Nov. 2014. Pug-Stu:2015:ApJS: D. Pugliese and Z. Stuchlík. Ringed Accretion Disks: Equilibrium Configurations. Astrophys. J. Suppl., 221(2):25, dec 2015. Rez-Zan-Fon:2003:ASTRA: L. Rezzolla, O. Zanotti, and J. A. Font. Dynamics of thick discs around Schwarzschild–de Sitter black holes. Astronomy and Astrophysics, 412(3):603–613, Dec. 2003. Rie-etal:2004:ASTRJ2: A. G. Riess et al. Type Ia Supernova Discoveries at z>1 From the Hubble Space Telescope: Evidence for Past Deceleration and Constraints on Dark Energy Evolution. Astrophys. J., 123:145, 2004. Sche-Stu-Pet:2013:JCAP: J. Schee, Z. Stuchlík, and M. Petrásek. Influence of the cosmic repulsion on the MOND model of the Magellanic Cloud motion in the field of Milky Way. 
Journal of Cosmology and Astroparticle Physics, 12:026, Dec. 2013. Sch-Zai:2008:0801.3776:CCTimeDelay T. Schücker and N. Zaimen. Cosmological constant and time delay. Astronomy and Astrophysics, 484(1):103–106, June 2008. Ser:2008:PHYSR4:CCLens M. Sereno. On the influence of the cosmological constant on gravitational lensing in small systems. Phys. Rev. D, 77(4):043004, 2008. Zhou-Chen:2011:AstrSpSc: Z. Sheng, C. Ju-Hua, and W. Yong-Jiu. Time-like geodesic structure of a spherically symmetric black hole in the brane-world. Chinese Physics B, 20(10):100401, 2011. Sla-Stu:2005:CLAQG: P. Slaný and Z. Stuchlík. Relativistic thick discs in the Kerr–de Sitter backgrounds. Classical Quantum Gravity, 22(17):3623–3651, 2005. Sla-Stu:2008:CLAQG:CmtNoMonKadS P. Slaný and Z. Stuchlík. Comment on 'non-monotonic orbital velocity profiles around rapidly rotating kerr–(anti-)de sitter black holes'. Classical and Quantum Gravity, 25(3):038001, 2008. Spe-etal:2007:ASTJS:3yrWMAP D. N. Spergel, R. Bean, O. Dore, M. R. Nolta, C. L. Bennett, J. Dunkley, G. Hinshaw, N. Jarosik, E. Komatsu, L. Page, H. V. Peiris, L. Verde, M. Halpern, R. S. Hill, A. Kogut, M. Limon, S. S. Meyer, N. Odegard, G. S. Tucker, J. L. Weiland, E. Wollack, and E. L. Wright. Three-year Wilkinson Microwave Anisotropy Probe (WMAP) observations: Implications for cosmology. Astrophys. J. Suppl., 170(2):377–408, June 2007. Spe-etal:2007:ApJSuppl: D. N. Spergel, R. Bean, O. Dore, M. R. Nolta, C. L. Bennett, J. Dunkley, G. Hinshaw, N. Jarosik, E. Komatsu, L. Page, H. V. Peiris, L. Verde, M. Halpern, R. S. Hill, A. Kogut, M. Limon, S. S. Meyer, N. Odegard, G. S. Tucker, J. L. Weiland, E. Wollack, and E. L. Wright. Three‐Year Wilkinson Microwave Anisotropy Probe ( WMAP ) Observations: Implications for Cosmology. Astrophys. J. Suppl., 170(2):377–408, jun 2007. Stu:1980:BULAI: Z. Stuchlík. Equatorial circular orbits and the motion of the shell of dust in the field of a rotating naked singularity. Bull. Astronom. Inst. Czechoslovakia, 31:129–144, 1980. Stu:1981:BAIC:Rad.mot.ph.Kerr Z. Stuchlík. The radial motion of photons in Kerr metric. Bulletin of the Astronomical Institutes of Czechoslovakia, 32:40–52, 1981. Stu:1983:BULAI: Z. Stuchlík. The motion of test particles in black-hole backgrounds with non-zero cosmological constant. Bull. Astronom. Inst. Czechoslovakia, 34(3):129–149, 1983. Stu:1984:BULAI: Z. Stuchlík. An Einstein–Strauss–de Sitter model of the universe. Bull. Astronom. Inst. Czechoslovakia, 35(4):205–215, 1984. Stu:2000:ACTPS2: Z. Stuchlík. Spherically symmetric static configurations of uniform density in spacetimes with a non-zero cosmological constant. Acta Phys. Slovaca, 50(2):219–228, Mar. 2000. Stu:2005:MODPLA: Z. Stuchlík. Influence of the Relict Cosmological Constant on Accretion Discs. Modern Phys. Lett. A, 20(8):561–575, Mar. 2005. Stu-Ba_Ost:1998:KNdSrest.rep.bar.epm Z. Stuchlík, G. Bao, E. Østgaard, and S. Hledík. Kerr-Newman-de Sitter black holes with a restricted repulsive barrier of equatorial photon motion. Phys. Rev. D, 58(8):084003, Oct. 1998. Stu-Cal:1991:GENRG2: Z. Stuchlík and M. Calvani. Null geodesics in black-hole metrics with nonzero cosmological constant. Gen. Relativity Gravitation, 23(5):507–519, May 1991. Stu-Hle:1999:PHYSR4: Z. Stuchlík and S. Hledík. Some properties of the Schwarz­schild–de Sitter and Schwarz­schild–anti-de Sitter spacetimes. Phys. Rev. D, 60(4):044006 (15 pages), Aug. 1999. Stu-Hle:2000:CLAQG: Z. Stuchlík and S. Hledík. 
Equatorial photon motion in the Kerr–Newman spacetimes with a non-zero cosmological constant. Classical Quantum Gravity, 17(21):4541–4576, Nov. 2000. Stu-Hle:2002:ACTPS2: Z. Stuchlík and S. Hledík. Properties of the Reissner–Nordström spacetimes with a nonzero cosmological constant. Acta Phys. Slovaca, 52(5):363–407, Oct. 2002. Stu-Hle-Nov:2016:PHYSR4: Z. Stuchlík, S. Hledík, and J. Novotný. General relativistic polytropes with a repulsive cosmological constant. Phys. Rev. D, 94:103513, Nov 2016. Stu-Hle-Tru:2011:CLAQG: Z. Stuchlík, S. Hledík, and K. Truparová. Evolution of Kerr superspinars due to accretion counterrotating thin discs. Classical Quantum Gravity, 28(15):155017, Aug. 2011. Stu-Kol:2012:PHYSR4:AccStringLoops Z. Stuchlík and M. Kološ. Acceleration of string loops in the Schwarzschild–de Sitter geometry. Phys. Rev. D, 85(6):065022 [13 pages], 2012. Stu-Kol:2012:JCAP:StringLoops: Z. Stuchlík and M. Kološ. String loops in the field of braneworld spherically symmetric black holes and naked singularities. Journal of Cosmology and Astroparticle Physics, 2012:008, 2012. Stu-Kov:2008:INTJMD:PsNewtSdS: Z. Stuchlík and J. Kovář. Pseudo-Newtonian gravitational potential for Schwarzschild–de Sitter spacetimes. INTJMD, 17(11):2089–2105, 2008. Stu-Sch:2010:CLAQG:AppKepDiOrKerrSSp Z. Stuchlík and J. Schee. Appearance of Keplerian discs orbiting Kerr superspinars. Classical Quantum Gravity, 27(21):215017 (39 pages), Nov. 2010. Stu-Sch:2011:JCAP:CCMagOnCloud: Z. Stuchlík and J. Schee. Influence of the cosmological constant on the motion of Magellanic Clouds in the gravitational field of Milky Way. Journal of Cosmology and Astroparticle Physics, 9:018–018, Sept. 2011. Stu-Sch:2012:INTJMD:GRvsPsNewtMagClou: Z. Stuchlík and J. Schee. Comparison of general relativistic and pseudo-Newtonian description of Magellanic-clouds motion in the field of Milky Way. Internat. J. Modern Phys. D, 21(4):1250031, Apr. 2012. 0264-9381-29-6-065002 Z. Stuchlík and J. Schee. Observational phenomena related to primordial Kerr superspinars. Classical and Quantum Gravity, 29(6):065002, 2012. Stu-Sch:2013:CLAQG:UHEKerrGeo Z. Stuchlík and J. Schee. Ultra-high-energy collisions in the superspinning Kerr geometry. Classical Quantum Gravity, 30(7):075012, Apr. 2013. Stu-etal:2017:JCAP: Z. Stuchlík, J. Schee, B. Toshmatov, J. Hladík, and J. Novotný. Gravitational instability of polytropic spheres containing region of trapped null geodesics: a possible explanation of central supermassive black holes in galactic halos. Journal of Cosmology and Astroparticle Physics, 2017(06):056, 2017. Stu-Sla:2004:PHYSR4: Z. Stuchlík and P. Slaný. Equatorial circular orbits in the Kerr–de Sitter spacetimes. Phys. Rev. D, 69:064001, 2004. Stu-Sla-Hle:2000:ASTRA: Z. Stuchlík, P. Slaný, and S. Hledík. Equilibrium configurations of perfect fluid orbiting Schwarz­schild–de Sitter black holes. Astronomy and Astrophysics, 363(2):425–439, Nov. 2000. Stu-Sla-Kov:2009:CLAQG:PseNewSdS Z. Stuchlík, P. Slaný, and J. Kovář. Pseudo-Newtonian and general relativistic barotropic tori in Schwarzschild–de Sitter spacetimes. Classical Quantum Gravity, 26(21):215013 (34 pp), Nov. 2009. Stu-etal:2005:PHYSR4:AschenUnexpTopo: Z. Stuchlík, P. Slaný, G. Török, and M. A. Abramowicz. Aschenbach effect: Unexpected topology changes in the motion of particles and fluids orbiting rapidly rotating Kerr black holes. Phys. Rev. D, 71(2):024037, Jan. 2005. Teo:2003:GenRelGrav: E. Teo. Spherical Photon Orbits Around a Kerr Black Hole. 
General Relativity and Gravitation, 35:1909–1926, Nov. 2003. Vil-etal:2013:ASTSS1:PhMoChgAdS: J. R. Villanueva, J. Saavedra, M. Olivares, and N. Cruz. Photons motion in charged Anti-de Sitter black holes. Astrophys. and Space Sci., 344(2):437–446, Dec. 2012. Wan-etal:2000:ASTRJ2: L. Wang, R. R. Caldwell, J. P. Ostriker, and P. J. Steinhardt. Cosmic concordance and quintessence. Astrophys. J., 530(1):17–35, 2000. Wan-Che:2012:PHYLB:CirLoopPerTens L. Wang and H. Cheng. The evolution of circular loops of a cosmic string with periodic tension. Phys. Lett. B, 713(1):59–62, 2012.
http://arxiv.org/abs/1702.07850v3
{ "authors": [ "Daniel Charbulak", "Zdenek Stuchlik" ], "categories": [ "gr-qc" ], "primary_category": "gr-qc", "published": "20170225080302", "title": "Photon motion in Kerr-de Sitter spacetimes" }
\newtheorem{hyp}{Hypothesis} \newtheorem{prop}{Proposition} \newtheorem{conjecture}{Conjecture}
http://arxiv.org/abs/1702.08396v2
{ "authors": [ "Shengjia Zhao", "Jiaming Song", "Stefano Ermon" ], "categories": [ "cs.LG", "stat.ML" ], "primary_category": "cs.LG", "published": "20170227174334", "title": "Learning Hierarchical Features from Generative Models" }
Learning Vector Autoregressive Models with Latent Processes Saber Salehkaleybar^*, Jalal Etesami^*†, Negar Kiyavash^†, Kun Zhang^ ^*Coordinated Science Laboratory, University of Illinois at Urbana-Champaign, Urbana, USA. ^†Department of ISE, University of Illinois at Urbana-Champaign, Urbana, USA. ^Department of ECE, University of Illinois at Urbana-Champaign, Urbana, USA. ^Department of Philosophy, Carnegie Mellon University, Pittsburgh, USA. December 30, 2023 ============================================================================================================================================================================================================================================================================================================================================================================================================== We study the problem of learning the support of the transition matrix between random processes in a Vector Autoregressive (VAR) model from samples when a subset of the processes are latent. It is well known that ignoring the effect of the latent processes may lead to very different estimates of the influences among observed processes, and we are concerned with identifying the influences among the observed processes, those between the latent ones, and those from the latent to the observed ones. We show that the support of the transition matrix among the observed processes and the lengths of all latent paths between any two observed processes can be identified successfully under some conditions on the VAR model. From the lengths of the latent paths, we reconstruct the latent subgraph (representing the influences among the latent processes) with a minimum number of variables uniquely if its topology is a directed tree. Furthermore, we propose an algorithm that finds all possible minimal latent graphs under some conditions on the lengths of the latent paths. Our results apply to both non-Gaussian and Gaussian cases, and experimental results on various synthetic and real-world datasets validate our theoretical results. § INTRODUCTION Identifying causal influences among time series is a problem of interest in many fields. In macroeconomics, for instance, researchers seek to understand what factors contribute to economic fluctuations and how they interact with each other <cit.>. In neuroscience, many researchers focus on learning the interactions between different regions of the brain by analyzing neural spike trains <cit.>. Granger causality <cit.>, transfer entropy <cit.>, and directed information <cit.> are some of the most commonly used measures in the literature to calculate time-delayed dependence structures in time series. Measuring the reduction of uncertainty in one variable after observing another variable is the key concept behind such measures. Under certain assumptions, these measures may represent causal relations among the variables <cit.>. In <cit.>, an overview of various definitions of causation is given for time series. In this work, we study the causal identification problem in VAR models when only a subset of the time series is observed. More precisely, we assume that the available measurements are a set of random processes X⃗(t)∈ℝ^n which, together with another set of latent random processes Z⃗(t)∈ℝ^m, where m≤ n, form a first-order VAR model as follows: [ X⃗(t+1); Z⃗(t+1) ] = [ A_11 A_12; A_21 A_22 ][ X⃗(t); Z⃗(t) ] + [ ω⃗_X(t+1); ω⃗_Z(t+1) ].
Here we assume that observed data were measured at the right causal frequency of the VAR process; otherwise one may need to consider the effect of the sampling procedure such as subsampling or temporal aggregation <cit.>. Under certain assumptions (e.g., causal sufficiency), the support of the transition matrix corresponds to the causal structure between these processes <cit.>. If we ignore the influence of latent processes and just regress X⃗(t+1) on X⃗(t), we may get a wrong estimate of the transition matrix between observed processes (see the example in <cit.>). Hence, it is crucial to consider the presence of latent processes and their influences on the observed processes. Contributions: The contributions of this paper are as follows: we propose a learning approach that recovers the observed sub-network (support of A_11) by regressing the observed vector X⃗(t+1) on a set of its past observations (not just X⃗(t)) as long as the graph representation of the latent sub-network (support of A_22) is a directed acyclic graph (DAG). We also derive a set of sufficient conditions under which we can uniquely recover the influences from latent to observed processes (support of A_12) and also the influences among the latent variables (support of A_22). Additionally, we propose a sufficient condition under which the support of the complete transition matrix can be recovered uniquely. More specifically, we show that under an assumption on the observed to latent noise power ratio, if neither of the sub-matrices A_12 and A_21 is zero, it is possible to determine the lengths of all directed latent paths[A directed path is a latent path if it connects two observed variables and all the intermediate variables on that path are latent.]. We refer to this information as linear measurements[This is because it can be inferred from the observational data using linear regression.]. This information reveals important properties of the causal structure among the latent and observed processes, i.e., the support of [0, A_12; A_21, A_22]. We call this sub-network of a VAR model the unobserved network. We show that in the case that the unobserved network is a directed tree and each latent variable has at least two parents and two children, a straightforward application of <cit.> can recover the unobserved network uniquely. Furthermore, we propose Algorithm <ref> that recovers the support of A_22 and A_12 given the linear measurements when only the latent sub-network is required to be a directed tree, plus some extra structural assumptions (see Assumption <ref>). Lastly, we study the causal structures of VAR models in a more general case in which there exists at most one directed latent path of length k≥2 between any two observed processes (see Assumption <ref>). For such VAR models, we propose Algorithm <ref> that can recover all possible unobserved networks with the minimum number of latent processes. Our results apply to both non-Gaussian and Gaussian cases, and experimental results on various synthetic and real-world datasets validate our theoretical results. All proofs can be found in the supplemental material. Related works: The problem of recovering latent causal structure for time series has been studied in the literature. Assuming that connections between observed variables are sparse and each latent variable interacts with many observed variables, it has been shown that the transition matrix between observed variables can be identified in a VAR model <cit.>. However, their approach focuses on learning only the observed sub-network.
<cit.> applied a method based on expectation maximization (EM) to infer properties of partially observed Markov processes, without providing theoretical analysis for identifiability. <cit.> showed that if the exogenous noises are independent non-Gaussian and additional so-called genericity assumptions hold, then the sub-network A_11 and a part of A_12 are uniquely identifiable. However, these assumptions may not hold true in a real-world dataset even with three variables <cit.>. They also presented a result in which they allowed Gaussian noises in their VAR model and obtained a set of conditions under which they can recover up to \binom{2n}{n} candidate matrices for A_11. Their learning approach is also based on EM and approximately maximizes the likelihood of a parametric VAR model with a mixture of Gaussians as the noise distribution. Recently, <cit.> studied a network of processes (not necessarily a VAR model) whose underlying structure is a polytree and introduced an algorithm that can learn the entire causal structure (observed and unobserved networks) using a particular discrepancy measure. Compared to related works, we improve the state of the art for latent recovery by showing the identifiability of a much larger class of structures. Unlike <cit.>, we do not assume the non-Gaussian distribution of the exogenous noises or those genericity assumptions. Moreover, our results do not rely on the assumption that connections between observed variables are sparse or that each latent variable interacts with many observed variables, as in <cit.>. Furthermore, these works <cit.> can uniquely identify at most a part of the transition matrix (A_11 or a part of A_12). § PROBLEM DEFINITION In this part, we review some basic definitions and our notation. Throughout this paper, we use an arrow over the letters to denote vectors. We assume that the time series are stationary and denote the autocorrelation of X⃗ by γ_X(k):=𝔼[X⃗(t)X⃗(t-k)^T]. We denote the support of a matrix A by Supp(A) and use Supp(A)⊆ Supp(B) to indicate that [A]_ij=0 whenever [B]_ij=0. We also denote the Fourier transform of g by ℱ(g); it is given by ∑_h=-∞^∞ g(h) e^-hΩ j. In a directed graph G=(V,E) with the node set V and the edge set E, we denote the set of parents of a node v by 𝒫_v:={u: (u,v)∈E} and the set of its children by 𝒞_v:={u: (v,u)∈E}. The skeleton of a directed graph G is the undirected graph obtained by removing all the directions in G. §.§ System Model Consider the VAR model in (<ref>). Let ω⃗_X(t)∈ℝ^n and ω⃗_Z(t)∈ℝ^m be i.i.d. random vectors with mean zero. For simplicity, we denote the matrix [A_11, A_12; A_21, A_22] by A. Our goal is to recover Supp(A) from observational data, i.e., {X⃗(t)}.
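As a quick illustration of the model and of the recovery goal, the following sketch (Python with NumPy; all parameter values, sparsity levels, and noise scales are illustrative choices, not taken from the paper) simulates the VAR model in (<ref>) with an acyclic A_22 and keeps only the observed block X⃗(t):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, T = 4, 2, 5000

# Illustrative sparse transition matrix A = [[A11, A12], [A21, A22]];
# A22 is strictly lower triangular, hence acyclic (the acyclicity
# assumption made in the next section).
A11 = 0.3 * rng.standard_normal((n, n)) * (rng.random((n, n)) < 0.3)
A12 = 0.3 * rng.standard_normal((n, m)) * (rng.random((n, m)) < 0.5)
A21 = 0.3 * rng.standard_normal((m, n)) * (rng.random((m, n)) < 0.5)
A22 = np.tril(0.3 * rng.standard_normal((m, m)), k=-1)
A = np.block([[A11, A12], [A21, A22]])
rho = np.max(np.abs(np.linalg.eigvals(A)))
if rho >= 1.0:                      # rescale to keep the process stationary
    A *= 0.95 / rho

sigma_X, sigma_Z = 1.0, 0.3         # observed / latent noise levels
state = np.zeros(n + m)
X = np.empty((T, n))                # only these coordinates are observed
for t in range(T):
    noise = np.concatenate([sigma_X * rng.standard_normal(n),
                            sigma_Z * rng.standard_normal(m)])
    state = A @ state + noise
    X[t] = state[:n]
```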
Rewrite (<ref>) as follows: X⃗(t+1) = ∑_k=0^t A^*_k X⃗(t-k) + A_12A_22^t Z⃗(0) + ∑_k=0^t-1Ã_k ω⃗_Z(t-k) + ω⃗_X(t+1), where A_0^*:=A_11, A_k^*:=A_12A_22^k-1A_21 for k≥ 1, and Ã_k:= A_12A_22^k. We assume that A_22 is acyclic, i.e., ∃ 0<l≤ m such that A_22^l=0. Based on the above assumption, for t≥ l, Equation (<ref>) becomes[Note that the limits of the summations in (<ref>) are changed.] X⃗(t+1)=∑_k=0^l A^*_k X⃗(t-k) + ∑_k=0^l-1Ã_k ω⃗_Z(t-k) + ω⃗_X(t+1). We are interested in recovering the set {Supp(A_k^*)}_k=0^l because it captures important information about the structure of the VAR model. Specifically, Supp(A^*_0)=Supp(A_11), so it represents the direct causal influences between the observed variables, and Supp(A^*_k) for k≥1 determines whether at least one directed path of length k+1 that passes through the latent sub-network exists between any two observed nodes.[Herein, we exclude degenerate cases where there is a directed path from an observed node to another one with length k but the corresponding entry in the matrix Supp(A^*_k) is zero. In fact, such special cases can be resolved by a small perturbation of the nonzero entries in matrix A. In the causal discovery literature, this assumption is known as faithfulness <cit.>.] We will make use of this information in our recovery algorithm. We call the set of matrices {Supp(A_k^*)}_k≥0 the linear measurements. In Section 4, we present a set of sufficient conditions under which, given the linear measurements, we can uniquely recover the entire unobserved network or most parts of it. Note that, in general, the linear measurements cannot uniquely specify the unobserved network. For example, Figure <ref> illustrates two different unobserved networks that both share the same set of linear measurements: A^*_k=0 for k>2, and the only nonzero entries of A^*_1 and A^*_2 are {(3,2)} and {(4,1),(4,2)}, respectively. § IDENTIFIABILITY OF THE LINEAR MEASUREMENTS As we need the linear measurements for our structure learning, in this section we study a sufficient condition under which we can recover the linear measurements from the observed processes {X⃗(t)}. To do so, we start off by rewriting Equation (<ref>) as follows: X⃗(t+1)=𝒜𝒳⃗_t-l:t+∑_k=0^l-1Ã_k ω⃗_Z(t-k)+ω⃗_X(t+1), where 𝒜:= [A_0^*,...,A_l^*] and 𝒳⃗_t-l:t:=[X⃗(t);⋯;X⃗(t-l)]. By projecting Ã_s ω⃗_Z(t-s) onto the vector space spanned by the observed processes, i.e., {X⃗(t),...,X⃗(t-l)}, we obtain Ã_s ω⃗_Z(t-s)=∑_r=0^l C_r^s X⃗(t-r) + N⃗_Z(t-s), 0≤ s≤ l-1, where {N⃗_Z(t-s)} denote the residual terms and {C_r^s} are the corresponding coefficient matrices. Substituting (<ref>) into (<ref>) implies X⃗(t+1)=ℬ𝒳⃗_t-l:t+θ⃗(t+1), where ℬ:=[B^*_0,...,B^*_l], B^*_k:=A_k^*+∑_s=0^l-1 C_k^s, and θ⃗(t+1):=ω⃗_X(t+1)+∑_k=0^l-1N⃗_Z(t-k). Note that by this representation, θ⃗(t+1) is orthogonal to 𝒳⃗_t-l:t. Hence, Equation (<ref>) shows that the minimum mean square error (MMSE) estimator can learn the coefficient matrix ℬ given the observed processes. More precisely, let Γ_X(l):= 𝔼{𝒳⃗_t-l:t𝒳⃗_t-l:t^T}; then we have ℬ=[γ_X(1),...,γ_X(l+1)]×Γ_X(l)^-1. Under Assumption <ref>, for the stationary VAR model in (<ref>), we have ||B^*_k-A_k^*||_1≤√(n(l-k-1) M/L) ||A_12||_2 ||A_22||_2^k+1, where M:=λ_max(Γ_ω_Z(0)) and L:=λ_min(Γ_X(0)). This result implies that we can asymptotically recover the support of {A_k^*}_k=0^l as long as the absolute values of the non-zero entries of A_k^* are bounded away from zero by 2√(n(l-k-1)M/L) ||A_12||_2 ||A_22||_2^k+1. Please note that A_11=A_0^*=B_0^* if ||A_12||_2=0.
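A direct finite-sample version of this estimator can be sketched as follows (Python with NumPy, continuing the simulation above; this merely illustrates the formula ℬ=[γ_X(1),...,γ_X(l+1)]Γ_X(l)^-1 and is not the authors' implementation):

```python
import numpy as np

def estimate_B(X, l):
    # Regress X(t+1) on the stacked lags [X(t); ...; X(t-l)]; this is the
    # sample analogue of B = [gamma_X(1),...,gamma_X(l+1)] Gamma_X(l)^{-1}.
    T, n = X.shape
    rows = [np.concatenate([X[t - k] for k in range(l + 1)])
            for t in range(l, T - 1)]
    Xlag = np.asarray(rows)                # realizations of the lag vector
    Y = X[l + 1:]                          # targets X(t+1)
    Gamma = Xlag.T @ Xlag / len(Xlag)      # sample Gamma_X(l)
    gamma = Y.T @ Xlag / len(Xlag)         # sample [gamma_X(1),...,gamma_X(l+1)]
    B = gamma @ np.linalg.inv(Gamma)
    return [B[:, k * n:(k + 1) * n] for k in range(l + 1)]  # blocks B*_k

# Example: with X and m from the simulation sketch, A22^m = 0, so l = m.
# B_blocks = estimate_B(X, l=2)
```

With the simulated data above, the supports of the returned blocks should match {Supp(A_k^*)} once entries below the threshold of the proposition are zeroed out.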
In the Appendix (the second section), we explain how these bounds can be estimated from observational data. Let Σ_X=σ^2_X I_n× n and Σ_Z=σ_Z^2 I_m× m be the autocovariance matrices of ω⃗_X(t) and ω⃗_Z(t), respectively. Then, the ratio M/L strictly increases as σ_X^2/σ_Z^2 decreases. Proposition <ref> implies that when σ_X^2/σ_Z^2 increases, M/L will decrease, and based on the bound in Proposition <ref>, the estimation error will decrease (it goes to zero as σ_X^2/σ_Z^2 tends to infinity). This shows that recovering the linear measurements is much easier in the high σ_X^2/σ_Z^2 regime, as illustrated in Figure <ref>. Note that Proposition <ref> gives a sufficient condition for recovering the linear measurements. As shown in Figure <ref>, in practice, the actual estimation error is much smaller than the bound in Proposition <ref>. In the next section, we will make use of {Supp(A_k^*)}_k>0 to recover the unobserved network. We assume that the correct linear measurements can be obtained from the matrix ℬ. In order to estimate the support of the matrix ℬ from a finite number of samples drawn from the observed processes, say {X⃗(t)}_t=1^T, we first obtain the lag length l in (<ref>) by the AIC or FPE criterion (see Chapter 4 in <cit.>). Afterwards, we can estimate the coefficient matrix ℬ, using an empirical estimator for Γ_X(l) and {γ_X(h)}_h=1^l+1, and then applying (<ref>). Denote the result of this estimation by ℬ_T. It can be shown that <cit.> √(T)vec(ℬ_T-ℬ) → 𝒩(0,Γ^-1_X(l)⊗Σ), where → denotes convergence in distribution, and Σ is the autocovariance matrix of θ⃗(t). vec(.) transforms a matrix to a vector by stacking its columns, and ⊗ is the Kronecker product. Having the estimates of Γ_X(l) and Σ, we can test whether the entries of the matrix ℬ are greater than the bounds in Proposition <ref> (see Chapter 3 in <cit.>). § LEARNING THE UNOBSERVED NETWORK Recall that we refer to Supp([0, A_12; A_21, A_22]) as the unobserved network and Supp(A_22) as the latent sub-network. We present three algorithms that take the linear measurements {Supp(A^*_k)}_k≥ 0 as their input. The first algorithm recovers the entire unobserved network uniquely as long as it is a directed tree and each latent node has at least two parents and two children. The output of the second algorithm is Supp([0, A_12; Â_21, A_22]), where Supp(A_21)⊆ Supp(Â_21). This is guaranteed whenever the latent sub-network is a directed tree and some extra conditions are satisfied on how the latent and observed nodes are connected. The third algorithm finds the set of all possible networks with the minimum number of latent nodes that are consistent with the measurements. This algorithm is able to do so when there exists at most one directed latent path of any arbitrary length between two observed nodes. A directed path is latent if all the intermediate variables on that path are latent. §.§ Unobserved Network is a Directed Tree The authors of <cit.> introduced a necessary and sufficient condition for recovering a weighted directed tree uniquely from a valid distance matrix D defined on the observed nodes,[The skeleton of the recovered tree is the same as the original one, but not necessarily the weights.] and also proposed a recovery algorithm. The condition is as follows: every latent node must have at least two parents and two children. A matrix D, in <cit.>, is a valid distance matrix when [D]_ij equals the sum of the weights of all the edges that belong to the directed path from i to j, and [D]_ij=0 if there is no directed path. The algorithm in <cit.> has two phases.
In the first phase, it creates a directed graph among the observed nodes with the adjacency matrix Supp(D). In the second phase, it recursively finds and removes the circuits by introducing latent nodes for each circuit.[In a directed graph, a circuit is a cycle after removing all the directions.] For more details, see <cit.>. In order to adopt <cit.>'s algorithm for learning the unobserved network, we introduce a valid distance matrix using our linear measurements as follows: D_ij=k+1 if [Supp(A_k^*)]_ji≠ 0, and 0 otherwise. Recall that [Supp(A_k^*)]_ji indicates whether there exists a directed latent path from i to j of length k+1 in the unobserved network. From Theorem 8 in <cit.>, it is easy to show that the unobserved network can be recovered uniquely from the above distance matrix if its topology is a directed tree and every latent node has at least two parents and two children. §.§ Latent Sub-network Is a Directed Tree We denote the subset of observed nodes that are parents of a latent node h by 𝒫^O_h and denote the subset of observed nodes for which h is a parent by 𝒞^O_h. We further denote the set of all leaves in the latent sub-network by ℒ. We consider learning an unobserved network G that satisfies the following assumptions. Assume that the latent sub-network of G is a directed tree. Furthermore, for any latent node h in G, (i) 𝒫^O_h⊈∪_j≠ h𝒫^O_j, and (ii) if h is a leaf of the latent sub-network, then 𝒞^O_h⊈∪_ i∈ℒ,i≠ h𝒞^O_i. This assumption states that the latent sub-network of G must be a directed tree such that each latent node in G has at least one unique parent in the set of observed nodes, that is, a parent that is not shared with any other latent node. Furthermore, each latent leaf has at least one unique child among the observed nodes. For instance, when Supp(A_22) represents a directed tree and both Supp(A_12) and Supp(A_21) contain identity matrices, Assumption <ref> holds. As we will see later in the Experimental Results (Figure <ref>), a large portion of randomly generated graphs satisfy Assumption <ref>. Figure <ref> illustrates a simple network that satisfies Assumption <ref>, in which the unique parents of latent nodes a, b, c, and d are {1}, {3}, {2}, and {4}, respectively. The unique children of latent leaves c and d are {5} and {2,4}, respectively. Among all unobserved networks that are consistent with the linear measurements induced from (<ref>), any graph G that satisfies Assumption <ref> has the minimum number of latent nodes. Note that if Assumption <ref> is violated, one can find many unobserved networks that are consistent with the linear measurements but are not minimal (in terms of the number of latent nodes). For example, the network in Figure <ref> satisfies Assumption <ref> (ii) but not (i). Figure <ref> depicts an alternative network with the same linear measurements as the network in Figure <ref>, but with a smaller number of latent nodes. Similarly, the graph in Figure <ref> satisfies Assumption <ref> (i) but not (ii). Figure <ref> shows an alternative graph with one less latent node. Consider an unobserved network G with adjacency matrix Supp([0, A_12; A_21, A_22]). If G satisfies Assumption <ref>, then its corresponding linear measurements uniquely identify G up to Supp([0, A_12; Â_21, A_22]), where Supp(A_21)⊆ Supp(Â_21). Figure <ref> gives an example of a network satisfying Assumption <ref> and an alternative network, Figure <ref>, with the same linear measurements, which departs from Figure <ref> in the A_21 component.
Next, we propose the directed tree recovery (DTR) algorithm that takes the linear measurements of an unobserved network G satisfying Assumption <ref> and recovers G up to the limitation in Theorem <ref>. This algorithm consists of three main loops. Recall that Assumption <ref> implies that each latent node has at least one unique observed parent. The first loop finds all the unique observed parents for each latent node (lines: 4-11). The second loop reconstructs Supp(A_22) and Supp(A_12) (lines: 12-17). Finally, the third loop constructs Supp(Â_21) such that Supp(A_21)⊆ Supp(Â_21) (lines: 18-22). The following lemma shows that the first loop of Algorithm <ref> can find all the unique observed parents of each latent node. To present the lemma, we need the following definitions. For an observed node i, we define l_i:=max{k: [A^*_k-1]_si≠0 for some s}, R_i:={j: [A^*_l_i-1]_ji≠0}, M_i:={(j,r): [A^*_r-1]_ji≠0}. In the above equations, l_i denotes the length of the longest directed latent path that connects node i to any observed node, R_i is the set of all observed nodes that can be reached from i by a directed latent path of length l_i, and the set M_i consists of all pairs (j,r) such that there exists a directed latent path from i to j of length r. Under Assumption <ref>, an observed node i is the unique parent of a latent node if and only if for any other observed node j s.t. l_i=l_j, we have (R_j⊈R_i) ∨ (R_j=R_i ∧ M_i⊆ M_j). In the first loop, if there exist multiple unique parents of a latent node (for instance, node 2 and node 3 in Figure <ref>), we pick the one with the minimum index (lines: 7-9). The second loop recovers Supp(A_22) based on the following observation: if a latent node h_k is the parent of a latent node h_s, then h_k can reach all the observed nodes in R_s, i.e., R_s⊆ R_k and l_k=l_s+1 (line: 13). Furthermore, Supp(A_12) can be recovered using the fact that an observed node j is a child of a latent node h_s if a unique parent of h_s, say s, can reach j by a directed latent path of length 2 (line: 16). Finally, the third loop reconstructs Supp(Â_21) by adding an observed node i to the parent set of a latent node h_j if i can reach all the observed nodes that a unique parent of h_j, say j, reaches (lines: 18-22). Suppose the network G satisfies Assumption <ref>. Then, given its corresponding linear measurements, Algorithm <ref> recovers G up to the limitation in Theorem <ref>.
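The quantities l_i, R_i, M_i and the unique-parent test of the lemma can be sketched compactly as follows (Python; the input encoding and the helper name are hypothetical, and only the first loop of the DTR algorithm is covered):

```python
def unique_parents(meas):
    # meas[k] (k >= 1) encodes Supp(A*_k) as a boolean matrix: meas[k][j][i]
    # is True iff a directed latent path of length k+1 runs from i to j.
    # Returns the observed nodes passing the unique-parent test of the lemma.
    n = len(next(iter(meas.values())))
    l, R, M = {}, {}, {}
    for i in range(n):
        ks = [k for k in meas if any(meas[k][j][i] for j in range(n))]
        if not ks:
            continue                     # i starts no directed latent path
        l[i] = max(ks) + 1               # length of longest latent path from i
        R[i] = {j for j in range(n) if meas[max(ks)][j][i]}
        M[i] = {(j, k + 1) for k in ks for j in range(n) if meas[k][j][i]}
    return [i for i in l
            if all(not (R[j] <= R[i]) or (R[j] == R[i] and M[i] <= M[j])
                   for j in l if j != i and l[j] == l[i])]
```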
(Merging) We define merging two nodes i' and j' in graph G as follows: remove node j^' and the edges between i' and j', and then give all the parents and children of j'to i^'.We denote the resulting graph after merging i' and j' by Merge(G,i',j'). We say that two nodes i^' and j^' are mergeable if Merge(G,i',j') is consistent with the linear measurements of G. (Connectedness) Consider an undirected graph G̅ over the observed nodes which is constructed as follows: there is an edge between two nodes i and j in G̅, if there exists k≥ 1 s.t. Supp([A_k^*]_ij)=1 or Supp([A_k^*]_ji)=1; We say that two observed nodes i and j are “connected" if there exist a path between them in G̅. It can be seen that if pairs i,j and j,k are connected then node i,k are also connected.We then define a connected class as a subset of observed nodes in which any two nodesare connected. Initialization: We first find the set of all connected classes, say S_1,S_2,...,S_C.For each class S_c, we create a directed graph G_0,c that is consistent with the linear measurements.To do so, for any two observed nodes i,j∈ S_c, if [A_r^*]_ji≠ 0, we construct a directed path with length r+1 from node i to node j by adding r new latent nodes to G_0,c.Merger: In this phase, for any G_0,c from the initialization phase, we merge its latent nodes iteratively until no further latent pairs can be merged.Since the order of mergers leads to different networks with minimum number of latent nodes, the output of this phase will be the set of all such networks. Algorithm <ref> summarizes the steps of NM algorithm. In this algorithm, subroutine Check(G,i',j') checks whether two nodes i' and j' are mergeable. Under Assumptions <ref> and <ref>, the NM algorithm returns the set of all networks that are consistent with the linear measurements and have minimum number of latent nodes. § EXPERIMENTAL RESULTSSynthetic Data: We considered a directed random graph, denoted by DRG(p,q), such that there exists a directed link between an observed and latent node with probability p,independently across all pairs, and there is a directed link between two latent nodes with probability q. If there is a link between two nodes, we set the weight of that link uniformly from [-a,a].We utilize the method described in Section 3 to estimate linear measurements with a significance level of 0.05. In order to evaluate how well we can estimate the linear measurements, we generated 1000 instances of DRG(0.4,0.4) with n+m=100, Σ_X=0.1I_n× n,Σ_Z=0.1I_m × m, and a=0.1.The length of the time series was set to T=1000. Let Supp(Â_11) be the estimate of support of A_11. In Figure <ref>, the expected estimation error, i.e. || Supp(Â_11)-Supp(A_11)||^2_F/n^2, is computed, where ||.||_F is the Frobenius norm. One can see that the estimation error decreases as the number of observed variables increases.We also studied the effect of the observed to latent noise power ratio (OLNR), σ_X^2/σ_Z^2, on ||B_0^*-A_0^*||_1, and compared it with the bound given in Proposition <ref>. We generated 1000 instances of DRG(0.05,0.05) with n=5, m=5, and a=0.1. As it can be seen in Figure <ref>, the average estimation error decreases as OLNR increases, as expected from Proposition <ref>.We investigated what percentage of instances of the random graphs satisfy Assumption <ref>.We generated 1000 instances of DRG(p,1/n) with n=100, and p∈[0.04,0.2].In Figure <ref>, the probability of satisfying Assumption <ref>, P_sat., is depicted versusp for different numbers of latent variables in the VAR model. 
For larger m, it is less likely to see a unique observed parent for each latent node, and thus P_sat. decreases. For a fixed m, the same phenomenon occurs if we increase p when p is relatively large. Furthermore, for small p, there might exist some latent nodes that have no observed parent or no observed children.

We also evaluated the performance of the NM algorithm in random graphs. We generated 1000 instances of DRG(1/2n,1/2n) with n=10,...,100 and m=n/2, and computed the linear measurements. To save time, if for a class of connected nodes the number of latent nodes generated in the initial phase exceeded 40, we assumed that the corresponding instance could not be recovered efficiently in time and did not proceed to the merging phase. Figures <ref> and <ref> depict the percentage of instances in which the algorithm can recover all possible minimal unobserved networks and the average run time (in seconds) of the algorithm, respectively.[We performed the experiment on a Mac with 2× 2.4 GHz 6-Core Intel Xeon processor and 32 GB of RAM.] This plot shows that we can recover all possible minimal unobserved networks for a large portion of instances efficiently, even in relatively large networks.

US Macroeconomic Data: We considered the following set of time series from the quarterly US macroeconomic data for the period from 31-Mar-1947 to 31-Mar-2009, collected from the St. Louis Federal Reserve Economic Database (FRED) <cit.>: GDP, GDPDEF, COE, HOANBS, TB3MS, PCEC, GPDI. Assuming that the underlying dynamics is linear (Eq. (<ref>)), we considered the estimated VAR model over all variables as the ground truth. Then, we selected four arbitrary time series as observed processes and computed Supp(Â_11). We divided the 35 possible selections (choosing 4 of the 7 series) into two classes: 1) high power, where tr(𝔼{ω_X(t)ω_X(t)^T})>τ for a fixed threshold τ; 2) low power, where tr(𝔼{ω_X(t)ω_X(t)^T})<τ. In this experiment, we set τ=0.02. In Figure <ref>, we plotted the histograms of ||Supp(Â_11)-Supp(A_11)||^2_F for these two classes. As can be seen, in the high power regime, most of the possible selections have small estimation errors.

We also considered the following six time series of US macroeconomic data during 1-Jun-2009 to 31-Dec-2016 from the same database: GDP, GPDI, PCEC, TB3MS, FEDFUND, and GS10. We obtained the causal structure among these six time series by fitting a VAR model on all of them and considered the result as our ground truth (see Figure <ref>). Then, we removed GPDI from the dataset, considered the remaining five time series as observed processes, and checked whether the influences from the "latent" process (GPDI) could be correctly estimated.

[Figure: ground-truth causal graph over GDP, PCEC, GS10, TB3MS and FED, with the latent process GPDI denoted by a circle. Caption: US macroeconomic data.]
We estimated the linear measurements and gave them as input to Algorithm <ref>, which successfully recovered the ground truth (the estimated structure, in which the latent process is denoted by a circle, is identical to that in Figure <ref>).

Dairy Prices: A collection of three US dairy prices has been observed monthly from January 1986 to December 2016 <cit.>: milk, butter, and cheese prices.

[Figure: causal graph over the milk, butter and cheese prices. Caption: Dairy prices.]

We estimated the VAR model on all the time series with lag length l=1 and considered the resulting graph as our ground truth (see Figure <ref>). Next, we omitted the butter prices from the dataset and considered the milk and cheese prices as observed processes. The estimated linear measurements were: Supp(A_0^*)=Supp(A_11)=[1,1;1,0] and Supp(A_1^*)=[0,0;1,0]. Algorithm <ref> correctly recovered the true causal graph using these linear measurements. Note that the genericity assumptions in <cit.> do not hold true for this data set (see the Experiments section).

West German Macroeconomic Data: We considered the quarterly West German consumption expenditures X_1, fixed investment X_2, and disposable income X_3, during 1960-1982 <cit.>.

[Figure: causal graph over expenditures, investment and income. Caption: West German macroeconomic data.]

Similar to the previous experiment with dairy prices, we first obtained the entire transition matrix among all the processes. Figure <ref> depicts the resulting graph. Next, we considered X_3 to be latent and used {X_1,X_2} to estimate the linear measurements Supp(A_0^*)=Supp(A_11)=[0,0;1,1] and Supp(A_1^*)=[1,0;1,0]. Using these linear measurements, Algorithm <ref> recovered the true network in Figure <ref> correctly.

§ CONCLUSION AND FUTURE WORK

We considered the problem of estimating the time-delayed influence structure from partially observed time series data. Our approach consisted of two parts: first, we studied sufficient conditions under which certain aspects of the influence structure of the underlying system are identifiable; second, we proposed two algorithms that recover the influence structures satisfying the sufficient conditions given in the first part. The proposed algorithms can construct the observed sub-network (support of A_11), the causal influences from latent to observed processes (support of A_12), and also the causal influences among the latent variables (support of A_22) uniquely under a set of sufficient conditions. As a future direction, we plan to extend our results to the case where A_22 may have cycles. In the paper, we have seen examples showing that unique recovery is not possible if any of the conditions of Assumption <ref> are violated. These conditions are a good starting point for the case where A_22 contains cycles.

§ PROOF OF PROPOSITION <REF>

We project the vector A_r+1:l-1[ω⃗_Z(t-r-1);⋯;ω⃗_Z(t-l+1)] onto X⃗(t-r) as follows:

A_r+1:l-1[ ω⃗_Z(t-r-1); ⋮; ω⃗_Z(t-l+1) ]=C_r X⃗(t-r) +[ N⃗_Z(t-r-1); ⋮; N⃗_Z(t-l+1) ],

where A_r+1:l-1=diag(Ã_r+1,...,Ã_l-1), and C_r is a block matrix with C_r^s as its sth block for s=0,...,l-r-2. Please note that ω⃗_Z(t-r) is orthogonal to X⃗(t-k) for k≥ r. Since N⃗_Z and X⃗(t-r) are orthogonal, we can see

||A_r+1:l-1Γ_ω_Z(l-r-2)A_r+1:l-1^T||_2≥ ||C_rΓ_X(0)C_r^T||_2.
Using (<ref>) and the relationship between the ℓ_2 and ℓ_1 norms of a matrix, we obtain

λ_max(Γ_ω_Z(0))||A_r+1:l-1||_2^2≥λ_min(Γ_X(0))||C_r||_1^2/(n(l-r-1)),

where λ_min(·) and λ_max(·) denote the minimum and maximum eigenvalues of a given matrix, respectively. Please note that ω⃗_Z(t) is white noise and thus we have: λ_max(Γ_ω_Z(l-r-2))=λ_max(Γ_ω_Z(0)). Using the fact that A_r+1:l-1 is diagonal and ||A_22||_2<1, we obtain

||C_r||_1≤√(n(l-r-1)M/L)||A_12||_2max_r+1≤ k≤ l-1||A_22||^k_2≤√(n(l-r-1)M/L)||A_12||_2||A_22||_2^r+1,

where M:=λ_max(Γ_ω_Z(0)) and L:=λ_min(Γ_X(0)). From (<ref>), we have B^*_r-A_r^*= ∑_s=0^l-r-2 C_r^s. This implies that ||B^*_r-A_r^*||_1≤ ||C_r||_1. Combining this inequality with the bound in (<ref>) completes the proof.

§ ESTIMATING THE BOUNDS IN PROPOSITION <REF>

The bound √(n(l-r-1)M/L)||A_12||_2||A_22||_2^k+1 can be estimated as follows:

* The lag length l in (<ref>) can be obtained from the AIC or FPE criterion (see chapter 4 in <cit.>).
* We can estimate L from the observation vector X⃗(t). We also consider a bound σ^2_max,Z on the maximum variance of the exogenous noises in the latent part.
* We assume bounds ||A_12||_2≤ρ_12 and ||A_22||_2≤ρ_22<1.

In summary, an upper bound would be: √(n(l-r-1)σ^2_max,Z/L)ρ_12ρ_22^k+1. Suppose that the absolute values of the nonzero entries of A_k^* are greater than a_min,k. We can recover the support of the matrix A_k^* successfully if

4n(l-r-1)ρ_12^2/a_min,k^2(ρ_22)^2(k+1)≤L/σ_max,Z^2.

§ PROOF OF PROPOSITION <REF>

The spectral density of the autocovariance function γ_X(h) can be computed as follows:

ℱ(γ_X)= σ_X^2 F_X(Ω) F_X(Ω)^H+ σ_Z^2 F_Z(Ω) F_Z(Ω)^H,

where F_X(Ω)= [e^jΩI_n× n -A_11 -∑_k=0^l-1 A_k^*e^-kjΩ]^-1, F_Z(Ω)=F_X(Ω)(A_12∑_k=0^l-1 A_22^k × e^-kjΩ), and H denotes the Hermitian transpose. Thus, we have:

Γ_X(0)=1/2π∫_0^2 πℱ(γ_X) dΩ= σ_X^2 F_X^0+σ_Z^2 F_Z^0,

where F_X^0=1/(2π)∫_0^2π F_X(Ω) F_X(Ω)^H dΩ and F_Z^0=1/(2π)∫_0^2π F_Z(Ω) F_Z(Ω)^H dΩ.

We define the function ψ_σ_X/σ_Z(v⃗):=v⃗^TΓ_X(0)v⃗/σ_Z^2= v⃗^T[(σ_X^2/σ_Z^2)F_X^0+ F_Z^0]v⃗, where v⃗ is a unit vector. Suppose that v⃗^* minimizes the function ψ_σ_X/σ_Z(·). By the definitions of L and M, the ratio M/L is equal to 1/ψ_σ_X/σ_Z(v⃗^*). Now if we decrease σ_X/σ_Z to σ'_X/σ'_Z, then we have: ψ_σ'_X/σ'_Z(v⃗^*)<ψ_σ_X/σ_Z(v⃗^*). Moreover, for the optimal solution v⃗'^* of ψ_σ'_X/σ'_Z(·), we know that: ψ_σ'_X/σ'_Z(v⃗'^*)≤ψ_σ'_X/σ'_Z(v⃗^*). Thus, we can conclude that: 1/ψ_σ'_X/σ'_Z(v⃗'^*)>1/ψ_σ_X/σ_Z(v⃗^*).

§ PROOF OF THEOREM <REF>

First, we show that such a G has the minimum number of latent nodes. We do this by means of contradiction. But first observe that since the latent subnetwork of G is a directed tree, we can assign a non-negative number l_h to each latent node h that represents the length of the longest directed path from h to its latent descendants. Clearly, all such descendants are leaves, which we denote by L̃_h. For instance, if the latent subnetwork of G is a→ b→ c, then l_a=2 and L̃_a={c}. Suppose that G contains m latent nodes {h_1,...,h_m} and there exists another network G_1 (not necessarily with a tree-structured latent subgraph), with m_1<m latent nodes, that is also consistent with the same linear measurements as G. Due to assumption (i), there are at least m distinct observed nodes that have outgoing edges to the latent subnetwork. More precisely, each h_i has at least one unique observed node as its parent. We denote a unique observed parent of node h_i by o_i. Because m_1<m, there exists at least one observed node in O̅:={o_1,...,o_m} that has shared its latent children with some other observed nodes in G_1.
Among all such observed nodes, let o_i^* be the one whose corresponding latent node in G, h_i^*, has the maximum l_h_i^*.[If there are several such observed nodes, let o_i^* be one of them.] Furthermore, let Ĩ_i^*⊂{1,...,m}∖{i^*} be the index set of those observed nodes with which o_i^* has shared a latent child in G_1. By the choice of o_i^*, we know that l_h_j≤ l_h_i^* for all j∈Ĩ_i^*, and if for some 1≤ k≤ m, l_h_k>l_h_i^*, then o_k has not shared its latent child in G_1 with any other observed nodes in O̅. Moreover, there should be at least one latent node h_j^* with j^*∈Ĩ_i^* such that l_h_j^*= l_h_i^*. Otherwise, G_1 would not be consistent with the linear measurements of G. Let Ĩ_**:={j: l_h_j= l_h_i^*}∩Ĩ_i^*. Because o_i^* shares its latent children with ∪_j∈Ĩ_**o_j in G_1, and both G and G_1 are consistent with the same linear measurements, the following holds in graph G:

𝒞^O_L̃_h_i^*(G)⊆∪_j∈Ĩ_**𝒞^O_L̃_h_j(G),

where 𝒞^O_L̃_h_j(G) indicates the set of observed children of the set L̃_h_j. This indeed contradicts assumption (ii).

§ PROOF OF THEOREM <REF>

First, we require the following definition. For a network G whose latent sub-network is a tree, we define U_k(G):={h∈ G:l_h=k}.

To prove the equivalency, suppose there exists another network G_2 whose latent sub-network is a tree and which has the minimum number of latent nodes. Let {h_1,...,h_m} denote the latent nodes in G. Since G satisfies Assumption (i), for every latent node h_i there exists a unique observed node o_i such that o_i∈𝒫^O_h_i(G) and o_j∉𝒫^O_h_i(G) for all j≠ i. Since both G and G_2 are consistent with the same linear measurements, it is easy to observe that if h_i∈ U_k(G), then o_i must have at least one latent child in G_2, say h'_i, such that l_h_i=l_h'_i. Note that l_h_i is computed in G and l_h'_i in G_2. Moreover, we must have:

𝒞_L̃_h_i^O(G)=⋃_h'∈ H'(o_i)∩ U_l_h_i(G_2)𝒞^O_L̃_h'(G_2),

where H'(o_i) denotes the set of latent nodes in G_2 that have o_i as their observed parent. In other words, the observed nodes that can be reached by a directed path of length l_h_i+2 from o_i should be the same in both graphs G and G_2. This result, together with the fact that G satisfies Assumption (ii), implies:

I) For every h_i∈ U_k(G), there exists a unique latent node h'_i∈ U_k(G_2) such that o_i∈𝒫^O_h'_i(G_2) and o_j∉𝒫^O_h'_i(G_2) for all j≠ i, and 𝒞^O_L̃_h_i(G)=𝒞^O_L̃_h'_i(G_2).

Using I) and knowing that both G and G_2 have the same number of latent nodes, we obtain:

II) |U_k(G)|=|U_k(G_2)|, for all k.

Using I) and II), we can define a bijection ϕ between the latent subnetworks of G and G_2 as follows: ϕ(h_i)=h'_i. Using this bijection and Assumption (ii) of G, we conclude that if h∈ U_k(G) is the common parent of {h_j_1,...,h_j_s}⊆ U_k-1(G), then ϕ(h)∈ U_k(G_2) should be the common parent of {ϕ(h_j_1),...,ϕ(h_j_s)}⊆ U_k-1(G_2), and the proof is complete.

§ PROOF OF LEMMA <REF>

Suppose that o_i is the unique observed parent of a latent node h_i. Then, for any o_j such that l_i=l_j, if h_i is not a child of o_j, then from Assumption (ii) we have R_j⊈R_i. If h_i is a child of o_j, then, since l_i=l_j, we have M_i⊆ M_j and R_i=R_j. Now, suppose that the observed node o_i satisfies the conditions but is not the unique parent of any latent node. Let h_i and h_i' be children of o_i. At least one of them, say node h_i, can reach an observed node by a path of length l_i-1. If h_i' has the same property, then consider the unique observed parent of h_i', say node o_j.
Based on Assumption (ii), we have R_j⊆ R_i, which contradicts the assumption that node o_i satisfies the conditions of the Lemma. Moreover, if h_i' does not have a path to an observed node of length l_i-1, then for any observed parent of h_i, one of the conditions in the Lemma is not satisfied. Thus, the proof is complete.

§ PROOF OF PROPOSITION <REF>

Notice that the first loop in Algorithm <ref> uses the result of Lemma <ref> and finds all the latent nodes and their corresponding unique observed parents. The next loop uses the fact that the latent sub-network is a tree and that it satisfies Assumption <ref>. Hence, if there exist two latent nodes h and h', one with depth l and the other with depth l+1, such that R_h⊆ R_h', then h' must be the parent of h in the latent sub-network. Moreover, since each latent node has a unique observed parent, using A^*_1, Algorithm <ref> can identify all the observed children of a latent node. Finally, the last loop in this algorithm assigns the remaining observed nodes as parents of the appropriate latent nodes. The algorithm does this by using the fact that if an observed node i shares a latent child with another observed node j∈ U, then M_j⊆ M_i. Clearly, if the true unobserved network satisfies Assumption <ref>, the output of this algorithm will have a latent sub-network that is a tree and is consistent with the linear measurements. Thus, by the result of Theorem <ref>, it will be the same as the true unobserved network up to some permutations in Supp(A_21).

§ PROOF OF THEOREM <REF>

Consider the instance of the problem where A_22=0_m× m. Without loss of generality, we can assume that the entries of A_12 and A_21 are either zero or one. Thus, we need to find [A_12]_n× k and [A_21]_k× n such that Supp(A_12A_21)=Supp(A_1^*) and k is minimal. We will show that the set basis problem <cit.> can be reduced to the decision version of finding the minimal unobserved network, which we call the latent recovery problem. But before that, we define the set basis problem:

The Set Basis Problem <cit.>: given a collection 𝒞 of subsets of a finite set U={1,⋯,n} and an integer k, decide whether or not there is a collection ℬ⊆ 2^U of at most k sets such that for every set C∈𝒞, there exists a collection ℬ_C⊆ℬ where ⋃_B∈ℬ_C B=C.

Any instance of the set basis problem can be reduced to an instance of the latent recovery problem. To do so, we encode each set C in the collection 𝒞 as a row of A_1^*=A_12A_21 whose i-th entry is one if i∈ C and zero otherwise. It is easy to verify that the rows of the matrix A_21 correspond to sets in the collection ℬ if there exists a solution to the set basis problem. Since the set basis problem is NP-complete, we conclude that finding the minimal unobserved network is NP-hard.

§ PROOF OF THEOREM <REF>

Consider a minimal unobserved network G_min. Pick any latent node i' whose in-degree or out-degree is greater than one. Let V^-_i' and V^+_i' be the sets of parents and children of node i', respectively. We remove the node i' and create |V_i'^-|×|V_i'^+| latent nodes {i'_j'k'| j'∈ V_i'^-, k'∈ V_i'^+}. We also add a direct link from node j'∈ V_i'^- to i'_j'k' and from i'_j'k' to k'∈ V_i'^+ in order to remain consistent with the measurements. We continue this process until there is no latent node with in-degree or out-degree greater than one. Since there exists at most one path of length k from any observed node to another observed node, the resulting graph is exactly the graph G_0.
Hence, we can construct the minimal graph G_min by reversing this process: instead of splitting latent nodes of G_min, we merge latent nodes of G_0. Since the NM algorithm considers all sequences of merging operations, G_min will be in the set 𝒢_out, and the proof is complete.
http://arxiv.org/abs/1702.08575v3
{ "authors": [ "Saber Salehkaleybar", "Jalal Etesami", "Negar Kiyavash", "Kun Zhang" ], "categories": [ "cs.LG", "stat.ML" ], "primary_category": "cs.LG", "published": "20170227230028", "title": "Learning Vector Autoregressive Models with Latent Processes" }
We use simulated planetary systems to model the planet multiplicity of Kepler stars. Previous studies have underproduced single planet systems and invoked the so-called Kepler dichotomy, where the planet forming ability of a Kepler star is dichotomous, producing either few or many transiting planets. In this paper we show that the Kepler dichotomy is only required when the inner parts of planetary disks are assumed to be flared. When the inner parts of planetary disks are flat, we reproduce the observed planet multiplicity of Kepler stars without the need to invoke a dichotomy. We find that, independent of the disk model assumed, the mean number of planets per star is ≈2 for orbital periods between 3 and 200 days, and for planetary radii between 1 and 5 Earth radii. This contrasts with the Solar System, where no planets occupy the same parameter space.

exoplanets, Kepler, inclinations, multiple-planet systems, invariable plane

§ INTRODUCTION

The Kepler Q1-Q16 catalog <cit.> uses 47 months of Kepler data collected from ∼190,000 stars. This has resulted in the detection of over 4,000 planet candidates orbiting ∼3,200 stars. While the majority of the ∼3,200 stars contain a single detected planet, transit signals from multiple planets have been detected around 656 of these stars. Comparisons to the architecture of the Solar System are limited, due to the relatively smaller periods and planetary radii that Kepler can efficiently sample.

§.§ The Kepler Dichotomy

The mutual inclination distribution between planets around Kepler stars has been well studied (<cit.>, see Appendix <ref>). The majority of these studies show good agreement between simulated planets and the Kepler sample when the orbital planes of simulated planets are closely aligned; specifically, when the mutual inclinations between planets are drawn from a Rayleigh distribution (a 'flared' planetary disk) with a mode of the flare angle between ∼1^∘-5^∘.

In contrast to the agreement for mutual inclinations, some studies report a significant underproduction of simulated systems with a single detected planet <cit.>. These studies underproduce the number of simulated stars with a single detected transiting planet by a factor of ∼3. The underproduction of simulated systems with a single detected transiting planet has led to the proposal of dichotomous planetary systems in the Kepler field, the so-called Kepler Dichotomy. One population of planetary systems is required to either suppress planet formation, or be "dynamically hot" <cit.>, where mutual inclinations between planets are increased, or where planets are more likely to be ejected from the system. For the host stars in these planetary systems, the probability of detecting multiple transiting planets is reduced, leading to a higher proportion of stars with a single detected transiting planet in this population. Potential explanations for the dynamically hot planetary system population include dynamical instability caused by high mass planets <cit.>, instability or suppressed planet formation caused by stellar binaries <cit.>, varying surface density profiles and disk masses <cit.>, and varying strengths of gas depletion or spin-orbit misalignment between the star and planet <cit.>.
<cit.> show that to account for the excess of detected single-planet transiting systems around M dwarfs, these stars with a reduced probability of multiple transiting planets need to account for ∼55 per cent of M dwarfs in the Kepler field.

§.§ Detected Transiting Planets

We define the true planetary system multiplicity vector as the number of stars which are host to k planets. For the Kepler mission (and any transit survey), the observed vector will be significantly lower than the true one, due to the low probability of planets transiting their host star, and since Kepler can only efficiently detect planets across a small fraction of parameter space. The observed planetary system multiplicity vector, hereafter simply referred to as the multiplicity vector, is given by

= [N̂_1, N̂_2, N̂_3, ...],

where N̂_1, N̂_2 and N̂_3 are the number of stars with 1, 2 and 3 detected transiting planets respectively, and so on. For the Kepler Q1-Q16 catalog (see Section 2.1), 1≤ k ≤ 6 and =[2608, 413, 141, 52, 18, 3].

§.§ Mutual Inclinations

For two or more planets in the same planetary system, the mutual inclination between those planets is defined as the angle between their orbital planes. The probability of multiple planets transiting the same star is non-negligible for small mutual inclinations only, generally on the order of a few degrees. Planets in the system with larger mutual inclinations, relative to the transiting planets, require alternative detection methods. In general, the true inclination of the orbital plane of a transiting planet cannot be determined from a transit lightcurve alone. The transit method is only sensitive to the line-of-sight component of the inclination i (Figure <ref>). In Figure <ref>, the distribution of the detectable inclination component for a set of simulated planets is shown in the bottom panels. The orthogonal component of inclination, typically not detectable by the transit method, represents the y-axis of the middle panels. The true mutual inclination between a pair of planets is given by √(Δθ^2+Δθ_(y-z)^2).

§.§ An Alternative Disk Model

In the studies mentioned in Section <ref>, the true mutual inclinations between simulated planets are drawn from a Rayleigh distribution with mode σ_Δϕ. The Rayleigh distribution is composed of two Gaussian distributed components, with standard deviations equal to σ_Δϕ. We can visualize the inclination distribution by considering one of these Gaussian components, i.e. viewing systems edge-on at an arbitrary plane perpendicular to the invariable plane, as in the top panels of Figure <ref>. Rayleigh distributed mutual inclinations represent a 'flared disk' model, where a planet's height above the invariable plane[The mode of the Rayleigh distribution of inclinations relative to an invariable plane, σ_ϕ, is related to the Rayleigh distribution of mutual inclinations, σ_Δϕ, by σ_ϕ≈σ_Δϕ/√(2).] tends to increase with increasing semi-major axis, while planet inclinations relative to the invariable plane do not depend on semi-major axis. In this paper, we use a 'flat disk' model, where a planet's height above the invariable plane does not depend on semi-major axis, and planet inclinations relative to the invariable plane tend to decrease with increasing semi-major axis (as seen in the right-side panels of Figure <ref>). <cit.> tested the in situ assembly of close-in planets, and found that planets with small semi-major axes tended to have larger inclinations, particularly at semi-major axes <0.1 AU.
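To make the two geometries concrete, the following is a minimal sketch of how a planet's inclination relative to the observer could be drawn under each model, following the prescriptions detailed in Appendix <ref>; the symbol i_inv for the inclination of the invariable plane and the helper names are ours.

```python
import numpy as np

rng = np.random.default_rng(1)

def incl_flared(i_inv, sigma_mutual_deg):
    """Flared disk: tilt phi drawn from a Rayleigh distribution whose mode
    is sigma_mutual/sqrt(2), rotated by a random node angle Omega."""
    phi = rng.rayleigh(np.radians(sigma_mutual_deg) / np.sqrt(2))
    return i_inv + phi * np.cos(rng.uniform(0.0, 2.0 * np.pi))

def incl_flat(i_inv, a_over_rstar, sigma_z_rstar):
    """Flat disk: Gaussian height Z0 (in stellar radii) above the invariable
    plane, so the tilt arcsin(Z0/a) shrinks with increasing semi-major axis."""
    z0 = rng.normal(0.0, sigma_z_rstar)
    tilt = np.arcsin(np.clip(z0 / a_over_rstar, -1.0, 1.0))
    return i_inv + tilt * np.cos(rng.uniform(0.0, 2.0 * np.pi))
```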
We apply this flat disk model to the typical semi-major axis space probed by Kepler, i.e. the interior part of planetary disks, as shown in Figure <ref>. In general, this represents planets with semi-major axes much less than those of the inner Solar System planets. We improve on previous modeling efforts by removing the flared disk assumption. We show that for a flat inner planetary disk there is no need to invoke a dichotomous planetary system population, where one population of host stars has a decreased probability of hosting multiple transiting planets.

In Section <ref> we define our stellar and planetary samples, based on minimizing false positives and false negatives. In Section <ref> we estimate the transit and detection completeness across our parameter space. In Section <ref> we estimate the underlying orbital period and planet radius distributions. We outline the process of producing model planetary populations in Section <ref>. In Section <ref>, we compare the simulated detections in our model systems to the Kepler Q1-Q16 candidates, for both flat and flared disk models. In Section <ref> we discuss the results from our model planet populations, including estimates for the mean number of planets per star within our parameter space.

§ SAMPLE SELECTION

When we generate a model planetary system, the stellar properties for that system are assigned from a random Kepler star in our stellar sample. We produce our input stellar sample in the following way. We begin with the 198,917 stars from the Kepler Q16 stellar catalog[http://exoplanetarchive.ipac.caltech.edu/cgi-bin/TblSearch/nph-tblSearchInit?app=ExoTbls&config=keplerstellar]. We limit our sample to low-noise Solar type stars, similar to the majority of previous studies mentioned in Section <ref>. We apply the following cuts to the input catalog:

K < T_eff < K
σ_CDPP_45 < ppm
R_* < R_⊙
T_baseline > 1000 days
f_duty > 0.6

where T_eff and R_* are the stellar effective temperature and radius respectively. The 4.5 hour CDPP (combined differential photometric precision, <cit.>) of the star, a measure of the combined instrumental and stellar noise, is given by σ_CDPP_45. T_baseline is the timespan of observations for each star and f_duty is the fraction of valid observations over T_baseline. Note that the combination of T_baseline > 1000 days and f_duty > 0.6 generally ensures at least 3 transits for orbital periods up to 200 days. The above stellar cuts, in addition to removing stars without a reported mass, result in our input stellar sample of stars. In later sections, the stellar properties of each simulated planetary system are assigned from a randomly drawn star in this sample.

To minimize the detection incompleteness and false-positives in our observed planet sample, to which we will compare our simulations, we select only those planets with a high pipeline detection efficiency. We begin with the Kepler Objects of Interest (KOIs) labeled 'candidate' by the Q1-Q16 pipeline[http://exoplanetarchive.ipac.caltech.edu/cgi-bin/TblView/nph-tblView?app=ExoTbls&config=koi]. An additional set of KOIs are labeled 'not dispositioned'. We update these dispositions using the Kepler Q17 catalog for reference. This results in some KOIs changing from 'not dispositioned' to 'candidate'. These planets form our initial sample of planet candidates from the Q1-Q16 catalog <cit.>. To conform with our input stellar sample, we remove planets around host stars outside of our stellar parameter space defined by Equation <ref>. This further reduces our sample of observed planets.
We set an upper orbital period limit of 200 days to avoid the increase in false-positives towards the Kepler orbital period of ∼372 days <cit.>, and to remain consistent with the Kepler pipeline completeness calculations <cit.>. The Kepler pipeline is known to have an increasing false-negative rate with decreasing orbital period for orbital periods ≲ 3 days. This is largely due to the pipeline harmonic filter, which can remove transit signals which are on the same timescale as the expected stellar noise <cit.>. In addition, a small fraction of the fitted planetary radii for planets with orbital periods ≲ 10 days can be significantly lower than the true planet radius, diluting the transit signals for some of these planets. We choose an orbital period lower limit of 3 days, in order to retain a sample of Kepler stars with ≥ 4 transiting planets.

The Kepler pipeline reports a summary statistic for the strength of a transit detection, the Multiple Event Statistic (MES). A lower limit planet radius of 1 R_⊕ and a lower limit on the MES are chosen since false-positives are dominated by low MES (≲ 8) detections <cit.>. An upper planet radius limit of 5 R_⊕ is chosen to avoid increasing false-positives with planet size, and since the mass-radius relation becomes degenerate for larger planetary radii. To summarize, we only retain the Kepler Q1-Q16 candidates which meet the following criteria:

3 days < P < 200 days
1 R_⊕ < R_p < 5 R_⊕
MES above the adopted lower limit

This results in our observed sample of candidates in planetary systems, within the parameter space outlined in Equations <ref> and <ref>. The observed planetary system multiplicity vector (Equation <ref>) for our parameter space is given by = [2608, 413, 141, 52, 18, 3].

§ TRANSIT AND DETECTION EFFICIENCY

When attempting to estimate the underlying multiplicity vector given the observed one (Equation <ref>), there exists a degeneracy between the underlying multiplicity distribution and the underlying mutual inclination distribution. For example, an observed multiplicity vector could be reproduced by systems which contain many planets with a large dispersion in mutual inclinations, or by systems containing fewer planets but with a small mutual inclination dispersion. These two underlying distributions must be modeled simultaneously.

We estimate the underlying inclination and multiplicity distributions of Kepler systems in the Q1-Q16 catalog, within the period and radius parameter space where Kepler can more reliably detect transiting planets (given by Equations <ref> and <ref>). We produce sets of simulated planetary systems across a grid of inclination and multiplicity distributions. For each set of model assumptions, we estimate the probability of Kepler detecting each simulated planet. By comparing the multiplicity vector for a set of simulated systems to that of the Q1-Q16 Kepler catalog, we can estimate the underlying architecture between planetary orbital planes and the distribution of the number of planets per star.

§.§ Pipeline detection efficiency

Detection incompleteness and false-positives are important issues when comparing the detected planets around simulated and observed stars. Previous studies did not have the benefit of the Kepler pipeline detection completeness provided by transit injection and recovery experiments <cit.>, shown in Figure <ref>. We follow the approximation of the pipeline MES by <cit.>, which includes a limb-darkening approximation and accounts for nonzero impact parameters:

MES = 0.84 δ(c+s√(δ))/σ_cdpp√(n_tr),

where c=1.0874 and s=1.0187 for G dwarfs, δ= (R_p/R_*)^2, and n_tr is the number of transits.
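For reference, a minimal sketch of this MES approximation, together with the Γ-CDF pipeline efficiency quoted in the next subsection, is given below; the function names are ours, and δ and σ_cdpp must be supplied in consistent fractional units.

```python
import numpy as np
from scipy.stats import gamma

def expected_mes(delta, sigma_cdpp, n_tr, c=1.0874, s=1.0187):
    """Approximate pipeline MES (equation above) for a transit of
    fractional depth delta = (Rp/R*)^2, given the CDPP interpolated to
    the transit duration and the number of observed transits n_tr."""
    return 0.84 * delta * (c + s * np.sqrt(delta)) / sigma_cdpp * np.sqrt(n_tr)

def eta_detect(mes, b=4.35, c=1.05, beta=4.093):
    """Pipeline detection efficiency: the Gamma CDF of (MES - beta) with
    shape b and scale c (the FGK-dwarf values quoted in the text)."""
    return gamma.cdf(mes - beta, a=b, scale=c)
```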
Values of σ_cdpp are reported for 14 different transit durations, from 1.5 hours to 15 hours, for each Kepler star <cit.>. The σ_CDPP value chosen for Equation <ref> is interpolated from the 14 reported CDPP values, to match the transit duration of the planet. The number of transits for a planet is estimated by n_tr=(T_baseline× f_duty) / P, where P is the planet period, T_baseline is the total observing time for the Q1-Q16 catalog (∼1426 days), and f_duty is the duty cycle for the observed star, i.e. the fraction of valid observations over the observing baseline. Note that T_baseline and f_duty are reported for each star, accounting for systematics such as the differences in CCD detectors and pixels.

We define η_detect as the Kepler pipeline completeness, shown in Figure <ref>. The pipeline completeness as a function of the multiple event statistic is approximately represented by the Γ cumulative distribution function

η_detect(MES) = 1/c^b Γ(b)∫_0^MES-β x^b-1e^-x/c dx,

where Γ is the Gamma function. For our sample of FGK dwarfs, b=4.35, c=1.05 and β=4.093 <cit.>. To calculate the total probability of transit detection η(P,R_p), we must also take into account the geometric transit probability η_transit of a planet,

η_transit = R_*/a,

where a is the semi-major axis of the planet. The product of these two quantities gives the total transit detection probability η(P,R_p): the probability of the planet transiting, η_transit, multiplied by the probability of the transiting planet being detected by the Kepler Q1-Q16 pipeline, η_detect,

η(P,R_p) = η_transit×η_detect.

Given an input star from our stellar sample (Equation <ref>) and using Equations <ref> to <ref>, we can estimate the total transit and detection completeness of a simulated planet given its period, radius and transit duration. Across the planetary parameter space used in this paper (Equation <ref>), we estimate the mean total transit and detection completeness ⟨η(P,R_p)⟩ by taking the mean value of η(P,R_p) at each grid point over all stars in our stellar sample. This is shown in Figure <ref>, where ⟨η(P,R_p)⟩ ranges from ∼0 to a maximum of ∼0.1. Transit and detection probabilities >5 per cent only exist for planets with orbital periods ≲8 days. It can be seen that the pipeline detection probability becomes important for planetary radii less than 2.5 Earth radii.

§ UNDERLYING PLANET DISTRIBUTIONS

Our simulated results for the number of stars with k detected transiting planets rely on input planet radius and orbital period distributions, the determination of which has been one of the primary goals of the Kepler mission. The planet radius distribution is often modeled as a broken power law <cit.>, with a logarithmic plateau at ≲ 2.5R_⊕. This logarithmic plateau is also seen when the pipeline efficiency is probed using transit injection and recovery experiments <cit.>. For orbital periods between 50 and 300 days, a single power law is sufficient to describe the orbital period distribution <cit.>. Our parameter space includes planets with orbital periods less than 50 days, where the transit and detection completeness is more dynamic, particularly for periods ≲15 days (Figure <ref>). For this parameter space, a single power law is not sufficient, and we model the orbital period distribution as a broken power law. The planet radius and orbital period distributions are combined into a planet distribution function (PLDF), in this case composed of a broken power law for the distribution of orbital periods, and a broken power law for the distribution of planetary radii.
Our PLDF has 7 free parameters, F_0, β_1, β_2, P_brk, α_1, α_2, R_brk, where F_0 is the number of planets per star within our parameter space, and P_brk and R_brk are the transition points between the two power laws for the orbital period and the planet radius respectively:

df/dP dR_p = C F_0 g(P,R_p) =
C F_0 P^β_1 R_p^α_1 if P < P_brk and R_p < R_brk
C F_0 P^β_1 R_p^α_2 R_brk^α_1-α_2 if P < P_brk and R_p ≥ R_brk
C F_0 P^β_2 P_brk^β_1-β_2 R_p^α_1 if P ≥ P_brk and R_p < R_brk
C F_0 P^β_2 P_brk^β_1-β_2 R_p^α_2 R_brk^α_1-α_2 if P ≥ P_brk and R_p ≥ R_brk

where the power law exponents β_1, β_2 and α_1, α_2 are the exponents for the orbital period and the planet radius distributions respectively, on either side of the power law breaks.

For each set of model parameters in an underlying planet distribution function, an expected number of planet detections is computed by convolving the planet distribution function with ⟨η(P,R_p)⟩ (Figure <ref>). The number of expected detections for an underlying planet distribution function is then compared to the number of Kepler Q1-Q16 detections by maximizing the Poisson likelihood of the PLDF. The maximum likelihood derivation for our PLDF (Equation <ref>) is shown in Appendix <ref>, with maximum likelihood parameters of:

F_0 = 0.852, β_1 = 1.007, β_2 = -0.932, P_brk = 15.332 days, α_1 = -1.168, α_2 = -4.906, R_brk = 2.740 R_⊕,

indicating breaks in the power law distributions at ∼15 days and ∼2.7 R_⊕ for orbital periods and planetary radii respectively. The break in the orbital period distribution corresponds to a peak in the distribution, whereas the break in the planet radius distribution corresponds to the logarithmic planet plateau for R_p≲2.7 R_⊕. These results appear to be consistent with <cit.>, where breaks in the logarithmic orbital period and planet radius rates are indicated at ∼20 days and ∼2-3 Earth radii respectively.

We can marginalize our maximum likelihood PLDF in terms of orbital period and planet radius. This is shown in Figures <ref> and <ref> respectively, where the thick red lines represent our marginalized maximum likelihood PLDF, which is the estimated underlying planet distribution. The dashed red lines indicate the corresponding transit detected distribution, after applying the mean total transit and detection probability ⟨η(P,R_p)⟩ for the stars in our sample. When producing model planetary systems in Section <ref>, we assign orbital periods and planetary radii by drawing randomly from the maximum likelihood underlying distributions, shown in Figures <ref> and <ref>.

§.§ Parameterizing Planet multiplicity

For a set of model planetary systems, we need to assume a distribution for the inherent number of planets per star within our parameter space. In this paper we trial two different distributions. The first trial distribution is a modified Poisson distribution, N_pl,Poi <cit.>. Each star is assigned a random number of planets, drawn from a Poisson distribution with a given mean. Stars drawn with zero planets are redrawn from the same Poisson distribution, until all model planetary systems are populated with planets, resulting in a final mean greater than or equal to the input mean. The second trial distribution is a modified Exponential distribution, N_pl,exp <cit.>, and is produced in the same way as N_pl,Poi, except that stars are assigned a number of planets drawn from an exponential distribution with the given mean. The mode of the exponential distribution is always 0, resulting in a natural tendency for more planetary systems to contain a single transiting planet rather than multiple transiting planets.
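As an illustration, a minimal sketch of the two trial draws follows; the helper names are ours, and rounding the exponential draw to an integer is our assumption, since the text does not spell that step out.

```python
import numpy as np

rng = np.random.default_rng(42)

def draw_npl_poisson(mean, size):
    """Modified (zero-truncated) Poisson: redraw zeros until every system
    has at least one planet, so the realized mean exceeds `mean`."""
    n = rng.poisson(mean, size)
    while np.any(n == 0):
        n[n == 0] = rng.poisson(mean, np.count_nonzero(n == 0))
    return n

def draw_npl_exponential(mean, size):
    """Modified exponential: draw from an exponential with the given mean,
    round to an integer (our assumption), and redraw zeros as above."""
    n = np.rint(rng.exponential(mean, size)).astype(int)
    while np.any(n == 0):
        k = np.count_nonzero(n == 0)
        n[n == 0] = np.rint(rng.exponential(mean, k)).astype(int)
    return n
```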
It has been shown that when an exponential distribution is used to model the inherent number of planets per star, no Kepler dichotomy is required <cit.>. We include this trial distribution for comparative purposes.

§ PRODUCING MODEL PLANETARY SYSTEMS

In our simulations, we assume two different planetary disk models, and two different distributions for the number of planets per star, resulting in simulations with four unique combinations of model assumptions. For a given set of model assumptions, we generate populations of planetary systems across a grid. The mean of the number of planets per star ranges from 0.5 to 3.5. For the flared disk model, the mode of the Rayleigh distributed mutual inclinations ranges from 0 to 5 degrees. Similarly, for the flat disk model, the standard deviation of the height above the invariable plane ranges from 0 to 5 R_*. This results in a grid of points for each set of model assumptions, with model planetary systems generated at each grid point. Each model planetary system is produced as follows:

1. A random star is chosen from our sample of Kepler stars outlined in Section <ref>, and its mass and radius are assigned to the star in the model system.
2. The angle of the system's invariable plane relative to the observer (Figure <ref>) is chosen from a random point on a sphere, i.e. its cosine is drawn uniformly between 0 and 1.
3. The number of planets in the system is drawn randomly, according to the assumed distribution from Section <ref>, with a mean value based on the current grid point.
4. The radii of the planets are drawn from the underlying distribution in Section <ref> (1 R_⊕ < R_p < 5 R_⊕), and converted to their corresponding masses[M_p≈(R_p/a)^b, where a∼1.11 and b∼2.41.].
5. The periods of the planets are drawn randomly from the underlying distribution in Section <ref> (3 days < P < 200 days), and converted to their corresponding semi-major axes, using the stellar properties of the assigned Kepler star.
6. The dynamical stability of the system is estimated by testing the stability of sets of 3 sequential planets, or pairwise if the system contains only 2 planets. If any set of planets in the system is deemed unstable, the system is labeled unstable and new planet periods for all planets are redrawn as in step 5. See Section <ref> for a complete description of estimating the stability of a system, including termination criteria.
7. The inclinations of the planetary orbital planes relative to the observer are determined from the invariable plane angle (step 2), the assumed planetary disk model (Section <ref>), and the parameter value at the current grid point. See Appendix <ref> for a complete description of how planet inclinations are assigned for flat and flared disk models.

The above steps generate the model systems according to the assumed disk model, the assumed planet multiplicity distribution, the current grid point parameters, and the underlying planet period and radius distributions. The final step is to estimate which simulated planets would be detected by the Kepler Q1-Q16 pipeline, and compare this detected sample to the observed Q1-Q16 detections.

§.§ Determining transiting and detected planets

Once planetary inclinations are assigned, the model system is complete and we test for transiting planets. We define a transiting planet by its impact parameter b, where a planet is defined to transit if

b = (a/R_*) cos i ≤ 1,

where R_*, a and i were determined from steps 1, 5 and 7 respectively. For each simulated transiting planet, we estimate the Multiple Event Statistic (MES, Equation <ref>).
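A minimal sketch of this transit test, together with the Monte Carlo detection step described in the following paragraph, is given below (the names are ours).

```python
import numpy as np

rng = np.random.default_rng(0)

def transits(a_over_rstar, incl_rad):
    """Impact-parameter test of the equation above: b = (a/R*) cos i <= 1."""
    return a_over_rstar * np.abs(np.cos(incl_rad)) <= 1.0

def is_detected(a_over_rstar, incl_rad, eta_det):
    """A transiting planet counts as detected if a uniform draw Y_m falls
    below its pipeline detection efficiency eta_det."""
    return bool(transits(a_over_rstar, incl_rad)) and (rng.uniform() < eta_det)
```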
The MES is dependent on stellar properties, along with the planet's orbital period, radius and transit duration. Circular orbits are assumed when estimating transit durations. The planet's MES is then used to estimate the pipeline detection efficiency η_detect (Equation <ref>). For each simulated transiting planet, a uniform random number Y_m is drawn between 0 and 1. A simulated planet is labeled as detected if it transits, and if its pipeline detection efficiency η_detect > Y_m. All simulated planets which meet these criteria are added to the detected planet sample for the grid point, X_ij, where i and j represent the current grid point.

§ COMPARING SIMULATED AND OBSERVED PLANET DETECTIONS

The simulations outlined in Section <ref> were performed across a grid for the 4 sets of model assumptions. For each grid point, the simulated planet detections X_ij are used to generate two distributions: the system multiplicity vector (Equation <ref>), and the distribution of orbit normalized transit duration ratios, ξ <cit.>. Unlike the multiplicity distribution, the ξ distribution only consists of model systems with two or more detected transiting planets. For a pair of planets orbiting the same star,

ξ=T_dur,in/P_in^1/3/T_dur,out/P_out^1/3,

where T_dur and P are the transit durations and the periods for the inner and outer planets, given by the subscripts in and out respectively. For each unique planet pair in a system, ξ is calculated, giving one ξ value per unique pair of detected planets in the system. For each grid point we generate the ensemble ξ distribution by calculating the ξ value for each unique pair of simulated transit detections orbiting the same star, across all model systems. For a deeper discussion of ξ, see Appendix <ref>.

The simulated multiplicity and ξ distributions are compared to those of the Kepler Q1-Q16 candidates, and are used to assess the goodness of fit at each grid point. We perform a χ^2 goodness of fit test (Equation <ref>) comparing the simulated multiplicity vector to the observed one for our parameter space (Equation <ref>). We scale the simulated vector such that ∑_k N̂_k,sim=∑_k N̂_k,obs. To compensate for the poor quality of the χ^2 test with low cell counts, values less than 5 are merged into their adjacent cells:

χ^2 = ∑_k=1^n (N̂_k,obs - N̂_k,sim)^2/N̂_k,sim,

where N̂_k,obs and N̂_k,sim are the observed and scaled simulated numbers of stars with k detected transiting planets.

Similarly, we perform a two-sample Kolmogorov-Smirnov (KS) test between the simulated ξ distribution at each grid point and the ξ distribution of the observed Q1-Q16 Kepler candidates within our parameter space.

§.§ Flared Disk and Poisson distributed planets per star

In the top panel of Figure <ref>, the simulated multiplicity vector is compared to the observed one at each grid point, under the assumption of a flared planetary disk and a Poisson distributed number of planets per star. The χ^2 values are represented by the 1σ, 2σ and 3σ values relative to the best fit. As expected, no good fit is found to the multiplicity distribution, as is the case in the majority of previous studies <cit.>. The bottom panel displays the resulting p-values from the KS test between the simulated and observed ξ distributions. The orbit normalized transit duration ratios favor mutual inclinations with a mode between 1.5-4 degrees, consistent with all previous studies shown in Appendix <ref>. The mean number of planets per star cannot be determined from the ξ distribution alone. It is clear from Figure <ref> that the best-fit regions (dark red) of the two tests do not appear consistent. Comparing multiplicity vectors favors near coplanar mutual inclinations, with a mode ≲1^∘ (top panel). However, modes ≲ 1.5^∘ are ruled out by comparing orbit normalized transit duration ratios (bottom panel).
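For concreteness, a minimal sketch of assembling the ξ values for one system and comparing the simulated and observed ensembles with a two-sample KS test (array names are ours):

```python
import numpy as np
from itertools import combinations
from scipy.stats import ks_2samp

def xi_values(periods, durations):
    """All xi = (T_in / P_in^(1/3)) / (T_out / P_out^(1/3)) for unique
    planet pairs in one system; 'inner' is the shorter-period planet."""
    vals = []
    for (p1, t1), (p2, t2) in combinations(sorted(zip(periods, durations)), 2):
        vals.append((t1 / p1**(1 / 3)) / (t2 / p2**(1 / 3)))
    return np.array(vals)

# Two-sample KS test between simulated and observed xi ensembles:
# stat, pval = ks_2samp(xi_sim, xi_obs)
```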
§.§ Flat Disk and Poisson distributed planets per star

For the set of simulations with a Poisson distributed N_pl and a flat disk model, the two tests appear more consistent (Figure <ref>). Unlike for the flared disk model, the best fit is a good match to the observed multiplicity vector, giving a χ^2/dof of 0.9, indicating that no Kepler dichotomy is required. While Gaussian disk thicknesses >2 R_* are supported by comparing multiplicity vectors, comparing orbit normalized transit durations refines the disk thickness to 1 R_*≲ Z_0 ≲ 2 R_*. There is significant overlap between the two tests within this region.

§.§ Combining independent tests

The results from comparing the multiplicity and ξ distributions can be combined in order to estimate the overall best-fit parameters for a given set of model assumptions. The p-values from each test are combined using Fisher's method into a single test statistic,

χ^2_combined≈ -2 ∑_m=1^M ln p_m,

where M is the number of tests combined and p_m is the p-value of the mth test. The number of degrees of freedom is given by 2M, where in this case M=2. We use this combined statistic to produce a probability grid, P_combined, across the parameter space for each set of model assumptions. P_combined is derived from the likelihood P_ij∝exp(-χ_combined,ij^2/2) and the requirement ∑ P_ij = 1.

Figure <ref> displays the probability grids for each set of model assumptions, along with the best-fit point and the 1σ and 2σ probability contours. Panel a) of Figure <ref> combines the tests of Figure <ref>, and Panel b) combines the tests of Figure <ref>. A similar process is involved for panels c) and d), where the number of planets per star N_pl is drawn from an exponential distribution. The intermediate figures for these two panels are not displayed for succinctness.

§ RESULTS AND DISCUSSION

§.§ Flared Disk Model

Our result for a flared disk with a Poisson distributed number of planets per star appears to be compatible with the majority of previous analyses. We find a mean number of planets per star of 2.0^+0.3_-0.2 over our parameter space, [3 days <P<200 days] and [1 R_⊕<R_p<5 R_⊕]. For a similar orbital period and planet radius parameter space, <cit.> find a mean of 2.2±0.3 for Kepler M dwarfs. <cit.> report ∼1.5 for R_p>1.5 R_⊕, where the reduction likely comes from the exclusion of planets with radii between 1.0 R_⊕ < R_p < 1.5 R_⊕. Similarly, we find the mode of the Rayleigh distributed mutual inclinations is given by σ_i = 2.3^+0.9_-0.4 degrees, consistent with the bulk of previous results with σ_i∼2^∘ <cit.>.

We are unable to achieve a good match to the Kepler Q1-Q16 detections for a flared disk model, contrary to the reported result by <cit.>, where a flared disk model reproduced the multiplicity distribution without the need for a Kepler dichotomy. The discrepancy likely comes from the unique N_pl distribution chosen by <cit.>, a "bounded uniform" distribution. The bounded uniform distribution is produced by first choosing a maximum number of planets n_i,max from a Poisson distribution, then choosing the number of planets in the system n_i from a uniform distribution between 1 and n_i,max. It has previously been shown that the Kepler multiplicity distribution can be matched without the need for a Kepler dichotomy when N_pl is drawn from an exponential distribution <cit.>. We find that the Kepler sample is consistent with σ_i = 2.4^+0.9_-0.5 degrees and a mean of 1.6^+0.3_-0.2 planets per star drawn from an exponential distribution. Here we disagree with <cit.>, who preferred near-coplanar mutual inclinations.
While we also achieve good fits for near-coplanar orbital planes, comparing ξ distributions strongly rules out mutual inclinations with modes ≲ 1.4^∘. This illustrates the importance of modeling both the multiplicity and ξ distributions, where <cit.> only modeled the multiplicity distribution.

§.§ Flat Disk Model

For a flat planetary disk model <cit.>, a good fit to the Kepler candidates can be achieved when the number of planets per star is drawn from either a Poisson or an exponential distribution. That is, independent of the N_pl distribution chosen, the flat disk model removes the need for a Kepler dichotomy. When N_pl is drawn from a Poisson distribution, we find Z_0 = 1.6^+0.6_-0.4 R_* and a mean of 2.4^+0.6_-0.4 planets per star. Notably, the mean number of planets per star is consistent between the assumed planetary disk models.

We use a flat planetary disk model with a Gaussian disk thickness Z_0. We can compare this value to the inner Solar System (Figure <ref>). For the inner Solar System planets, Z_max≈8 R_*, giving Z_0≈5 R_*, where Z_0≈2Z_max/π. This is significantly larger than our derived value of Z_0=1.6^+0.6_-0.4 R_* for our sample of closely packed Kepler systems. This may give some indication of the flat disk model's applicability at larger semi-major axes, or may be reflective of the different parameter spaces probed.

§ SUMMARY AND CONCLUSION

We estimate the inherent orbital period and planet radius distributions for the Kepler Q1-Q16 catalog, within the parameter space [3 days <P<200 days] and [1 R_⊕<R_p<5 R_⊕]. We find that both distributions are well described by broken power laws, with breaks occurring at ∼15 days and ∼2.7 R_⊕. These inherent distributions are used to populate model planetary systems for flat and flared planetary disk models, and for the number of planets per star N_pl drawn from Poisson and exponential distributions.

We confirm that a flared planetary disk model with N_pl drawn from a Poisson distribution is not consistent with the Kepler detections. We also confirm that Kepler detections are well matched when N_pl is drawn from an exponential distribution, without the need to invoke a dichotomous planetary system population. In this paper we use a flat inner planetary disk model, where planets with smaller periods tend to have larger inclinations. When a flat rather than a flared planetary disk model is assumed, model systems are consistent with Kepler detections, without the requirement of a Kepler dichotomy, and independent of the chosen N_pl distribution.

We find that the mean number of planets per star is largely model independent: ∼2.0 when N_pl is drawn from a Poisson distribution, and ∼1.6 when N_pl is drawn from an exponential distribution, for [3 days <P<200 days] and [1 R_⊕<R_p<5 R_⊕]. This contrasts with the Solar System, where there are no planets within this parameter space. Similarly, we find that for a flared planetary disk model, mutual inclinations are distributed with a mode ∼2.2^∘. For a flat planetary disk model, the Gaussian disk thickness Z_0∼1.5 R_*, much lower than the ∼5 R_* of the inner Solar System.

§.§ The Kepler Dichotomy

The underproduction of model systems with a single detected transiting planet has been well studied. This has led to the invocation of a dichotomous planetary system population, where one population suppresses the number of detected transiting planets, resulting in a higher likelihood of producing a single detected transiting planet.
Many physical explanations for the existence of the dichotomy have been put forward <cit.>. <cit.> generated sets of planetary systems with various gas depletion factors using N-body simulations of planetary embryos. No set of simulations was a good match to the period ratio Δ (Equation <ref>), planet multiplicity and ξ distributions of the observed Kepler sample. Some improvement was found when simulated planetary systems were allowed to be a mix of "dynamically hot" and "dynamically cold" systems. However, this improvement becomes less pronounced when taking into account the partial correlations between these distributions, particularly between ξ and Δ.

It has also been shown that the requirement of the dichotomy is not robust to the assumed distribution for the number of planets per star <cit.>. This is confirmed in this paper, and in addition, we show that a planetary system dichotomy is also not required for a flat inner planetary disk model. This result is independent of the choice of distribution for the number of planets per star, N_pl. We emphasize that we apply the flat planetary disk model only to the short period range of Kepler candidates. Of the sets of model assumptions explored in this paper, the need for a Kepler dichotomy only exists for a flared inner planetary disk with the number of planets per star drawn from a Poisson distribution.

The Kepler dichotomy describes the apparent need for a dichotomous planetary system population, with respect to a star's probability of producing multiple transiting planets. We show that the Kepler dichotomy is only required under specific model assumptions: specifically, when the inner part of a planetary disk is assumed to be flared, while also requiring the number of planets per star to be Poisson distributed. When removing either or both of these assumptions, the need for a Kepler dichotomy disappears.

§ INCLINATION ANGLES OF PLANETARY ORBITAL PLANES

There are a number of different angles used in the literature which have all been referred to as the planet inclination. Where we have used an inclination angle, we have attempted to be as explicit as possible. The figure below illustrates the different inclination angles used throughout the paper.

§ PLANET DISTRIBUTION FUNCTION

Our planet distribution function (PLDF) has 7 free parameters, F_0, β_1, β_2, P_brk, α_1, α_2, R_brk:

df/dP dR_p = C F_0 g(P,R_p) =
C F_0 P^β_1 R_p^α_1 if P < P_brk and R_p < R_brk
C F_0 P^β_1 R_p^α_2 R_brk^α_1-α_2 if P < P_brk and R_p ≥ R_brk
C F_0 P^β_2 P_brk^β_1-β_2 R_p^α_1 if P ≥ P_brk and R_p < R_brk
C F_0 P^β_2 P_brk^β_1-β_2 R_p^α_2 R_brk^α_1-α_2 if P ≥ P_brk and R_p ≥ R_brk

where F_0 is the number of planets per star within our parameter space, and R_brk and P_brk are the transition points between the two power laws for the planet radius and orbital period respectively. The normalization constant C is calculated from the requirement

∫_R_min^R_max∫_P_min^P_max C g(P, R_p) dP dR_p = 1,

where the integration limits R_min, R_max, P_min and P_max are given in Equation <ref>. We follow <cit.> and <cit.> by implementing a Poisson likelihood for our PLDF. By maximizing this likelihood we can obtain best-fit parameters for our model.
ln(L) ∝[ ∑_i=1^N_pl ln(CF_0 g(P,R_p)) ] - N_exp,

where N_exp is the expected number of planet detections for the set of model parameters, and is given by

N_exp = CF_0 ∫_R_min^R_max∫_P_min^P_max[∑_j=1^N_*η_j(P, R_p)] g(P,R_p) dP dR_p,

where η_j(P, R_p)=η_detect×η_transit is the combined transit and pipeline detection efficiency of the jth star for the specified period and radius. The pipeline detection efficiency η_detect is given by Equation <ref> and the transit probability η_transit=R_*/a, where a is the semi-major axis. We calculate [∑_j=1^N_*η_j(P, R_p)] on a grid in orbital period and planet radius, in bins of 1.5 days and 0.05 R_⊕ respectively. For each grid point, we sum over all stars in our sample. The mean combined transit and pipeline detection efficiency ⟨η(P,R_p)⟩ can then be found by dividing this term by the number of stars in our sample, N_*.

§ SIMULATED PLANETARY SYSTEMS

§.§ Testing the stability of sequential planet pairs

The dynamical spacing Δ describes the separation of two planets in units of their mutual Hill radius. The mutual Hill radius of two planets is given by

R_H,ij=((m_i+m_j)/(3M_*))^1/3 (a_i+a_j)/2,

where m_i and m_j are the planet masses for the inner and outer planets respectively. The dynamical spacing Δ is the semi-major axis spacing of the two planets, in units of the mutual Hill radius,

Δ_ij=(a_j-a_i)/R_H,ij,

where a_i and a_j are the semi-major axes of the inner and outer planets respectively. Analytic stability solutions exist for a system which contains exactly two planets, Δ_ij≳3.46 <cit.>, although it is not possible to ensure this requirement for our simulated systems[Although our simulated systems may produce exactly two planets within our parameter space, we cannot rule out the possibility of additional planets outside of our parameter space, which would invalidate the analytic solution.]. For systems with 3 or more planets, we use an empirical stability criterion for two adjacent planet pairs (three sequential planets). A set of three sequential planets with indices i, j, and k is deemed unstable when

Δ_ij+Δ_jk < 18,

where Δ_ij and Δ_jk are the dynamical spacings of the inner and outer planet pairs of the three sequential planets <cit.>. If there are only two simulated planets in a system, Δ_ij<10 results in the system being labeled unstable.

Should any set of planets fail the above stability criteria, the system is deemed unstable and new planet periods are redrawn for all planets as in step 5. New planetary radii are not redrawn, since passing the stability criteria is biased towards sets of planets with small planetary radii, where stability is more easily achieved. Redrawing planetary radii immediately would result in a simulated R_p distribution skewed towards small R_p, relative to the underlying distribution in Section <ref>. Should the stability criteria fail 10^3 times for the same set of planetary radii, new R_p and periods for all planets are redrawn as in step 4.
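A minimal sketch of this stability screen follows (function names are ours; masses in units of the stellar mass and semi-major axes in any consistent unit):

```python
import numpy as np

def hill_spacing(m_in, m_out, a_in, a_out, m_star):
    """Dynamical spacing Delta: separation in units of the mutual Hill radius."""
    r_hill = ((m_in + m_out) / (3 * m_star))**(1 / 3) * (a_in + a_out) / 2
    return (a_out - a_in) / r_hill

def system_is_stable(masses, axes, m_star):
    """Apply the adopted criteria: Delta >= 10 for a two-planet system, and
    Delta_ij + Delta_jk >= 18 for every set of three sequential planets."""
    order = np.argsort(axes)
    m, a = np.asarray(masses)[order], np.asarray(axes)[order]
    d = [hill_spacing(m[i], m[i + 1], a[i], a[i + 1], m_star)
         for i in range(len(a) - 1)]
    if len(a) == 2:
        return d[0] >= 10
    return all(d[i] + d[i + 1] >= 18 for i in range(len(d) - 1))
```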
For Rayleigh distributed mutual inclinations (flared disk in Figure <ref>), i is assigned as follows. An inclination ϕ is drawn from a Rayleigh distribution with mode σ_ϕ/√2, where σ_ϕ is the mode of the Rayleigh distributed mutual inclinations. The factor of 1/√2 converts between the Rayleigh distributed mutual inclinations and the Rayleigh distributed planet inclinations around the invariable plane. The orbital plane of the planet is then rotated by a random uniform angle Ω, giving

i_flare = i_inv + ϕ cos(Ω),

where i_inv denotes the inclination of the invariable plane relative to the observer.

For a flat disk (Figure <ref>), the perpendicular height above the invariable plane Z_0 is drawn from a Gaussian distribution with a mean of 0 and standard deviation σ_Z, in units of stellar radii. For a flat disk, unlike a flared disk, the assigned inclination i depends on the semi-major axis of the planet. Again, the orbital plane of the planet is rotated by a random uniform angle Ω, to account for a random viewing angle:

i_flat = i_inv + arcsin(Z_0/a) cos(Ω)

resulting in a tendency for larger inclinations for close-in planets and vice versa (right panel of Figure <ref>).

§ ORBIT-NORMALISED TRANSIT DURATION RATIO

For a planet which transits through the centre of its star:

2R_* ≈ v_orb T_dur

where v_orb and T_dur represent the orbital velocity (assuming a circular orbit) and the transit duration of the planet respectively. Note that for the Kepler sample, the simplification of a circular orbit is justified since ξ is only weakly dependent on eccentricity <cit.>. In addition, eccentricity values for the Kepler sample are generally found to be associated with near-circular orbits (e.g. <cit.>), or with mean values around ∼0.1 <cit.>. When the transit is not through the centre of the star, 2R_* is replaced by the chord length 2√(R_*^2 - b^2):

2√(R_*^2 - b^2) = v_orb T_dur

where b is the impact parameter of the transiting planet. For a pair of planets which transit the same host star:

2√(R_*^2 - b_in^2) = T_dur,in v_orb,in
2√(R_*^2 - b_out^2) = T_dur,out v_orb,out

where the "in" and "out" subscripts denote the inner and outer planets respectively. From Kepler's third law:

v_orb,in ∝ a_in/P_in ∝ P_in^{-1/3}

Dividing Equation <ref> by Equation <ref>:

√(R_*^2 - b_in^2)/√(R_*^2 - b_out^2) = (T_dur,in/P_in^{1/3}) / (T_dur,out/P_out^{1/3})

The right-hand side is particularly useful for planetary transits as it is composed of well-measured variables. Setting the right-hand side to ξ <cit.>:

ξ = (T_dur,in/P_in^{1/3}) / (T_dur,out/P_out^{1/3})

From Equation <ref>, a coplanar planetary pair will only give ξ = 1 if the invariable plane (Figure <ref>) is exactly edge-on to the observer. For inclined invariable planes, a coplanar planetary pair will give ξ > 1, as b_out > b_in.
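Since ξ is built entirely from well-measured catalog quantities, it is cheap to evaluate; a minimal sketch (our own, with illustrative values):

    def xi(t_dur_in, p_in, t_dur_out, p_out):
        # Orbit-normalized transit duration ratio for an inner/outer planet pair.
        # Durations and periods must each use consistent units across the pair.
        return (t_dur_in / p_in ** (1.0 / 3.0)) / (t_dur_out / p_out ** (1.0 / 3.0))

    # Example: a 3 h inner transit at P = 10 d and a 4 h outer transit at P = 30 d
    # give xi(3.0, 10.0, 4.0, 30.0) ≈ 1.08.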
Values of ξ < 1 are due to b_out < b_in, and are not possible in cases of perfect coplanarity.

§ PREVIOUS COPLANARITY STUDIES

Comparison of exoplanet coplanarity studies:

Reference | Δϕ distribution | Observables | Dispersion^a | Sample (quarter, multiplicity) | Period (days) | Radius (R_⊕) | Stellar sample | Dichotomy^g
<cit.> | Rayleigh | ^b | σ_ϕ ∼ 2.0° | Kepler (Q2, 1-6) | 3-125 | 1.5-6 | FGK dwarfs | 2.8
<cit.> | Fisher | | σ_ϕ^c < 4.0° | RV & Kepler (Q2, 1-6) | <200 | <22 | FGK dwarfs | -
<cit.> | Rayleigh | | σ_ϕ^d ∼ 1.4° | HARPS & Kepler (Q2, 1-3) | <50 | >2 | FGK dwarfs | -
<cit.> | Rayleigh, R of R | ^e | σ_ϕ^c ∼ 1.4° | Kepler (Q6, 1-6) | <200 | 1.5-30 | FGK dwarfs | 1
<cit.> | uniform i + rotation^f | | σ_ϕ < 3.5° | Kepler (Q6, 1-3) | <240 | <22 | FGK dwarfs | 3
<cit.> | Rayleigh | | no fit | Kepler (Q6, 1-6) | <75^h | - | FGK dwarfs | -
<cit.> | Rayleigh | | - | Kepler (Q6, 1-6) | <1.1 AU | - | - | 2
<cit.> | Rayleigh | | σ_ϕ ∼ 1.8° | Kepler (Q6, 1-6) | <130^h | - | FGK dwarfs | -
<cit.> | Rayleigh | | σ_ϕ = 2.0°^{+4.0}_{-2.0} | Kepler M-dwarfs (Q16, 1-5) | 1-200 | - | M stars | 3
<cit.> | Rayleigh | | σ_ϕ ∼ 0° | Kepler M-dwarfs (Q16, 1-5) | <180 | 1-4 | M stars | -
This paper | Rayleigh / Flat disk | | 1.6^{+0.6}_{-0.3} | Kepler (Q16, 1-6) | 3-200 | 1-5 | FGK dwarfs | -

a The mode of the Rayleigh distribution of ϕ values (Fig. <ref>) around the invariable plane.
b The multiplicity vector for the numbers of observed k-planet systems, i.e. [N̂_1, N̂_2, N̂_3, ...].
c Converted from the mean μ of the mutual inclination Rayleigh distribution: σ_ϕ = √(2/π) μ.
d Converted from a Rayleigh distribution relative to the invariable plane: σ_ϕ = √2 σ_Δθ.
e The normalized transit duration ratio (Appendix <ref>) as given in <cit.>.
f Each planet is given a random uniform inclination between 0°-5°. This orbital plane is then rotated uniformly between 0-2π to give a random longitude of ascending node.
g The factor by which the number of simulated 1-planet systems is lower than observed.
h Converted from a maximum semi-major axis, assuming a Solar mass star.
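For reference, the dispersion conversions in footnotes c and d are one-line rescalings; a sketch (input values illustrative):

    import math

    def mode_from_mean(mu):
        # Footnote c: mode of a Rayleigh distribution from its mean, sigma_phi = sqrt(2/pi) * mu.
        return math.sqrt(2.0 / math.pi) * mu

    def mode_about_invariable_plane(sigma_dtheta):
        # Footnote d: sigma_phi = sqrt(2) * sigma_dtheta.
        return math.sqrt(2.0) * sigma_dtheta

    print(mode_from_mean(1.75))                # ≈ 1.40 degrees
    print(mode_about_invariable_plane(1.0))    # ≈ 1.41 degrees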
http://arxiv.org/abs/1702.08126v1
{ "authors": [ "Timothy Bovaird", "Charles H. Lineweaver" ], "categories": [ "astro-ph.EP" ], "primary_category": "astro-ph.EP", "published": "20170227021621", "title": "A Flat Inner Disk Model as an Alternative to the Kepler Dichotomy in the Q1 to Q16 Planet Population" }
C. D. Munson^{1,*}, S. K. Choi^2, K. P. Coughlin^1, J. J. McMahon^1, K. H. Miller^3, L. A. Page^2, E. J. Wollack^3

[1] Department of Physics, University of Michigan, 450 Church St., Ann Arbor, MI, 48109
[2] Department of Physics, Princeton University, Princeton, NJ, 08544
[3] NASA Goddard Space Flight Center, Greenbelt, MD, 20771
[*] Corresponding author: cdmunson@umich.edu

Infrared (IR) blocking filters are crucial for controlling the radiative loading on cryogenic systems and for optimizing the sensitivity of bolometric detectors in the far-IR. We present a new IR filter approach based on a combination of patterned frequency selective structures on silicon and a thin (50 μm thick) absorptive composite based on powdered reststrahlen absorbing materials. For a 300 K blackbody, this combination reflects ∼50% of the incoming light and blocks 99.8% of the total power with negligible thermal gradients and excellent low frequency transmission. This allows for a reduction in the IR thermal loading to negligible levels in a single cold filter. These composite filters are fabricated on silicon substrates which provide excellent thermal transport laterally through the filter and ensure that the entire area of the absorptive filter stays near the bath temperature. A metamaterial antireflection coating cut into these substrates reduces in-band reflections to below 1%, and the in-band absorption of the powder mix is below 1% for signal bands below 750 GHz. This type of filter can be directly incorporated into silicon refractive optical elements.

Composite Reflective/Absorptive IR-Blocking Filters Embedded in Metamaterial Antireflection Coated Silicon
==========================================================================================================

§ INTRODUCTION

IR blocking filters are critical for optimizing bolometer-based receivers in the millimeter and submillimeter spectral region. In these bands, the IR power emitted from the telescope and surroundings (typically 250-300 K for ground and balloon-based instruments) is much brighter than the sky background. It is therefore crucial to control this out-of-band power. Filters serve to substantially reduce what would be a dominant source of noise and to minimize the radiative loading on the cryogenic system. In addition, the filters must not significantly radiate, reflect, or scatter in the band of interest. This requires high thermal conductivity and good heat sinking for any filter containing absorptive components.

Meeting all these requirements in one system is difficult. A variety of approaches have been used previously: reflective frequency selective surfaces (e.g., patterned onto thin plastic substrates and stacked, as pioneered by Ulrich <cit.> and summarized by Ade <cit.>), bulk filters of absorptive materials <cit.>, and scattering filters <cit.> are commonly employed. These filtering approaches, however, are not without their complications. The reflective filters patterned on plastic substrates are subject to the intrinsic limits of multi-layered reflectors <cit.> as well as heating due to absorption in the plastic. In practice, additional reflective layers improve the filter rejection only incrementally, and absorption leads to reradiated power that falls onto the detectors <cit.>. Absorptive filters, such as those made of bulk alumina, present difficulties with antireflection coatings, requiring the use of lossy materials that reduce the overall transmission in the desired pass-band by 5% or more <cit.>.
Additionally, alumina has reststrahlen bands that open up upon cooling, diminishing its overall filtering performance <cit.>. In this work we present an example of a hybrid approach based on a combination of reflective frequency selective structures patterned on silicon substrates, scattering and absorptive layers based on composites of powdered crystals exhibiting the reststrahlen effect, and metamaterial antireflection coatings to control the in-band reflections. We present a particular example of the construction and performance of a blocking filter designed to pass the 70-170 GHz band in Section <ref>, and discuss its performance in Section <ref>. We conclude with a discussion of the scalability and applicability of this design in Section <ref>.

§ COMPOSITE FILTER CONSTRUCTION

Figure <ref> shows the anatomy of the composite absorptive/reflective IR-blocking filter. This filter consists of lithographically defined frequency selective surfaces patterned on two silicon wafers, a 25 μm layer of an absorptive mixture of epoxy and reststrahlen powders placed between the two patterned surfaces, and a metamaterial antireflection coating on both vacuum-silicon interfaces. At IR wavelengths, light is scattered and reflected off the front silicon wafer and frequency selective surface. The front metamaterial surface scatters light both specularly and diffusely for frequencies above the single-moded limit of the structure. For the metamaterial surfaces described here (tuned to pass 1-2 mm wavelengths), this frequency falls well below the infrared emission from a 300 K blackbody, which peaks at 10 μm (30 THz). Infrared light that passes the frequency selective metal mesh is subject to both scattering and absorption by the reststrahlen-epoxy composite. An additional metal mesh layer reflects most of the remaining infrared light back into the epoxy-powder layer, boosting absorption and (to a lesser extent) reflection. This approach reduces the load on the cryogenic stage by reflecting a significant portion of the IR power, and uses an absorbing layer to further attenuate the IR power passing the first reflective layer.

At millimeter and submillimeter wavelengths the metal mesh frequency selective surfaces and metamaterial silicon have high transmission, and the extinction in the thin epoxy mixture is inconsequential, leading to low absorption. Thus in the bands of interest, this structure behaves nearly as if it were a slab of solid low loss silicon treated with a high quality antireflection coating <cit.>. In the remainder of this section we describe the design and performance of the frequency selective surfaces, powder-epoxy mixes, and metamaterial antireflection coatings, and conclude with predictions for the integrated performance of our filters.

§.§ Frequency Selective Metal Mesh Filters

The first filtering component of these composite IR-blocking filters is a low-pass frequency selective surface formed by a mesh of resonant metal squares. These squares act as a grid of capacitive elements and pass low frequencies while reflecting high frequencies. In the low frequency limit, the metallization layer is effectively nonexistent, giving nearly unity transmission; in the high frequency limit, these features reflect according to the fill factor of the metallization. In the resonant region between, there is some additional reflection, with the cutoff frequency set by the grid spacing.
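As a rough design aid, the cutoff of such a capacitive mesh embedded in a dielectric can be estimated from the grid period alone: the resonance occurs roughly where one wavelength in the medium spans one grid period. The sketch below applies this first-order scaling to the grid dimensions chosen in the next paragraph; it is an estimate only, not the HFSS-verified design:

    C_LIGHT = 2.998e8  # speed of light, m/s

    def mesh_cutoff_hz(grid_period_m, n_medium):
        # First-order cutoff of a capacitive square mesh in a medium of index n_medium:
        # one in-medium wavelength per grid period.
        return C_LIGHT / (grid_period_m * n_medium)

    # 23.8 um grid period inside silicon (n ≈ 3.4):
    print(mesh_cutoff_hz(23.8e-6, 3.4) / 1e12)  # ≈ 3.7 THz, close to the quoted ~3.6 THz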
We selected the grid parameters to place the cutoff frequency well above the upper end of our desired signal band (170 GHz), but below the peak emission frequency of a 300 K blackbody (18 THz). We selected a grid period of 23.8 μm and street widths of 5 μm, for a cutoff frequency in silicon of ∼3.6 THz (corresponding to a freespace wavenumber of 120 cm^-1 <cit.>, clearly visible as the first resonance in the measurement in Figure <ref>). These dimensions were additionally constrained to be within the capabilities of large-diameter liftoff lithographic techniques (limiting us to minimum features of a few microns). A prototype of this design was thoroughly characterized (see Figure <ref>), and its performance is in good agreement with our theoretical expectations, with a total reflectivity of 83% for a 300 K blackbody. These features are straightforward to tune to a desired cutoff frequency and can be fabricated on the silicon using standard lithographic techniques. Design of these features was carried out via the analytical techniques described by Ulrich <cit.>, with additional optimization and verification via modeling in ANSYS HFSS <cit.>.

§.§ Reststrahlen Materials and Powder Filters

The reststrahlen effect, from the German for "residual rays," is the absorption of light at characteristic frequencies <cit.> of the bound ion pairs in crystalline materials, which fall in the infrared <cit.>. These structural resonances prevent light in the reststrahlen band from propagating through the material, and effective low-pass filters have been previously demonstrated using powdered reststrahlen materials <cit.>. Transmissive and reflective filters have also been realized from bulk reststrahlen crystals <cit.>.

Powder filters, formed by reststrahlen powders mixed into a polyethylene carrier, were demonstrated as IR blocking filters by Yamada et al. <cit.> These filters, due to their plastic carrier, suffered from heating and reradiation. We fabricated similar powder filters by mixing reststrahlen powders into toluene-thinned Epotek 301 optical epoxy and applying thin layers with a commercial spray gun mounted on a robotic gantry. This allowed for the creation of thin (25 μm) uniform layers of the epoxy-powder composite. The powders were chosen from the assortment of materials characterized by Yamada <cit.> to provide good blocking coverage for wavelengths from 10 μm up to 150 μm (corresponding to 67-1000+ cm^-1, 2-30+ THz), overlapping transmissive regions of one powder with absorptive regions of another. Additionally, the powders were chosen for ease of use and acquisition, limiting us to non-hazardous materials (excluding materials such as thallium and beryllium salts, which have been previously used <cit.>) that are common chemical reagents. As a result of these constraints, our filters consist of magnesium oxide (MgO) and calcium carbonate (CaCO_3), both with 5-20 μm typical particle sizes. See Table <ref> for the composition of the reststrahlen composite layer, obtained from the Maxwell-Garnett theory <cit.>. The measured optical transmissions for these powder filters are shown in Figure <ref>. These transmission spectra exhibit the characteristic absorption features expected for both materials <cit.>. We combined these powders in equal parts (by mass) with the toluene-thinned optical epoxy and applied the mixture with the spray gun. For the full composite filter, this epoxy layer also adhered the two silicon wafers together.
The particle size is sufficiently small that the epoxy layer can be treated as a dielectric mixture that is well described by mean field effective medium approximations <cit.> in the instrument band of 70-170 GHz (see Figure <ref>).

The performance was characterized at cryogenic temperatures to ensure proper cryogenic functioning. It is known that some reststrahlen materials have absorption bands that open up when the material is cooled. In particular, alumina (Al_2O_3) is known to have a section of its absorption band (between 30 and 300 μm) open up at temperatures of tens of Kelvin <cit.>. To explore whether this would be problematic with our materials, a powder filter consisting of our mixture of CaCO_3 and MgO was measured in a Fourier Transform Spectrometer (FTS) at a range of temperatures between 4 K and 300 K. Plots at the extremal temperatures are shown in the right panel of Figure <ref>. From these, it is apparent that changes in the absorption spectrum are minimal and do not significantly compromise the filtering of a 300 K blackbody.

§.§ Metamaterial Antireflection Coated Silicon

High resistivity silicon is an excellent material for millimeter and submillimeter wave optics. The low loss (tan δ ≲ 7×10^-5) <cit.> and high refractive index allow for refractive optics with negligible loss. High purity, single-crystal silicon is available in large diameters and can be readily obtained. Additionally, silicon has a high thermal conductivity (>2 kW/(m·K)) <cit.>, which prevents filters with an absorptive component from heating up.

The high refractive index of silicon presents the problem of high reflectivity for optics, but this problem has been successfully managed with a machined sub-wavelength anti-reflective surface on the outer surfaces of the optics. These features allow for the creation of antireflection "coatings" with simulated dielectric layers. These metamaterial antireflection surfaces allow for high transmission optics (99% transmission across a 70-170 GHz band for a three-layer simulated dielectric coating <cit.>) fabricated entirely from silicon. A thorough discussion of this antireflection surface treatment approach is given in Datta et al. <cit.>. Figure <ref> shows a photograph of a three-layer "coating" as well as a comparison of the simulated and measured reflection at 15° incidence for two orthogonal polarizations.

§ COMPOSITE FILTER PERFORMANCE

The spectral performance of our composite filters and their constituent components was evaluated using FTS measurements. The thermal performance and integrated measurements were made in a cryostat open to a 300 K blackbody, using a carbon loaded disk bolometer. The performance of the individual components was measured across both the low-frequency signal band (down to 10 cm^-1, 300 GHz, limited by the low frequency capability of the FTS) and the high-frequency blocking band (up to 5000 cm^-1, 150 THz).
Additionally, the overall performance of a full composite was measured across the blocking band.

Measurement Methods: The reflectance and transmittance in the range 10-5,000 cm^-1 (0.3-150 THz) were measured with the Bruker IFS 125 FTS using the following two combinations of source-beamsplitter-detector: Hg arc lamp—multilayer mylar—liquid helium cooled bolometer (30-700 cm^-1, 0.9-21 THz), and globar—Ge-coated KBr—DLATGS (500-5,000 cm^-1, 15-150 THz). The spectral response in the overlap region agreed to within 0.5%. The data sets were merged into one spectrum by equating the areas underneath the curves in the overlap region using a weighted average. The reflectance was measured in a collimated beam geometry with an 8° angle of incidence. The transmittance was measured in a focused beam (f/6.5) geometry at normal incidence. The total hemispherical transmittance in the range 500-5,000 cm^-1 was collected using a Bruker-made integrating sphere (75 mm diameter) accessory for the FTS with its own internal DLATGS detector. A diffusely reflecting gold surface, which matches the inner surface of the sphere, was placed over the sample port to collect the reference scan. For integrated testing, a cryostat was used to measure the radiative properties and total infrared blocking.

IR Blocking Performance: The infrared blocking performance of these filters was measured on the FTS up to 5000 cm^-1 (2 μm wavelength, 150 THz), giving a full characterization of the transmission across the spectrum of a 300 K blackbody. These measurements reproduce the characteristic reststrahlen powder filter shape in thicker (75 μm) layers <cit.> (see Figure <ref>), and demonstrate excellent IR blocking in layers as thin as 25 μm. In the FTS measurements, the composite filter specularly reflected 40% of the light incident from a 300 K blackbody (indicating reflection off the front silicon surface and metal mesh features), and diffusely reflected another ∼10%, indicative of backscattering off the powder layer. The overall reflectance was lower than would be naively expected from the single reflector measurements in Section <ref>. This is likely explained by coupling of fields from the reflectors into the lossy epoxy composite below. Should higher reflectivity be needed, the reflective layers could be separated from the lossy composite and multiple reflective layers could be employed.

Low Frequency Performance: The low frequency performance (below 1 THz) of a 75 μm layer of the powder filter component was measured down to 10 cm^-1 (300 GHz) using an FTS. These data were then fit with a simple transmission line model. The index of the powder layer was estimated via a Maxwell-Garnett effective medium approximation <cit.> and treated as a layer of continuous material of unknown thickness. The thickness of this layer and the underlying silicon layer were then fit using a standard least-squares approach. For a 75 μm mix layer (consisting of a binder of Epotek 301, ϵ_r = 3.7 + 0.1i as measured in the 1-10 THz range with a sample etalon formed by silicon wafers, loaded with powdered MgO, ϵ_r = 9.8, and CaCO_3, ϵ_r = 8.45, in 0.073 and 0.092 volumetric fractions respectively) on 500 μm thick silicon (ϵ_r = 11.67), the model accurately reproduced the measured total thickness. This model was then used to extrapolate to lower frequencies and simulate the effect of adding a three-layer antireflective coating to a filter made using this material.
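A sketch of the effective-medium step of this model is given below; it applies the Maxwell-Garnett rule for spherical inclusions successively for the two powders (a simplification of our own — a multi-inclusion treatment would embed both at once), using the permittivities and volume fractions quoted above:

    def maxwell_garnett(eps_host, eps_incl, f):
        # Maxwell-Garnett effective permittivity for spherical inclusions
        # at volume fraction f in a host of permittivity eps_host.
        num = eps_incl + 2 * eps_host + 2 * f * (eps_incl - eps_host)
        den = eps_incl + 2 * eps_host - f * (eps_incl - eps_host)
        return eps_host * num / den

    eps = 3.7 + 0.1j                          # Epotek 301 binder
    eps = maxwell_garnett(eps, 9.8, 0.073)    # add MgO inclusions
    eps = maxwell_garnett(eps, 8.45, 0.092)   # add CaCO3 inclusions
    print(eps, abs(eps) ** 0.5)               # mix permittivity (≈ 4.3) and |n| (≈ 2.1)

The resulting index then enters the transmission line fit as the continuous-layer value described above.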
This model shows that the reststrahlen powder filter introduces minimal loss (dominated by the epoxy carrier) in a signal band from 70-170 GHz, and that an instrument-band transmission of 99% should be achievable for a filter using this technology (with the total transmission limited by the antireflection coating performance). In the low frequency region, the filter performance is well represented by a simple transmission line model, taking into account the effective index of the composite determined from an effective medium approximation. The measured and modeled low frequency performance is shown in Figure <ref>.

Thermal Performance and Cryostat Testing: An integrated test of the composite filter performance was carried out in a 3-stage cryostat, to measure the total blocking efficiency of a 15 cm diameter prototype (Figure <ref>). A 5.8 cm diameter disk bolometer (made from carbon-blackened copper) was held at a base temperature of 5 K. The filter was mounted on the 20 K stage directly above the bolometer, and blocked light emitted from a blackbody at 300 K visible through a hole in the lid of the 100 K stage. The total power blocked by the filter was then estimated by measuring the bolometer temperature with and without the filter. Additionally, the heating of the 20 K stage was measured to estimate the power absorbed by the filter, and the filter temperature was measured at its center and edge to characterize the thermal gradient across it. Figure <ref> shows a schematic of the cryostat. There was negligible heating of the center of the filter (with thermal gradients of less than 1 K from center to edge) when the filter was mounted to the 20 K stage and used to block the power from a 7 cm diameter window open to 300 K. The filter as a whole heated to 2 K above the stage temperature, demonstrating effective heat-sinking and power removal from the filter as well. In this configuration, the power deposited on a carbon disk bolometer at 5 K was measured, and this measurement established a lower limit on the blocking of 98% of a 300 K blackbody. This lower limit is in agreement with the FTS measurements of the full composite filter.

Comparison to Competing Filter Technologies: Compared to existing reflective filters, built from reflective frequency selective surfaces patterned onto plastic substrates and stacked, our filters offer several distinct advantages. The high thermal conductivity of silicon and the low absorption mean that there is negligible heating of the filter center relative to the heat-sunk edge. Additionally, silicon is a stiffer, more mechanically robust substrate, which allows for better control of lithographic features and finished filters that are stronger and less susceptible to deformation and wrinkling due to thermal cycling. The composite construction of our filters additionally integrates absorptive components to improve the overall out-of-band rejection beyond what can be attained with a reasonable number of stacked reflective components.

Compared to bulk absorptive filters such as alumina, our filters offer the ability to reflectively reject a significant portion of the power, reducing the load on the cryogenic system. The reststrahlen materials we used are also easy to obtain. The use of multiple reststrahlen materials allows for the selection of complementary sets of materials that improves the frequency coverage and prevents sections of the band from becoming transmissive (such as happens with alumina when cooled).
Silicon is a lower loss material than alumina, and the antireflection coatings, being metamaterial silicon, have significantly lower loss than the epoxies used to antireflection coat bulk alumina filters; the thermal conductivity of silicon is also significantly higher than that of alumina or PTFE. Finally, the composite filter offers a number of adjustable parameters not present in bulk filters that allow better tailoring of the filter characteristics to achieve the desired performance.

§ FREQUENCY RANGE OF APPLICABILITY

The components of this style of silicon substrate composite filter all have applicability across a broad range of frequencies. The limiting frequencies for the different components vary, but filters can be constructed using these techniques and some subset of the components for frequencies ranging from tens of gigahertz to hundreds of terahertz, depending on the components used. The frequency scaling of each of the four important components is discussed in this section. The combination of several of these components can form effective filters across a wide range of frequencies.

§.§ Frequency Selective Metal Features

The frequency selective metal mesh features can be fabricated across the full range of sizes available with modern lithographic techniques. This results in applicability across the full range over which silicon is transparent, up to the beginning of the absorption band at 1 μm wavelength (300 THz). Silicon offers better control and repeatability of lithographic features than plastic substrates, allowing smaller features (and therefore higher frequencies) to be attained. Additionally, the metal features can be formed with both high- and low-pass frequency responses, as well as with anisotropy, enabling a range of filtering characteristics, including band-defining filters and filters with polarization dependence.

§.§ Silicon Substrate

The high-resistivity silicon substrate offers excellent transmission performance from low frequencies up to the beginning of its absorption band in the near-infrared. It begins to become absorptive at a wavelength of approximately 1 μm, which corresponds to the plasma frequency (300 THz) in the medium <cit.>. At frequencies lower than this (longer wavelengths), it remains highly transmissive and low loss, with a typical loss tangent below 0.0002 across the THz region <cit.>, remaining low through the IR (with the exception of two lattice absorption features at 600 and 650 cm^-1, corresponding to 18 and 19 THz, or wavelengths of ∼17 and 15 μm) <cit.>.

§.§ Antireflective Coatings

The metamaterial antireflective coatings have applicability to a similarly broad range of frequencies. At lower frequencies, the coatings can be machined into the surfaces using a dicing saw or conventional grinding techniques. This approach to the antireflection coatings has been successfully demonstrated for frequencies up to 1.5 THz with excellent performance. With increasing frequency, higher precision fabrication approaches become necessary. Using laser machining to cut smaller features is one option, up to the limits of current laser machining capabilities. Additionally, lithographic techniques involving the patterning of the surface and etching of features, such as deep reactive ion etching (DRIE), offer a solution at still higher frequencies, with attainable feature sizes of tens of nanometers that would be suitable for antireflection coatings well above the 1 μm absorption cutoff of silicon.
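To make the frequency scaling concrete, the sketch below sizes an idealized three-layer quarter-wave coating on silicon for an arbitrary design frequency. The geometric index spacing is a textbook simplification, not the optimized profile of the machined metamaterial layers, and all numbers are illustrative:

    C_LIGHT = 2.998e8   # m/s
    N_SILICON = 3.418

    def quarter_wave_stack(f_design_hz, n_layers=3):
        # Geometrically spaced quarter-wave layers grading from vacuum to silicon.
        # Returns (refractive index, physical thickness in meters) per layer.
        lam0 = C_LIGHT / f_design_hz
        stack = []
        for j in range(1, n_layers + 1):
            n_j = N_SILICON ** (j / (n_layers + 1.0))
            stack.append((n_j, lam0 / (4.0 * n_j)))
        return stack

    for n_j, t_j in quarter_wave_stack(150e9):   # 150 GHz design point
        print(round(n_j, 2), round(t_j * 1e6), "um")

Halving the feature sizes doubles the design frequency, which is what pushes fabrication from dicing saws toward laser machining and DRIE.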
For higher frequency and non-cryogenic applications, conventional antireflection coating approaches can also be used, substituting layers of bulk dielectric materials (or applied thin films of dielectric materials) for the simulated dielectric formed by the metamaterial silicon.

§.§ Scattering and Absorptive Powder

The scattering and absorptive powder layer is useful only for lower frequencies due to the fixed reststrahlen bands and limits on attainable powder size. The particulate size of the powders can be selected to move the scattering peak to higher or lower frequencies as needed, but the absorption bands of the materials are fixed by the material choice. For some applications, suitable materials are available that will enhance the overall light rejection of the filter. For filters where blocking in these bands is not desired, the scattering and absorptive powder can be removed from the design. The epoxy binder can likewise be removed in favor of directly bonding the wafers, for applications where the epoxy would increase the loss and is not needed as a carrier for scattering and absorptive powders.

§ POTENTIAL APPLICATIONS

In addition to forming effective free-space IR blocking filters, this filtering approach offers several novel possibilities for silicon-substrate optical elements. Lower-frequency selective metal elements can be incorporated into these filters to aid in defining the instrument signal band. Anisotropic application of these filtering techniques can form birefringent materials. These filters can be easily and inexpensively integrated into other optical components, such as silicon lenses. Filters with higher cutoff frequencies and better uniformity can also be constructed for effectively blocking higher frequencies (into the mid-IR and beyond) due to the high quality of available silicon substrates (superior material properties and surface finish allow for finer lithography, leading to better high frequency performance, a current limit of plastic-substrate filters <cit.>). The frequency range can be tuned by adjusting the reflective grid parameters, the scattering particle size, and the specific material used for scatterers, and multi-layer reflective structures can be formed to increase the overall reflectivity. Direct wafer bonding can reduce or eliminate the need for lossy epoxy components, and reflective layers can be precisely spaced using standard lithographic techniques to set the layer thicknesses. Collectively, these techniques will further improve control of the filter properties, enabling higher performance and better customization of the filter characteristics to the desired application.

Finally, more complex metamaterial behaviors can be added through more complicated lithographic features and machined subwavelength features. This enables a wide array of novel optical characteristics with potentially broad consequences for future imaging systems.

§ ACKNOWLEDGMENTS

This work was supported by a NASA Office of the Chief Technologist's Space Technology Research Fellowship #NNX12AM32H. Lithography was performed using the Lurie Nanofabrication Facility at the University of Michigan. JJM was supported by DE-SC0015799 for this work.

§ REFERENCES

Ulrich:filters R. Ulrich, Effective low-pass filters for far infrared frequencies, Infrared Physics (1967).
Ulrich:grids R. Ulrich, Far-infrared properties of metallic mesh and its complementary structure, Infrared Physics (1967).
ade:meshfilters P. A. R. Ade, G. Pisano, C. Tucker, and S. Weaver, A review of metal mesh filters, Proc.
SPIE 6275, 62750U–62750U–15 (2006).
Inoue:14 Y. Inoue, T. Matsumura, M. Hazumi, A. T. Lee, T. Okamura, A. Suzuki, T. Tomaru, and H. Yamaguchi, Cryogenic infrared filter made of alumina for use at millimeter wavelength, Appl. Opt. 53, 1727–1733 (2014).
Bock95 J. J. Bock and A. E. Lange, Performance of a low-pass filter for far-infrared wavelengths, Appl. Opt. 34, 7254–7257 (1995).
Manley:powder T. R. Manley and D. A. Williams, Scattering filters in the far infrared, Spectrochimica Acta (1965).
multilayer-reflect J. Shao and J. A. Dobrowolski, Multilayer interference filters for the far-infrared and submillimeter regions, Appl. Opt. (1965).
stierwalt D. L. Stierwalt, Low temperature transmittance of materials for the infrared, Proc. SPIE (1975).
Datta:13 R. Datta, C. D. Munson, M. D. Niemack, J. J. McMahon, J. Britton, E. J. Wollack, J. Beall, M. J. Devlin, J. Fowler, P. Gallardo, J. Hubmayr, K. Irwin, L. Newburgh, J. P. Nibarger, L. Page, M. A. Quijada, B. L. Schmitt, S. T. Staggs, R. Thornton, and L. Zhang, Large-aperture wide-bandwidth antireflection-coated silicon lenses for millimeter wavelengths, Appl. Opt. 52, 8747–8758 (2013).
Ansys:HFSS ANSYS, HFSS website: http://www.ansys.com/products/electronics/ansys-hfss.
Fermi:mcqs E. Fermi, Molecules, Crystals, and Quantum Statistics (W. A. Benjamin, Inc., 1966).
YAMADA:62 Y. Yamada, A. Mitsuishi, and H. Yoshinaga, Transmission filters in the far-infrared region, J. Opt. Soc. Am. 52, 17–17 (1962).
Robinson:FIR L. C. Robinson, Physical Principles of Far-Infrared Radiation, vol. 10 of Methods of Experimental Physics (Academic Press, 1973).
EffectiveMedium G. A. Niklasson, C. G. Granqvist, and O. Hunderi, Effective medium models for the optical properties of inhomogeneous materials, Applied Optics 20 (1981).
Dobrov:sapphire E. R. Dobrovinskaya, L. A. Lytvynov, and V. Pishchik, Sapphire: Material, Manufacturing, Applications (Springer Science + Business Media, 2009).
Hadni:65 A. Hadni, J. Claudel, X. Gerbaux, G. Morlot, and J.-M. Munier, Sur le comportement différent des cristaux et des verres dans l'absorption de l'infrarouge lointain (40–1500 μ) à la température de l'hélium liquide, Appl. Opt. 4, 487–494 (1965).
glassbrenner:si C. J. Glassbrenner and G. A. Slack, Thermal conductivity of silicon and germanium from 3°K to the melting point, Phys. Rev. (1964).
Datta:ACTPol R. Datta, J. Austermann, J. Beall, D. Becker, K. Coughlin, S. Duff, P. Gallardo, E. Grace, M. Hasselfield, S. Henderson, G. Hilton, S. Ho, J. Hubmayr, B. Koopman, J. Lanen, D. Li, J. McMahon, C. Munson, F. Nati, M. Niemack, L. Page, C. Pappas, M. Salatino, B. Schmitt, A. Schillaci, S. Simon, S. Staggs, J. Stevens, E. Vavagiakis, J. Ward, and E. Wollack, Design and deployment of a multichroic polarimeter array on the Atacama Cosmology Telescope, Journal of Low Temperature Physics (2015).
Choy:effmed T. C. Choy, Effective Medium Theory (Oxford: Clarendon Press, 1999).
palik-silicon D. F. Edwards, Handbook of Optical Constants of Solids (Elsevier, 1997).
thzsilicon P. H. Bolivar et al., Measurement of the dielectric constant and loss tangent of high dielectric-constant materials at terahertz frequencies, IEEE Transactions on Microwave Theory and Techniques (2003).
mmsilicon M. N. Afsar and X. Li, Millimeter wave complex refractive index, complex dielectric permittivity and loss tangent of high purity and compensated silicon, International Journal of Infrared and Millimeter Waves (1994).
http://arxiv.org/abs/1702.08454v1
{ "authors": [ "C. D. Munson", "S. K. Choi", "K. P. Coughlin", "J. J. McMahon", "K. H. Miller", "L. A. Page", "E. J. Wollack" ], "categories": [ "astro-ph.IM" ], "primary_category": "astro-ph.IM", "published": "20170227163616", "title": "Composite Reflective/Absorptive IR-Blocking Filters Embedded in Metamaterial Antireflection Coated Silicon" }
Seeing What Is Not There: Learning Context to Determine Where Objects Are Missing
=================================================================================

Jin Sun, David W. Jacobs
Department of Computer Science, University of Maryland
{jinsun,djacobs}@cs.umd.edu
December 30, 2023

[Figure: When curb ramps (green rectangle) are missing from a segment of sidewalks in an intersection (orange rectangle), people with mobility impairments are unable to cross the street. We propose an approach to determine where objects are missing by learning a context model so that it can be combined with object detection results.]

Most of computer vision focuses on what is in an image. We propose to train a standalone object-centric context representation to perform the opposite task: seeing what is not there. Given an image, our context model can predict where objects should exist, even when no object instances are present. Combined with object detection results, we can perform a novel vision task: finding where objects are missing in an image. Our model is based on a convolutional neural network structure. With a specially designed training strategy, the model learns to ignore objects and focus on context only. It is fully convolutional and thus highly efficient. Experiments show the effectiveness of the proposed approach in one important accessibility task: finding city street regions where curb ramps are missing, which could help millions of people with mobility disabilities.

Introduction

Most fundamental computer vision tasks, e.g., image classification and object detection, focus on seeing what is there: for example, is there a curb ramp in this image, and if yes, where is it? With the help of deep neural network models, computational approaches to such tasks are catching up to human performance in more and more benchmarks. However, humans can easily outperform algorithms in the task of inferring objects that are 'not there': for example, is there a curb ramp in this image, and if no, where could it be?

We are interested in finding where objects are missing in an image: an object of interest is not there, even though the environment suggests it should be. From a computational perspective, an object can be defined as missing in an image region when: 1) an object detector finds nothing; 2) a predictor of the object's typical environment, i.e. context, indicates a high probability of its existence. Given an image, we want to detect all such regions efficiently. We summarize the relationship between an object detector and its context model in Table <ref>.

While there are many existing works on utilizing context in object detection (Section <ref>), they mainly focus on improving performance on finding typical objects, with contextual and object information entangled. In this work we propose to train a standalone object-centric context representation to find missing objects. By looking at the reverse conditions, the exact same method can be adapted to find out-of-context objects too.

One practical motivation for finding missing objects comes from the street view curb ramp detection problem (Figure <ref>). The task is to label curb ramps in a city's intersections so that people with mobility impairments can plan their route with confidence. Although existing work <cit.> shows good performance in detecting constructed curb ramps, it cannot detect missing curb ramps.
Knowing this information is highly valuable: people with disabilities can assess the accessibility of an area; navigation algorithms can calculate better routes for pedestrians; the government can plan future renovations accordingly. This is a very expensive and time consuming task for human labelers, which is partly why such information is missing from public databases. Therefore, we are interested in developing an automatic algorithm that is effective and efficient. It can be used to scan a whole city for regions with missing curb ramps. In this scenario, the number of true missing curb ramp regions found (recall) is more important than precision, because it is much cheaper to ask humans to verify algorithm results than to label images from scratch. Moreover, even if the algorithm reports one true missing curb ramp region but mistakenly ignores three others in an image, it is still valuable as a preprocessing step. With the missing curb ramp region data, the government can prioritize intersections in a city for sending physical auditors in a more efficient way.

We believe the key to tackling this problem is to learn a model that focuses on context only and works efficiently just like an object detector: it scans each image and generates a probability heat map in which each pixel represents the probability that an object exists, even when no object is in sight. One big advantage of the context and object decomposition is that we do not need abnormal object labels (missing/out-of-context) for training. A standalone context model can be learned from typical objects and later used for finding abnormal objects. This greatly simplifies training: normal objects are abundant and much easier to collect and label than abnormal objects. In this paper, we propose such a model based on convolutional neural networks and a novel training strategy to learn a standalone context representation of a target object.

We start by introducing a base network in Section <ref>. It takes input images with explicit object masks and learns useful context from the remaining areas of the images. Because of the limitations discussed in Section <ref>, we then propose a fully convolutional version of the network that learns an implicit object mask, such that it ignores objects in an image and focuses purely on context. It does not require object masks at test time. Finally, Section <ref> describes the procedure for using the context model to find regions with missing objects.

The contributions of this work are as follows. First, we propose a method to learn an object-centric context representation by learning from object instances with masks. Second, we propose a training strategy to force the network to ignore objects and learn an implicit mask. The model is fully convolutional, which also speeds up probability heat map generation significantly. Finally, we present promising results on the missing curb ramp detection problem in street view images, and a preliminary result on finding out-of-context faces.

Related Work

Context in Object Recognition. Cognitive science studies have shown a large body of evidence that contextual information affects human visual search and recognition of objects <cit.>. In computer vision, it has also recently become a well accepted idea that context helps in object recognition algorithms <cit.>. Usually, context is represented as the semantic labels around an object.
<cit.> uses a Conditional Random Field to model contextual relations between objects' semantic labels to post-process object recognition results. <cit.> builds a deformable part model that incorporates context labels around an object as 'parts'. Because of the coupling between context and object information, these methods are unsuitable for detecting missing object regions. Torralba et al. proposed the Context Challenge <cit.>, which consists of detecting an object using exclusively contextual information. They take the approach of learning the relation between global scene statistical features and object scale and position. Visual Memex <cit.> is a model that can either retrieve exemplar object instances or predict the semantic identity of a hidden region in an image. It uses hand-crafted features and models context as inter-category relations. Our approach can be seen as a general approach that attempts to address this challenge, without the need for designing hand-crafted features or preset object classes.

Finding Missing Objects. Grabner et al. proposed to use the General Hough Transform to find objects that are missing in some frames during object tracking <cit.>. The idea is to estimate the position of a target object from surrounding objects with coupled motions.

Computer Vision with Masked Images. Recently Pathak et al. <cit.> proposed to learn a convolutional neural network context encoder for image inpainting. Both their work and ours train convolutional neural networks with masked images, but the purpose is very different: they learn a generative model to inpaint the mask, while we learn a discriminative model to infer what is inside the mask. Also, our work is capable of using a much more efficient fully convolutional structure.

Accessibility Task. With massive online resources such as the Google Street View service, many computer algorithms are designed to help people with disabilities and improve their quality of life. CrossingGuard <cit.> is a system designed to help visually impaired pedestrians navigate across intersections with help from Amazon Mechanical Turk. Tohme <cit.> is a semi-automated system that combines crowdsourcing and computer vision to collect existing curb ramp positions in city intersections using GSV images. It uses the Deformable Part Models <cit.> as a curb ramp detector and asks Mechanical Turkers to verify the results. They provide a street view curb ramp dataset with 1086 city intersection images, which we use in our experiment.

Learning Context from Explicit Object Masks

In this section, we introduce the base version of the proposed context learning algorithm. If 'context' is considered to be anything that surrounds an object except for the object itself, this model is learning context literally: any target object instances in training images are masked out. Here we assume an object's visual extent is fully represented by its bounding box.

We train this context model in a binary image classification setting. Positive samples are collected so that each image has an object at its center, with a black mask (value equal to zero after preprocessing) covering the object's full extent. The ratio between the whole image width and the object's bounding box width is about 4.0, with the purpose of including a large contextual area. Negative samples are random crops with a similar black mask at center.
The position of the negative crops is chosen so that the masked region does not cover any groundtruth labeled object with more than a Jaccard index [Defined as the intersection-over-union ratio of two rectangles.] of 0.2. When there are multiple object instances in the image, we only mask out one object at a time for positive samples. This is because the existence of other object instances could be useful context: for example, curb ramps often appear in pairs.

To prevent the model from trivially learning the particular mask dimension, we force the negative samples to have a similar distribution of mask dimensions as the positive samples. The sampling strategy is to interleave the positive samples and negative samples, and use the previous positive sample's mask dimension in the next negative sample.

We train a convolutional neural network model Q. The network consists of four convolutional layers with pooling and dropout, and two fully connected layers. Its structure is summarized in Table <ref>. Cross entropy loss (Eq. <ref>) is used as the classification loss:

ℒ_c = -Q_y(I_m) + log ∑_{y'} e^{Q_{y'}(I_m)},

where y ∈ {1,2} is the groundtruth label for a masked image I_m (1 for positive, 2 for negative), Q(I_m) is a 2x1 vector representing the output from the network Q, and Q_y(I_m) represents its y-th element.

During test time, a sliding window approach is used to generate the probability heat map for a new image, so that each pixel has a context score of how likely it is to contain an object. At each position, a fixed size (224x224 in our implementation) image patch is cropped with the center region masked out and fed into the base network. The size of the center mask region is chosen based on the statistics of object bounding boxes from the training set.

A Fully Convolutional Model that Learns Implicit Masks

There are several issues with a network trained with masked images. First, the network tends to learn artifacts. For example, <cit.> reports that training with rectangular masks makes the network learn "low level image features that latch onto the boundary of the mask". They propose to use random mask shapes to prevent this issue. However, we cannot use the same strategy for this task because our mask is strictly tied to the visual extent of an object. Second, during test time, the network expects to see every input with an explicit mask. The efficiency of this operation becomes an issue when we have to evaluate the network at all possible positions and scales to generate a heat map.

There are standard procedures to convert a convolutional neural network with fully connected layers into a fully convolutional one <cit.>, so that the evaluation is much more efficient for images of arbitrary size. However, in our case the situation is complicated. During training, the network always sees input images with all zeros at the center, so the weights of neurons with receptive fields in this region can be arbitrary, because no gradients are updated. If we apply the converted fully convolutional network to unmasked images, outputs from those neurons can affect the network's output arbitrarily.

The question is then: can we train a network so that it is fully convolutional and learns context by ignoring the masked region 'by heart'? The answer is yes, and we now propose a training strategy to make a network learn the implicit mask. The intuition is that we want the network to output similar results regardless of whether the image is masked or not.
By enforcing this objective, the network should learn to find visual features that are shared in both masked and raw images: i.e., from the unmasked regions. Formally, we want to minimize a distance loss in addition to the classification loss used in the base network:

ℒ_d = ‖ Q(I_m) - Q(I) ‖_p,

where Q(I_m) is the output vector from the network Q with the masked image I_m as input, Q(I) is the output vector from Q with the unmasked raw image I as input, and ‖·‖_p represents the L_p-norm. Effectively, we have two shared-weight networks that are fed with masked and raw image pairs (Figure <ref>). The network is a fully convolutional version of the base network (Table <ref>). One stream of the network computation takes the masked image as input and outputs Q(I_m). In parallel, the other stream takes the unmasked raw image as input and outputs Q(I). The classification loss ℒ_c is calculated from Q(I_m) alone, while the distance loss ℒ_d is calculated from Q(I_m) and Q(I). This structure is known as a Siamese network <cit.>, so we call our network the Siamese-trained Fully convolutional Context (SFC) network. Following <cit.>, we choose the L_1 norm for the distance loss ℒ_d.

We expect the SFC network to learn an implicit object mask by assigning zero weights to neurons whose receptive field falls onto the center object mask region of an input image. During test time, unlike the base network, we don't have to manually set the mask size: the SFC network has encoded this information in the convolutional filters' weights. Finally, the overall training objective is defined as a weighted sum of the two losses:

ℒ = λ ℒ_d + ℒ_c,

where λ = 0.5 in our implementation.

The benefits of this training strategy are threefold:

1) Because the SFC learns to ignore the object mask region, we can directly apply it to new unmasked images of arbitrary size, so it is highly efficient for generating a dense probability map. Figure <ref> shows a comparison between heat maps generated by the base network and the SFC network. An image of size 1024x2048 takes about 5 minutes to generate a heat map with the base network, while the SFC network takes less than 4 seconds to generate a map with higher spatial resolution.

2) The SFC network is less prone to artifacts. It is possible for the base network to learn artifact features along the boundary of masks. Because such features are not present in unmasked images, the SFC network ignores them to reduce the training distance loss ℒ_d.

3) During training, we can perform hard negative mining efficiently. Between training epochs, we can apply the SFC network to all training images to generate heat maps and find high-scoring false positive regions. Because of the efficiency of fully convolutional networks, this step can be easily included in training. Section <ref> shows that hard negative mining indeed improves the network performance by a large margin.

Finding Missing Object Regions Pipeline

With a trained standalone context network (base network or SFC network), we summarize the procedure for finding missing object regions in the following steps.

1) Generate a context heat map using the context network Q for a test image. The context heat map shows where an object should occur in the image.

2) Generate object detection results using an object detector. Convert the object detection bounding boxes into a binary map by assigning 0 to the detected box regions and 1 otherwise. This binary map shows where objects have already been found in the image.
We want to find the regions where no objects are found.

3) Take an element-wise AND operation between the context heat map and the binary map. The resulting map shows the places in which an object should occur according to context but in which the detector has found none.

4) Retrieve the highly scored regions (above a preset threshold) according to the resulting map, and crop them from the original image. These are the regions where objects are missing.

Experiments

In this section, we first examine the characteristics of the base network and the SFC network in Subsection <ref>. Then we evaluate their effectiveness. With the decomposition of context and object information, we study two unique tasks that can be efficiently performed with a standalone context model. Subsection <ref> shows experimental results on detecting missing curb ramp regions in street view images. Subsection <ref> shows preliminary results on detecting faces that are out of context in unconstrained images.

Characteristics of the Trained Model

As a validation study, we first check the sensitivity of the base and SFC networks with regard to small changes in the input image. All experiments are conducted on the curb ramp street view dataset. The desirable model should have small response variation with respect to the center region of the input image, where the mask was put during training. For each image, we change one pixel value at a time by adding a small amount of noise. The L_2 distance between the network's outputs before and after the disturbance is recorded for each pixel. In the end we obtain a map that shows which regions in the image have a large impact on the network's output. This can be seen as an estimate of the first order derivative of the network with respect to its input. Figure <ref> shows the result of this experiment with a comparison between the base network and the SFC network. The result is summed over 20 different image samples.

From the result it is clear that the SFC network has low sensitivity in the center region of the input image. This is most likely due to the network learning to mute neurons whose receptive field falls on the center region of the input image. On the other hand, the base network shows no such preference. The blank region in the SFC's sensitivity map can be seen as a visualization of an approximation to the learned implicit region mask.

Next we check the distance loss ℒ_d of the base network and the SFC network on test data. Following the same set of training hyper-parameters and setup (learning rate, training epochs) to train the two networks, the mean ℒ_d loss is summarized in Table <ref>. It is clear that the SFC network is much more consistent in producing similar outputs regardless of the object masks.

The above experiments have demonstrated that the SFC network works just as we expected: 1) it learns an implicit mask, so it is less sensitive to changes in the center region; 2) the useful features that it learns for the classification task come mainly from the unmasked regions.

Finding Missing Curb Ramp Regions

Setup. We want to find missing curb ramps in the street view curb ramps dataset <cit.>. The dataset contains 1086 Google Street View panoramas from four cities in North America: Washington DC, Baltimore, Los Angeles and Saskatoon (Canada). Each panorama image has 1024x2048 pixels. It provides bounding box labels for existing curb ramps. On average there are four curb ramps per image.
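As an aside before completing the setup details: the four-step retrieval procedure above reduces to a few lines. The sketch below is schematic — function and variable names are placeholders rather than the authors' code, the context map and detections are assumed to share one pixel grid, and a real implementation would also suppress heavily overlapping crops:

    import numpy as np

    def missing_object_regions(context_map, det_boxes, threshold=0.4, d=400):
        # context_map: HxW array of context scores in [0, 1] (step 1).
        # det_boxes: list of (x0, y0, x1, y1) detector outputs.
        no_object = np.ones_like(context_map)          # step 2: 1 where nothing is detected
        for x0, y0, x1, y1 in det_boxes:
            no_object[y0:y1, x0:x1] = 0.0
        combined = context_map * no_object             # step 3: context AND no detection
        ys, xs = np.where(combined > threshold)        # step 4: high-scoring positions
        return [(x - d // 2, y - d // 2, x + d // 2, y + d // 2)
                for y, x in zip(ys, xs)]               # d x d crops around each hit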
The dataset also contains bounding box labels for missing curb ramp regions. The dataset is split into half training and half testing. Each image is converted to YUV color space and normalized to zero mean and unit standard deviation in all channels. We use the curb ramp detector provided with the dataset, a Deformable Part Model, with default settings.

Training. For each epoch, 5000 samples are generated from the training data, half positive and half negative. Figure <ref> shows several examples. Each sample has a 50% probability of being horizontally flipped for data augmentation. Positive samples contain useful contextual information around the curb ramps. Negative samples are drawn randomly from the remaining areas of the panoramas. To train the SFC network, each sample is prepared in two versions: raw and masked. We resize positive samples such that the object width is close to 55 pixels in a 224-pixel-wide image. Each negative sample uses the same object mask and scale as the last positive sample, to prevent the network from overfitting to the mask dimension distribution.

We use the Keras/Tensorflow software package <cit.> to train the network models. The optimization algorithm is Adadelta with default parameters. Since this is an adaptive learning rate method, there is no need to set a learning rate schedule during training. 20% of the training data is used as a validation set for an early stopping test. We trained a base network and an SFC network using the same hyper-parameters and training setup.

Results. Following the procedure described in Section <ref>, we run the two networks on test images to generate probability heat maps of where curb ramps should be in the image. Each heat map for the base network is generated in a sliding window scheme with a stride of 10 pixels, and with object mask widths of {50, 70, 100} pixels to generate multi-scale maps. The SFC network doesn't need an object mask size, so we resize the input panorama image with scales {0.5, 0.7, 1.0}. The numbers are chosen so that the two networks see similar image pyramids. We use the DPM detector provided with the dataset to generate detection results. For each panorama, we generate a final map that combines the detections and the context map, and retrieve the highly scored regions (above a certain threshold) of size d×d from the raw image. Based on preliminary empirical studies, we set the context threshold to 0.4 throughout the experiment.

We use human verification to evaluate the quality of the reported missing curb ramp regions. For that purpose, we developed a web based interface (Figure <ref>) that displays a gallery of found regions, ranked by their context scores. For each candidate region, the user provides feedback on whether it is truly a region with missing curb ramps. We compare context maps generated by the base and SFC networks with three baseline methods: random scores, a spatial prior map, and a Faster RCNN <cit.> based missing curb ramp detector.

The random-scores baseline assigns uniformly random context scores from [0,1] to all positions in an image. This is a reference baseline showing the performance obtained by chance. A spatial prior map is built using the prior positions of curb ramps in street view panoramas. We use the prior map as a replacement for the context map for comparison. We collect the prior spatial distribution of all curb ramps from the training images. The collected distribution is smoothed with a 30x30 pixel Gaussian kernel with sigma=10.
Figure <ref> shows the spatial prior map used in our experiment. Because most panoramas are at street intersections, there is strong spatial structure consistency across the dataset. We expect this approach to be a reasonable baseline. With missing curb ramp region labels, we can treat this task as a standard object detection problem and directly train a Faster RCNN detector: the positive `object' is a region labeled as missing curb ramps. Note that a Faster RCNN detector is capable of learning context because it's an end-to-end approach: potentially the detector can learn from the whole image to predict locations of missing curb ramp regions. We expect the Faster RCNN detector to be a strong baseline. The verification of the missing curb ramp regions requires domain knowledge. We asked one researcher who has extensive experience with accessibility problems to verify the results using our web interface. Figure <ref> shows the comparison in recall of true missing curb ramp regions versus number of visited regions (Recall@K). The retrieved region size is set to d=400 pixels. 500 regions were retrieved from 543 test images. The result shows that the SFC network with hard negative mining outperforms all other methods. We believe its superiority comes from the highly efficient fully convolutional structure that helps in training and in generating high resolution context maps. The spatial prior map shows reasonable performance, which confirms the spatial bias of curb ramp locations in the dataset. Unlike the spatial prior map, the proposed methods can work well on other datasets that have no such bias. The Faster RCNN detector has significantly lower recall compared with the SFC networks. With more missing curb ramp regions as training data, we expect the Faster RCNN detector to show improved performance; on the other hand, the SFC network does not even need missing curb ramp labels in training. The proposed method learns useful context information from normal curb ramps, which are much easier to collect and label than missing curb ramp regions. Moreover, the SFC network uses detection results from a less advanced curb ramp detector (a DPM model shipped with the dataset): 77% of the false missing curb ramp retrievals are due to inaccurate curb ramp detections. Due to the page limit, we show more qualitative results of regions retrieved by these methods in the supplementary document. Additionally, we investigate the effects of the retrieved region size d on the number of true missing curb ramp regions. Specifically, we vary the cropped region size from 400 pixels in width to 100 pixels. With a smaller region size, it becomes crucial that the region is accurately localized with missing curb ramps at the center. Table <ref> shows that the SFC network is not affected much by the reduced field of view. This is because the regions it finds are very well localized (see Figure <ref>). On the other hand, two baseline methods (random scores and prior maps) perform poorly when the region size becomes small. Note that smaller windows can lead to ambiguities, which can result in `falsely admitted' missing curb ramps due to human verification error. This is reflected in the first row of Table <ref>: from region width 400 to 200, the SFC performance goes up.

Discussion. Among the 543 street view intersections in the test set, we are able to find as many as 27% of the missing curb ramp regions using the proposed method by merely looking at 500 regions. This is an impressive result for the following reasons.
1) The whole process is very efficient (Table <ref>), such that it can be easily deployed to scan new city areas. For example, there are about 2,820 intersections in Manhattan, New York: it will take just a few hours for our system to find missing curb ramps in a region with a population of 1.6 million. 2) Research has shown that curb ramp condition (missing or not) shows high proximity consistency: if one intersection has missing curb ramps, it is highly likely that a nearby intersection has a similar issue. Results from our system can be used as an initial probe to quickly find city areas that need special attention.

Finding Out of Context Faces

The pipeline in Section <ref> for finding missing objects can be adapted to find out of context objects with just a few small modifications: change step 2 by assigning 1 to detected box regions and 0 to other regions; change step 4 to retrieve the lowest-scored regions. Here we show a simple preliminary result of finding out of context faces to demonstrate both the generalization ability of the proposed method to different domains and possible future directions. The task is to find out of context faces in the Wider face dataset <cit.>. Using a procedure similar to the one for finding missing objects and a state-of-the-art face detector <cit.>, we retrieve the top 500 face regions from the validation set that have a high face detector score and a low context score. For evaluation purposes, we define an out of context face as a face without visible support from a body. Figure <ref> shows qualitative results of out of context faces found using the SFC network. We compare the SFC network result with random scoring. Out of 500 regions, the SFC network can find 27 out of context faces while random scoring found 14. While the result is preliminary, it suggests that the proposed method has the potential to be used in many other applications where finding out of context objects is important: for example, visual anomaly detection.

Conclusion

We present an approach to learn a standalone context representation to find missing objects in an image. Our model is based on a convolutional neural network structure, and we propose ways to learn implicit masks so that the network ignores objects and focuses on context only. Experiments show that the proposed approach works effectively and efficiently on finding missing curb ramp regions.

§ ACKNOWLEDGMENTS This work was supported by an NSF grant (IIS-1302338).
http://arxiv.org/abs/1702.07971v1
{ "authors": [ "Jin Sun", "David W. Jacobs" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20170226015638", "title": "Seeing What Is Not There: Learning Context to Determine Where Objects Are Missing" }
Tars: Timeliness-aware Adaptive Replica Selection for Key-Value Stores Wanchun Jiang, Liyuan Fang, Haiming Xie, Xiangqian Zhou, Jianxin Wang, School of Information Science and Engineering, Central South University, Changsha, Hunan, China 410083, Email: jiangwc@csu.edu.cn
================================================================
Fixed-point optimization of deep neural networks plays an important role in hardware-based design and low-power implementations. Many deep neural networks show fairly good performance even with 2- or 3-bit precision when quantized weights are fine-tuned by retraining. We propose an improved fixed-point optimization algorithm that estimates the quantization step size dynamically during the retraining. In addition, a gradual quantization scheme is also tested, which sequentially applies fixed-point optimizations from high to low precision. The experiments are conducted for feed-forward deep neural networks (FFDNNs), convolutional neural networks (CNNs), and recurrent neural networks (RNNs).
Keywords: deep neural networks, recurrent neural networks, fixed-point quantization, step size adaptation

§ INTRODUCTION

Deep neural networks (DNNs) show very high performance in various fields such as speech recognition <cit.> and image classification <cit.>. However, real-time implementation of DNNs usually demands many arithmetic and weight fetch operations. Thus, word-length optimization is needed in embedded applications to reduce the strength of the arithmetic and the size of the weight storage. However, reducing the word length too much tends to degrade the performance. Thus, the development of optimum quantization methods is greatly needed for efficient implementation of neural network algorithms. Direct quantization of deep neural networks usually does not show satisfactory performance with very low precision weights. However, when the quantized weights are optimized by retraining, the fixed-point performance improves dramatically. Even ternary valued weights (+1, 0, and -1) for a DNN have yielded satisfactory performance <cit.>. Recently, several improved fixed-point optimization methods have been developed by employing retraining-based fine-tuning <cit.>. Also, VLSI- and FPGA-based deep neural networks have been implemented using fixed-point weights <cit.>. In this work, an improved retraining algorithm is developed for fixed-point optimization of deep neural networks. The previous works decide the optimum quantization step size based on the distribution of the floating-point weights and freeze the step size during the retraining period <cit.>. The proposed algorithm adaptively determines the step size at the re-quantization step during retraining. Since the weight values change substantially at the beginning of retraining, this approach is especially effective when applied at the initial retraining epochs. In order to change the weight values less abruptly, we also propose and evaluate the gradual quantization method. In this scheme, floating-point weights are converted to, for example, 6-bit weights, which are then converted to 4-bit weights, and so on. We evaluate the proposed schemes on three different networks: feed-forward deep neural networks (FFDNNs), convolutional neural networks (CNNs), and recurrent neural networks (RNNs). The proposed methods yielded better results compared to the previous retrain-based quantization schemes.
The rest of this paper is organized as follows. Section <ref> presents the proposed quantization with step size adaptation during the retraining procedure. The gradual quantization scheme is also explained. Experimental results on FFDNN, CNN, and RNN applications are shown in Section <ref>. Concluding remarks follow in Section <ref>.

§ STEP SIZE ADAPTATION AND GRADUAL QUANTIZATION FOR RETRAINING OF DEEP NEURAL NETWORKS

In this section, we explain the conventional retrain-based fixed-point optimization algorithm and present the adaptive step size retraining and gradual quantization methods.

§.§ Retrain-based fixed-point quantization review

The original retrain-based fixed-point optimization algorithm can be represented briefly as shown in <ref>. Note that conventional algorithms <cit.> do not compute Δ_new at the `weights update' stage. In this figure, after obtaining the floating-point weights by training, the quantization step size, Δ, is determined by minimizing the L2 error between the floating-point and fixed-point weights. For the convenience of arithmetic, uniform quantization is assumed. Two algorithms have been developed for the quantization step size optimization. One is an exhaustive search, which decides the initial quantization step size Δ_initial by considering the weight distribution, and then searches for the best performing step size between Δ_initial/2 and 2·Δ_initial by testing the quantized network on the evaluation set <cit.>. The second approach decides the quantization step size by measuring the mean and the variance of the floating-point weights <cit.>. Then, in the second stage of <ref>, the floating-point weights are rounded to fixed-point values using the determined quantization step size. The third stage is the inference, or forward, stage with the quantized network, w^(q). The error signal is calculated and used for backward propagation. The gradient is then calculated and the weight update is conducted. Note that the floating-point weights, instead of the fixed-point values, are updated because the amount of weight update is usually much smaller than the quantization step size. Then, the fixed-point weight update, yielding w_ij, new^(q), is accomplished by quantizing the updated floating-point weights. Note that determining Δ_new is not performed in the conventional method, and the same quantization step size is used at every iteration.

§.§ Step-size adaptation during retraining

As described in Section <ref>, the conventional method freezes the step size during the retraining. However, in many cases, the weight values change substantially during retraining. Note that the amount of change decreases as the retraining progresses. Thus, adjusting the quantization step size during retraining is advantageous for improving the performance; the need for step size adaptation is greatest at the beginning of retraining. The proposed scheme adds the determination of Δ_new at the weight update stage of <ref>. We no longer perform an `exhaustive search' but instead update the quantization step size during retraining using the L2 error minimization between the floating-point and fixed-point weights. We consider two different quantization step size update timings. The first one is `epoch-level update', and the other is `1 epoch update & fix'. The `epoch-level update' changes the step size at every epoch. The `1 epoch update & fix' updates the step size only during one or two epochs and freezes it for the remaining epochs.
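The following NumPy sketch illustrates the retraining loop with the `epoch-level update' timing. It is our illustration, not the authors' code: the grid search in best_step_size() and the train_epoch() helper (one SGD epoch that returns updated floating-point weights) are assumptions.

    import numpy as np

    def quantize(w, delta, bits):
        """Uniform symmetric quantization with step size delta."""
        n = 2 ** (bits - 1) - 1                  # bits=2 gives the ternary levels -1, 0, +1
        return delta * np.clip(np.round(w / delta), -n, n)

    def best_step_size(w, bits, n_grid=100):
        """Grid-search the delta minimizing the L2 error ||w - Q(w, delta)||^2."""
        grid = np.linspace(0.01, 1.0, n_grid) * np.abs(w).max()
        errs = [np.sum((w - quantize(w, d, bits)) ** 2) for d in grid]
        return grid[int(np.argmin(errs))]

    def retrain_quantized(w_float, bits, epochs, train_epoch):
        """'Epoch-level update': re-estimate delta from the float weights each epoch."""
        delta = best_step_size(w_float, bits)
        for _ in range(epochs):
            w_q = quantize(w_float, delta, bits)   # forward/backward passes use w_q
            w_float = train_epoch(w_q, w_float)    # gradients update the float shadow weights
            delta = best_step_size(w_float, bits)  # the proposed step size adaptation
        return w_float, delta

The fixed-point network is then quantize(w_float, delta, bits); freezing delta after the first loop iteration gives the `1 epoch update & fix' variant.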
In our empirical evaluation, the first scheme is good for FFDNNs, but the second one shows better results for CNNs and RNNs. The specific results will be given in Section <ref>.

§.§ Gradual quantization scheme

We also propose another step size adaptation approach, which is similar to curriculum learning. Curriculum learning is a training strategy that gradually moves the goal from an easy level to a more complex one <cit.>. One of the important points in curriculum learning is how to organize the tasks from easy to complex ones. We consider fixed-point optimization with a small number of bits to be a more difficult problem than that with a large one. In the proposed scheme, we begin fixed-point optimization with a fairly high precision, such as 6 bits, and then keep lowering the word length by one bit, with retraining at each precision; see the sketch below. At each retraining process with a given precision, we also apply the proposed quantization step size adaptation scheme. The experiments are conducted for FFDNNs.
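A sketch of the gradual scheme on top of the adaptive retraining above, reusing the hypothetical helpers from the previous listing (the bit schedule and the per-stage epoch count are illustrative assumptions):

    def gradual_quantization(w_float, train_epoch, bit_schedule=(6, 5, 4, 3, 2),
                             epochs_per_stage=10):
        """Curriculum-style fixed-point optimization from high to low precision.

        Each stage retrains with the adaptive step size and hands its float
        shadow weights to the next, one-bit-harder stage.
        """
        for bits in bit_schedule:
            w_float, delta = retrain_quantized(w_float, bits, epochs_per_stage,
                                               train_epoch)
        return quantize(w_float, delta, bit_schedule[-1]), delta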
§ EXPERIMENTAL RESULTS

The proposed step size adaptation is evaluated for three applications. We employ FFDNNs for phoneme recognition, CNNs for house number recognition, and RNNs for language modeling. To analyze the effect of step size adaptation, we change the size of each network and its word length.

§.§ Phoneme recognition using feed-forward deep neural networks

The FFDNN is trained with the TIMIT corpus <cit.>, and the detailed experimental conditions for the data preprocessing are the same as in <cit.>. The network input consists of 11 consecutive frames. The output layer supports 61 labels, and the labels are merged into 39 classes for the final evaluation. For performance evaluation, the number of units in each layer increases from 64 to 1024. We train the floating-point networks using stochastic gradient descent (SGD) with Nesterov momentum <cit.>. The learning rate decreases from 2e-3 to 3.90625e-6 by a factor of 2 when the development set does not show improvement for 4 consecutive evaluations. For fixed-point network training, all other conditions are the same as in the floating-point case, but the initial learning rate is 5e-4. The results of fixed-point optimization for FFDNNs with and without the step size adaptation are reported in <ref>. The experiments also show the results with batch normalization (BN) <cit.>. The step size is updated using the `epoch-level update' until the end of the retraining. <ref> shows that the floating-point network performance saturates at 512 units per layer when BN is applied, and at 256 units when BN is not used. When the unit size in each layer is 512 or smaller, the proposed algorithm yields better performance in both cases. For example, if the network with 512 units per layer is quantized with 2 bits without BN, the differences between the floating-point and the fixed-point networks are 1.82% and 1% for the `conventional' and `adaptive' schemes, respectively. In addition, the phoneme error rate of the 3-bit network optimized with the `adaptive' scheme (29.83%) is lower than that of the 4-bit quantized network with the `conventional' scheme (29.95%). BN improves the performance of both floating-point and fixed-point networks. Applying the `adaptive' method improves the performance further. For example, if the layer unit size is 128 and 2-bit quantization is used, BN brings a performance gain of 3.42% when the `adaptive' scheme is used. Therefore, the proposed `adaptive' method can be used efficiently with BN. When the unit size is large enough, the quantization scheme does not affect the performance much because a larger network has better resiliency to quantization <cit.>. Even the 4-bit quantized network with 512 units per layer without BN shows performance almost comparable to that of the floating-point network with 1024 units. When the network is trained with BN, it shows a similar trend. <ref> shows the step size Δ of the proposed adaptive scheme as the retraining progresses. Note that the step size is renewed at each epoch during retraining. As shown in this figure, the step size of the last layer varies considerably, while that of the first layer is almost constant. Step size adaptation is thus most needed for the last layer. We also evaluate the performance of the gradual quantization scheme. The results are reported in <ref>. The floating-point results show a 29.61% error rate on the test set. The 6-bit word length shows slightly better accuracy than the floating point. Thus, we define the easiest task as the 6-bit quantization. In <ref>, the `gradual' scheme yields better performance than the `conventional' strategy, but shows worse or similar results compared to the `adaptive' quantization. The combined strategy of `adaptive' and `gradual' shows slightly better accuracy than the `adaptive' strategy in 4- and 3-bit quantization, but it is worse than the `adaptive' scheme in 2-bit quantization. Since there is no consistent performance difference between the `adaptive' and `adaptive & gradual' schemes, we only employ the `adaptive' scheme for the CNN and RNN experiments.

§.§ Image classification using convolutional neural networks

Image classification experiments are performed on the SVHN dataset <cit.>. The dataset includes 600,000 labeled 32x32 three-channel images of real-world house numbers. For the data preprocessing, we employ the same method as <cit.>. The output layer has ten units, which represent the digits 0 to 9. For the evaluation of the proposed scheme, we employ three different structures. We name the networks `L', `C', and `V'; they have 60k, 84k, and 435k trainable parameters, respectively. The `L' network is Lenet5 <cit.>, the `C' network is from <cit.>, and the `V' network is constructed in VGG style following <cit.>. We train the floating-point networks using SGD with Nesterov momentum. The learning rate is decreased from 2e-2 to 3.125e-4 by a factor of 2 when the development set does not show improvement for 4 consecutive evaluations. For the fixed-point network training, the initial learning rate was 5e-4. The effects of step size adaptation in the CNNs are examined in <ref>. The step size is updated using the `1 epoch update & fix' strategy. Our algorithm works well for the `L' and `V' networks regardless of the weight precision, 2, 3, or 4 bits. However, the `C' networks with the conventional retraining show a better result when the weight precision is 4 bits. Overall, the proposed method yields improved performance.

§.§ Language modeling using recurrent neural networks

Character-level language modeling predicts the next character and is used for speech recognition and text generation. Since the input and output layers consider only alphabets, the input and output complexity is much lower than for word-level language models. We adopt the English Wikipedia dataset for training the character-level language model. The dataset contains 100 MB of English Wikipedia text.
The input and output layers are composed of 256 units for one-hot encoded ASCII code. The RNN consists of three Long Short-Term Memory (LSTM) layers with numbers of memory cells ranging from 64 to 256 <cit.>. We train the RNNs using AdaDelta-based SGD with 64 parallel input streams. The networks are unrolled 256 times, and the weight update is performed every 128 forward steps. The learning rate starts from 5e-4 and decreases to 5e-8. For the step size adaptation, the `1 epoch update & fix' strategy is employed. The fixed-point optimization results are reported in <ref>. As with our previous FFDNN and CNN results, they show much improved performance for low-precision weights and small networks.

§ CONCLUDING REMARKS

We have developed improved fixed-point weight optimization methods for deep neural networks. The first one adaptively determines the quantization step size by measuring the weight distribution during the retraining procedure. The second one is a curriculum-style fixed-point optimization technique, which conducts fixed-point optimization gradually from high to low precision. The proposed work yields better quantization results in FFDNN, CNN, and RNN experiments. In particular, the effectiveness of the proposed techniques increases when the number of quantization levels is small and the network size is not large enough.
http://arxiv.org/abs/1702.08171v1
{ "authors": [ "Sungho Shin", "Yoonho Boo", "Wonyong Sung" ], "categories": [ "cs.LG" ], "primary_category": "cs.LG", "published": "20170227080058", "title": "Fixed-point optimization of deep neural networks with adaptive step size retraining" }
Lin Liu, Key Laboratory of Low-Dimensional Quantum Structures and Quantum Control of Ministry of Education, and Department of Physics, Hunan Normal University, Changsha 410081, China; North China Institute of Aerospace Engineering, Langfang 065000, China; liulin@hunnu.edu.cn
T-bulge single photon quantum router with three level system
Lin Liu
Received: date / Accepted: date
================================================================
This paper considers the transmission characteristics of single photons incident from an infinite coupled-resonator waveguide (CRW) or from a semi-infinite CRW, respectively. The Nth cavity of a semi-infinite CRW intersecting with an infinite CRW, together with a cascade three-level system (CTLS), forms a T-bulge structure. Due to the symmetry-breaking boundary, the maximum transfer rate reaches unity when light is incident from the semi-infinite CRW, while it reaches 0.5 in the infinite CRW case. The position of the intersection has an effect on the location of the extreme points of the transmission coefficients as functions of the coupling strengths. 4D figures are used to illustrate the transmission characteristics, which makes them easier to visualize.

§ INTRODUCTION

Quantum networks, composed of many nodes and channels, have many applications in quantum communication. Quantum routers, which act as key components of quantum networks, can be realized through the optical interactions of single photons and atoms, allowing the distribution of entanglement across the network <cit.>. Measurement-device-independent quantum keys can be distributed over a 404 km optical fiber <cit.>. The coupling strength can be increased at frequencies near resonances in free space, and the employment of a cavity can enhance the coupling strength further <cit.>. Zhou et al. have done substantial work on quantum routers, utilising a controllable two-level system as a quantum switch for the coherent transport of a single photon <cit.>, a cyclic three-level system embedded in the junction of two infinite CRWs to form an X-type quantum router <cit.>, and a modification of the cyclic type into a new type with an inversion center <cit.>. They then altered both the atom and the structure of the CRWs, placing a two-level system (TLS) in a T-shaped waveguide made of an infinite CRW and a semi-infinite CRW <cit.>. Here, we change the setup to a CTLS coupled to two CRWs, with the Nth cavity of a semi-infinite CRW intersecting an infinite CRW, and examine what happens to the transmission characteristics of photons incident from each CRW. This paper is organized as follows: In Sec. 2, the model used in this paper is introduced. In Secs. 3 and 4, the single-photon scattering process is studied for waves incident from the different CRWs. Finally, we conclude with a brief summary of the results.

§ THE MODEL

As shown in Fig. 1, the coupled resonators on the blue (red) line constitute the infinite (semi-infinite) CRW, which is called CRW-a (-b) hereafter. The Nth cavity of the semi-infinite CRW intersects the infinite CRW. The CTLS is situated at the node, i.e., the crossing point of the two CRWs. The Hamiltonian of the CTLS is

H_T = ω_e |e⟩⟨e| + ω_f |f⟩⟨f|,

where ω_f (ω_e) is the CTLS transition frequency of |g⟩ ↔ |f⟩ (|f⟩ ↔ |e⟩). A single photon hops among neighboring cavities, which leads to photon propagation along the single-mode cavities of the waveguides.
Using the tight-binding model, the two CRWs are described by the Hamiltonian

H_C = ∑_{u=-∞}^{+∞} [ω_a a_u^† a_u - ξ_a (a_u^† a_{u+1} + a_{u+1}^† a_u)] + ∑_{v=1}^{+∞} [ω_b b_v^† b_v - ξ_b (b_v^† b_{v+1} + b_{v+1}^† b_v)] + ω_c c^† c.

The Hamiltonian of the interaction between the atom and the two CRWs is

H_I = |f⟩⟨g| (g_a a_0 + g_b b_N) + |g⟩⟨f| (g_a a_0^† + g_b b_N^†) + |e⟩⟨f| g_c c + |f⟩⟨e| g_c c^†.

The total Hamiltonian H consists of the two CRWs H_C, the CTLS H_T, and the interaction between the atom and the two CRWs H_I:

H = H_C + H_T + H_I.

In the single-excitation subspace, the eigenstate of the full Hamiltonian is

|E⟩ = (∑_{u=-∞}^{+∞} U_u^a a_u^† + ∑_{v=1}^{+∞} U_v^b b_v^†) |g, 0_a, 0_b, 1_c⟩ + U_f |f, 0_a, 0_b, 1_c⟩ + U_e |e, 0_a, 0_b, 0_c⟩,

where 0_a, 1_a, 0_b, 1_b, 0_c, 1_c denote the Fock states of CRW-a, CRW-b, and the c mode. The Schrödinger equation H|E⟩ = E|E⟩ gives rise to a series of coupled stationary equations for all amplitudes, which are the discrete scattering equations for single-photon propagation in the T-shaped waveguide:

(E - ω_a - ω_c) U_u^a = -ξ_a (U_{u-1}^a + U_{u+1}^a) + δ_{u,0} (V_a U_0^a + G U_N^b),
(E - ω_b - ω_c) U_v^b = -ξ_b (U_{v-1}^b + U_{v+1}^b) + δ_{v,N} (V_b U_N^b + G U_0^a),

where

G ≡ g_a g_b δ_e / [δ_e (E - ω_f - ω_c) - g_c^2],  V_j ≡ g_j^2 δ_e / [δ_e (E - ω_f - ω_c) - g_c^2],  j = a, b,

and the detuning between the incident photon frequency and the atomic transition frequency is δ_e ≡ E - ω_e.

§.§ Single photons incident from the infinite CRW-a

When a single photon is incident from CRW-a, CRW-a contains the incident, reflected, and transmitted light, and transferred light may exist in CRW-b. The CTLS absorbs the photon with wavenumber k incident along the u axis onto the T-bulge shaped waveguide, which transfers the CTLS from its ground state to its excited state. Since the excited state is coupled to the continua of states, the excited CTLS will emit a photon spontaneously into the propagating states of either CRW-a or CRW-b. Owing to the symmetry-breaking boundary, the wave function between the first site and the junction of the two CRWs has a standing-wave form depending on N:

U_u^a = e^{i k_a u} + r e^{-i k_a u} for u < 0,  U_u^a = t e^{i k_a u} for u > 0;
U_v^b = t^b e^{i k_b v} for v > N,  U_v^b = A sin(k_b v) for v = 1, 2, 3, ..., N.

The dispersion relations in the two CRWs are

E = ω_c + ω_d - 2 ξ_d cos k_d,  d = a, b.

Using the continuity conditions, the transmission, reflection, and transfer amplitudes can be obtained as below, with group velocities v_d = 2 ξ_d sin k_d, d = a, b. We nondimensionalize the propagating amplitudes (assuming g_a, g_b, v_a, δ_e ≠ 0); in this form, the effect of N is clearer. With the common denominator

D = g_a v_b/(g_b v_a) - (2 g_b/g_a) i e^{i k_b N} sin(k_b N) + i [g_c^2/δ_e - (E - ω_f - ω_c)] v_b/(g_a g_b),

the amplitudes read

t^b = 2 i sin(k_b N) / D,
t = {-i [g_c^2/δ_e - (E - ω_f - ω_c)] v_a/(g_a g_b) - (2 g_b/g_a) i e^{i k_b N} sin(k_b N)} / D,
r = -[g_a v_b/(g_b v_a)] / D.

The transmission coefficients are the modulus squares of the transmission amplitudes, T_b^a = |t^b|^2, T^a = |t|^2, R^a = |r|^2, and the total probability is conserved (T_b^a + T^a + R^a = 1). Fig. 2 shows how the propagation coefficients change with the coupling strengths g_a, g_b, g_c. A cuboid is cut out of the centre of the 4D picture so that it is convenient to see the inner side. When g_a is small, T^a tends to one with little effect from g_b and g_c, while T_b^a and R^a tend to zero. When g_b is small and g_a is between ξ_0 and 6ξ_0, T^a first decreases and then increases as g_c increases, while R^a behaves in the opposite way.
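For reference, the group velocities entering the amplitudes above follow directly from the dispersion relation; this worked consequence is our addition:

v_d = 2 ξ_d sin k_d = √(4 ξ_d^2 - (E - ω_c - ω_d)^2),  d = a, b,

valid inside the propagation band |E - ω_c - ω_d| < 2 ξ_d, so that v_d vanishes at the band edges E = ω_c + ω_d ± 2 ξ_d.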
§.§ Single photons incident from the semi-infinite CRW-b

The single photon is launched into the semi-infinite CRW-b. When the traveling photon arrives at the node of the T-bulge, it is either absorbed by the CTLS or stored in the N-1 FP cavities (or reflected by the boundary when N=1). The probability amplitudes in CRW-a and CRW-b are given by

U_u^a = t_l^a e^{-i k_a u} for u < 0,  U_u^a = t_r^a e^{i k_a u} for u > 0;
U_v^b = e^{-i k_b v} + r^b e^{i k_b v} for v > N,  U_v^b = A sin(k_b v) for v = 1, 2, 3, ..., N.

Using the continuity conditions, and obtaining t_l^a = t_r^a ≡ t^a, the reflection and transfer amplitudes can be obtained with the same denominator D as above:

r^b = [-g_a v_b/(g_b v_a) - (2 g_b/g_a) i e^{-i k_b N} sin(k_b N) + i [g_c^2/δ_e - (E - ω_f - ω_c)] v_b/(g_a g_b)] / D,
t^a = 2 i (v_b/v_a) sin(k_b N) / D.

The photons incident from CRW-b go left and right into CRW-a with the same probability. It can be verified that probability is conserved, R^b + T_a^b = 1, with R^b = |r^b|^2 and T_a^b = 2|t^a|^2. From Fig. 3, R^b tends to be small when g_a, g_b, and g_c are large. But when g_a and g_b are small, R^b is large no matter whether g_c is large or small. T_a^b behaves in the opposite way, since T_a^b = 1 - R^b.

§ DISCUSSION AND CONCLUSION

The effects of the parameters g_a, g_b, g_c, and N on the transmission coefficients for photons incident from the infinite CRW-a as well as from the semi-infinite CRW-b are considered in this paper. When light is incident from CRW-a (CRW-b), the maximum of T_b^a (T_a^b) is 0.5 (1). This can be predicted from the symmetry-breaking boundary. The cavity number N at the junction of the two CRWs has an effect on the location of the extreme points of the curves of the transmission coefficients versus the coupling strengths. This work is supported by the National Fundamental Research Program of China (the 973 Program) under Grant No. 2012CB922103 and the National Natural Science Foundation of China under Grants No. 11374095, No. 11422540, No. 11434011, No. 11575058.

NAT453 H.J. Kimble, The quantum internet, Nature (London) 453, 1023-1030 (2008).
PRL190501 H.-L. Yin et al., Measurement-device-independent quantum key distribution over a 404 km optical fiber, Phys. Rev. Lett. 117, 190501 (2016).
PRA043807 D. Oblak et al., Quantum-noise-limited interferometric measurement of atomic noise: Towards spin squeezing on the Cs clock transition, Phys. Rev. A 71, 043807 (2005).
PRA044304 K. Hammerer et al., Teleporting a rotation on remote photons, Phys. Rev. A 70, 044304 (2004).
PRL111(13) L. Zhou, L.-P. Yang, Y. Li, and C.P. Sun, Quantum routing of single photons with a cyclic three-level system, Phys. Rev. Lett. 111, 103604 (2013).
PRL100501 L. Zhou, Z.R. Gong, Y.-x. Liu, C.P. Sun, and F. Nori, Controllable scattering of a single photon inside a one-dimensional resonator waveguide, Phys. Rev. Lett. 101, 100501 (2008).
OE23(15) J. Lu, Z.H. Wang, and L. Zhou, T-shaped single-photon router, Opt. Express 23, 22955 (2015).
PRA89(14)013805 J. Lu, L. Zhou, L.M. Kuang, and F. Nori, Single-photon router: Coherent control of multichannel scattering for single photons with quantum interferences, Phys. Rev. A 89, 013805 (2014).
http://arxiv.org/abs/1702.07994v1
{ "authors": [ "Liu Lin" ], "categories": [ "quant-ph" ], "primary_category": "quant-ph", "published": "20170226070836", "title": "T-bulge single photon quantum router with three level system" }
Contributed research article XX YY 20ZZ dotCall64: An Efficient Interface to Compiled C and Fortran Code Supporting Long Vectors by Florian Gerber, Kaspar Mösinger and Reinhard Furrer December 30, 2023
================================================================
The R functions .C() and .Fortran() can be used to call compiled C and Fortran code from R. This so-called foreign function interface is convenient, since it does not require any interactions with the C API of R. However, it does not support long vectors (i.e., vectors of more than 2^31 elements). To overcome this limitation, the R package dotCall64 provides .C64(), which can be used to call compiled C and Fortran functions. It transparently supports long vectors and does the necessary castings to pass long vectors to 64-bit integer arguments of the compiled code. Moreover, .C64() features a mechanism to avoid unnecessary copies of function arguments, making it efficient in terms of speed and memory usage.

§ INTRODUCTION

The interpreted character of R makes it a convenient front-end for a wide range of applications. Although R provides a rich infrastructure, it can be advantageous to extend R programs with compiled code written in C or Fortran <cit.>. According to <cit.>, reasons for such an extension are the access to new and trusted computations, the increase in computational speed, and the object referencing capabilities. For completeness, we also list the reasons against such an extension, which include an increased workload to write, maintain, and debug the software, platform dependencies, and a less readable source code. R provides two types of interfaces to call compiled code, documented in "Writing R Extensions" <cit.>. First, the .Call interface features the functions .Call() and .External(). It enables accessing, modifying, and returning R objects from C using the C API of R <cit.>. On one hand, this is convenient when the C code is specifically written to be used with R. In that case, the C API serves as a glue between R and C, providing some R functionality and control over copying R objects on the C level. On the other hand, it requires the user to learn the C API of R. Especially when an interface is built on top of existing C code, this constitutes an additional effort. Since R has no Fortran API, the .Call() and .External() functions are not suitable to embed Fortran code into R. Second, the foreign function interface provides the functions .C() and .Fortran(). This interface allows the compiled code to read and modify R atomic vectors, which are exposed as the corresponding C and Fortran types, respectively. Thus, no additional API is required, making it favorable for embedding C and Fortran code that is not specifically designed for R. On top of these interfaces provided by R, R packages exist that simplify the integration of compiled code into R. One such R package is inline <cit.>, which allows the user to dynamically define R functions and S4 methods with inlined compiled code. Other examples are Rcpp <cit.> and its extensions RcppArmadillo <cit.>, RcppEigen <cit.>, RcppParallel <cit.>, and Rcpp11 <cit.>, which greatly simplify the extension of R with C++ code. Similar to the .Call interface, the Rcpp package family is designed to extend R with compiled code that is specifically written for that purpose. Building R packages is a way to share compiled code across different platforms. (See, e.g., <cit.> for comments on including portable C++ code in R packages.) At the time of writing, 2'303 of the 9'079 R packages on CRAN (<http://www.cran.r-project.org/>) include compiled C and/or Fortran code, using both the foreign function interface and the .Call interface with a similar frequency. Figure <ref> gives an overview of the number of packages using .C(), .Fortran(), .Call(), and .External().
In the remainder of this article, we focus on the intention to embed compiled code into R without using its C API. An example of an R package using that type of interface is the SPArse Matrix package spam <cit.>, which is built around the Fortran library SPARSKIT <cit.>. Here, the function .Fortran() from the foreign function interface seems to be suitable. Conversely, using the .Call interface is also possible but requires adding an additional layer of C code to enable communication between R and the compiled Fortran code. However, using .Fortran() is also not satisfying, since it lacks flexibility and performance, as also stated in its help page: "These functions [.C() and .Fortran()] can be used to make calls to compiled C and Fortran 77 code. Later interfaces are '.Call' and '.External' which are more flexible and have better performance." Two of the missing features of the foreign function interface are: * support of long vectors, * a mechanism to avoid unnecessary copies of vectors. The latter is the reason for the lower performance of the foreign function interface compared to the .Call interface. Since the foreign function interface does not allow vectors to be passed to compiled code by reference (without copying), it is especially impractical for big data applications. The missing features of the foreign function interface motivated the development of the R package dotCall64 <cit.>, which is presented in this article.

§ LIMITATIONS OF THE FOREIGN FUNCTION INTERFACE

To set the scene for dotCall64, we first discuss some limitations of the foreign function interface and give insights into the implementation of long vectors in R.

§.§ Long vectors

The foreign function interface does not support long vectors; see help("long vector"). To understand why extending it to support long vectors is a non-trivial task, we give more details on the implementation of long vectors. In R, vectors are one of the most basic object types underlying more complex objects, such as matrices and arrays. They can be thought of as strings of elements that can be indexed according to their relative positions. Prior to version 3.0.0, the length of vectors was limited to 2^31-1 elements and indexing thereof was exclusively based on vectors of type integer. More precisely, the latter are signed 32-bit integer vectors having a value range of [-2^31+1, 2^31-1]. Starting from the release of R version 3.0.0 in early 2013, support for so-called long vectors was supplied. That is, atomic (raw, logical, integer, double, complex, and character) vectors, lists, and expressions can now have up to 2^52 elements. The introduction of long vectors was done with minimal changes in R and, especially, without changing or adding a data type. Vectors of lengths less than 2^31-1 remain unchanged, and addressing elements thereof still uses vectors of type integer. In contrast, long vectors use vectors of type double to address elements, which are integer precise up to 2^52. This implied changes in some functions, such as length(), which returns an integer or a double type depending on whether the input vector is a long vector.

> typeof(length(integer(1)))
[1] "integer"
> typeof(length(integer(2^31)))
[1] "double"

Note that as.numeric() returns a double type and as.integer() returns an integer type, though both the integer and the double type are of class "numeric"; see the "Note on names" section in the help page help("is.double").
While the implementation of favors backwards compatibility, care is needed when manipulating those with compiled code.We distinguish between passing and indexing : The former requires passing vectors of more than 2^31-1 elements to complied code and is trivial.The latter is challenging, since the indexing vector is of type , whereas the compiled code would naturally expect a type.To overcome this discrepancy, one needs to cast the indexing vector from a to a type before calling the compiled code and back-cast it afterwards.Technical note: This section gives technical insights into the underlying C implementation of in and may be skipped without loss of the general idea.We refer to the source code of version 3.3.1 in several places and show relevant parts thereof in the appendix. Information on the current and future directions of and types in can be found in “R Internals” <cit.>.In , vectors are made out of a header of type VECSEXP that is followed by the actual data (Listing <ref>, line 272). The header contains a field length of type R_len_t, which is defined as signed (a ).Thus, that length field cannot capture the length of a . Instead, it is set to -1 whenever the length of the vector is larger than 2^31 - 1,and an additional header of type R_long_vec_hdr_t is prefixed. The prefixed header has a field length of type , which is defined as ptrdiff_t type(Listing <ref>, line 75) being “[...] the signed type of the result of subtracting two pointers.This will probably be one of the standard signed types (short int, int or long int), but might be a nonstandard type that exists only for this purpose” <cit.>. This implementation has the advantage that the existing code does not need to be changed and still works with vectors having less than 2^31 elements. Hence, the C code of can be changed successively to support throughout several versions, as opposed to changing the entire C code in one step.To make C code compatible with , adaptations are needed.For example, the widely used C function R_len_t length(SEXP s) (Listing <ref>, line 124) returns the length of a SEXP (S expression) as a R_len_t. Thus, all instances of that function have to be replaced with calls to the counterpart (, the function R_xlen_t xlength(SEXP s) given in line 159 of Listing <ref>).§.§ Copying argumentsThe exposes pointers to vectors to compiled code. In order to avoid any corruption of vectors, they are copied and the compiled code receives pointers to copies of the vectors.One exception is when the vector has the named status 0 (, the object is not bound to any symbol); see “Writing Extensions” <cit.>. This is the case when the passed vector is an evaluated constructor (, integer(1)). This is often used when the only purpose of the vector is to capture results from the compiled code.Another situation in which there is no need for copying vectors is when the compiled code only reads an vector without modifying it.However, the does not allow the user to avoid copying of vectors (with named status 1 or 2), which leads to a significant computational overhead, especially for large vectors. Note that prior to version 3.2.0, the copying of vectors could be avoided by setting the argument DUP of .C() and .Fortran() to FALSE. In later versions, this argument is depreciated and users are referred to the as a more flexible interface; see help(".C") and “R NEWS” <cit.>. § THE  PACKAGE DOTCALL64The limitations of the discussed above have motivated the development of the  package dotCall64. 
Its main function is , which can be used to interface compiled code. In contrast to .C() and .Fortran(),it supports and arguments of complied and provides a mechanism to control duplication of function arguments.Emphasis was put on providing a trustworthy implementation featuringstructured and C source code, documentation, examples, unit tests implemented with testthat <cit.>,and scripts containing the later presented performance measurements. §.§ Usage of the function The function can be used as an enhanced replacement of the and is equally easy to use; see also the documentation in the reference manual <cit.>. Its syntax resembles that of the function .C(), and both functions have common arguments as shown in Table <ref>. The required arguments of are: .NAME The name of the compiled function or Fortran subroutine. ... Up to 65 vectors to be accessed by the compiled code.SIGNATURE A character vector of the same length as the number of arguments of the .Each string specifies the signature of one such argument.Accepted signatures are "integer", "double", and "int64".The , , and Fortran types corresponding to these specifications are given in Table <ref>.With that, the following call to the compiled C function void get_c(double input, int index, double output) using .C() can be replaced by its counterpart.Therefore, for example, > .C("get_c", input = as.double(1:10), index = as.integer(9), output = double(1))becomes> .C64("get_c", SIGNATURE = c("double", "integer", "double"),+input = 1:10, index = 9, output = 0)While more detailed code examples are given later,this is enough to highlight some features of . First, does require the additional argument SIGNATURE specifying the argument types of the .In return, it coerces the provided vectors to the specified signatures making the as.double() and as.integer() statements unnecessary.Second, all provided arguments can be . Third, if one of the arguments of the compiled function is a (int64_t in the case of functions, and integer (kind = 8) types for Fortran subroutines), it is enough to set the corresponding SIGNATURE argument to "int64" to successfully evaluate the function. That is, does the necessary to and to castings before and after evaluating the compiled code, respectively.Additional arguments of are the following: INTENT A character vector of the same length as the number of arguments of the .Each string specifies the intent of one such argument.Accepted intents are "rw" (read and write), "r" (read), and "w" (write). NAOK A logical flag specifying whether the vectors passed though ... are checked for missing and infinite values. PACKAGE A character vector of length one restricting the search path of the to the specified package. VERBOSE If 0 (default), no warnings are printed. If 1 and 2, then warnings for tuning and debugging purposes are printed. A complete list of arguments including their default values is also given in Table <ref>.The argument INTENT influences the copying of vectors and can be seen as an enhanced version of the depreciated DUP argument of .C(). By default, all intents are set to “read and write” implying that the compiled code receives pointers to copies of the vector given to .... This behavior is desirable when the compiled function reads the corresponding vectors and modifies (writes to) them. For arguments of the that are only read and not modified, the intent can be set to “read.” With that, the compiled code receives pointers to the corresponding vectors itself. 
While this avoids copying, it is absolutely necessary that the compiled code does not alter these vectors,as this corrupts the corresponding vectors in the current session.For arguments that are only used to write results into it, the intent “write” is suitable.To obtain the desired performance gain, the corresponding vectors passed to ... have to be of class "vector_dc". objects of that class contain information on the type and length of the vectors. They can be constructed with the function vector_dc(), taking the same arguments as vector() from the base  package.For example, instead of passing the vector vector(mode = "numeric", length = 8), the following object should be passed. > vector_dc(mode = "numeric", length = 8) mode [1] "numeric"length [1] 8attr(,"class") [1] "vector_dc" "list" Based on this information, allocates the corresponding vector (initialized with zeros).That vector is then exposed to the compiled function to write into it. Note that specifying the suitable intent may reduce computation time by avoiding unnecessary copying of vectors and by avoiding unnecessary to and to castings for SIGNATURE = "int64" type arguments.More details on the other arguments are given in the package manual of dotCall64 <cit.>. §.§ Implementation of the function The function uses the function .External() from the to directly pass all provided arguments to the C function dC64(). After basic checks of the provided arguments, the function proceeds as schematized in Figure <ref>. Note that the flowchart depicts the procedure for the case in which the has only one argument.Otherwise, dC64() repeats the depicted scheme for all arguments. One aspect to highlight is the castings of vectors for SIGNATURE = "int64" arguments.For such arguments, the to casting is done for the intents “read and write” and “read”; see the boxes labeled with (a). In that case, duplication is not necessary, as the implemented casting allocates a new vector anyway. The back-casting from to is only done for the intents “read and write” and “write”; see the box labeled with (b).Moreover, an argument of SIGNATURE different from "int64" with intent “read and write” is duplicated in any case; see boxes labeled with (c). If the intent is “read,” it is not duplicated,and if the intent is “write,” the argument is only duplicated when it has a reference status different from 0. vectors increase their reference status when they are passed to an function, andtherefore a safe way to allocate a zero initialized vector without copying is to pass an object of class "vector_dc".As casting is an expensive operation in terms of computational time, we distribute this task to multiple threads using openMP, if available <cit.>. Note that the number of used threads can be controlled with the function omp_set_num_threads() from the package OpenMPController <cit.>. The package dotCall64 can also be compiled without the openMP feature by removing the flag $(SHLIB_OPENMP_CFLAGS) in the src/Makevars file of the source code.§ EXAMPLESWe showcase the function from the  package dotCall64 with an example function implemented in C and Fortran. Besides the calls thereof via , the C and Fortran function definitionsand the commands to compile and load the code are given. 
A direct comparison with .C() shows the limitations of the andthat it is straight forward to overcome these with .Moreover, the similarities and differences in the syntax become visible.The considered example function takes the arguments input (), index (), and output ()and writes the element of input at the position specified with index to output. §.§ Interface codeA C implementation of the described example function is given next.void get_c(double *input, int *index, double *output)output[0] = input[index[0] - 1]; We write the function into get_c.c and compile it with the command line command R CMD SHLIB get_c.c. The resulting dynamic shared object (get_c.so on our Linux platform) must be loaded into beforethe compiled function can be called. Note that, in the following code, the extension of the shared object is replaced with .Platform$dynlib.ext to make the code platform independent.> dyn.load(paste0("get_c", .Platformdynlib.ext))One can use theto call this function.We use thefunctions as.double() and as.integer() to ensure that the types of the passedvectors match the signature of the C function get_c(). > .C("get_c", input = as.double(1:10), index = as.integer(9), output = double(1))output [1] 9Next, we try to use the same call with a x_long passed to the argument input of get_c(). > x_long <- double(2^31); x_long[9] <- 9; x_long[2^31] <- -1 > .C("get_c",+input = as.double(x_long), index = as.integer(9), output = double(1))output Error: long vectors (argument 1) are not supported in .FortranAs expected, .C() throws an error because it does not support .The error—and the confusing error message referring to .Fortran() instead of .C()—canbe avoided by replacing .C() with . This allows the evaluation of the C function get_c() with the x_long. Additionally,requires the argument SIGNATURE encoding the signatures of the arguments of get_c().This information is used to coerce all providedvectors to the specified signatures. Thus, it is no longer necessary to reassure that the types of the passedvectors match the signature of the compiled function. > install.packages("dotCall64") > library("dotCall64") > .C64("get_c", SIGNATURE = c("double", "integer", "double"), +input = x_long, index = 9, output = double(1))output [1] 9In contrast to the call using .C(), the ninth element of the x_long is returned.However, the argument index of get_c() is of type ∫ (a ),and hence, elements at positions beyond2^31-1cannot be extracted. To overcome this, we adapt the definition of the C function get_c() and replace the ∫ type in the declaration of the argument index with thetype,which is defined in the C header file stdint.h. #include <stdint.h> void get64_c(double *input, int64_t *index, double *output)output[0] = input[index[0] - 1];We write the function into get64_c.c and compile it with R CMD SHLIB get64_c.c to obtain the dynamic shared object (get64_c.so on our platform). 
Because of theargument, it is not possible to call this function with .C().On the other hand,can interface this functionwhen the second element of the SIGNATURE argument is set to "int64".> dyn.load(paste0("get64_c", .Platformdynlib.ext)) > .C64("get64_c", SIGNATURE = c("double", "int64", "double"), +input = x_long, index = 2^31, output = double(1))output [1] -1In the call above, the functioncasts the argument index from(therepresentation of ) into atype vector before calling get64_c(),and back-casts it fromtoafterwards.§.§ Interface Fortran code The functioncan also be used to interface compiled Fortran code.To highlight some Fortran specific features, we translate the C function get_c() into the Fortran subroutine get_f(). subroutine get_f(input, index, output) double precision :: input(*), output(*) integer :: index output(1) = input(index) end Note that we only use lower case letters in the Fortran function and variable names to avoid unnecessary symbol-name translations. We write the function into the get_f.f and compile it with R CMD SHLIB get_f.f to obtain the dynamic shared object (get_f.so on our platform). In contrast to .Fortran(),allows passing pointers to . > dyn.load(paste0("get_f", .Platformdynlib.ext)) > .C64("get_f", SIGNATURE = c("double", "integer", "double"), +input = x_long, index = 9, output = double(1))output [1] 9Again, elements with positions beyond2^31-1cannot be accessed,since the argument index is of typeand compiled as aby default.To make get_f() compatible with , we can either change the declaration of index to integer (kind = 8) index in get_f.f orleave the Fortran code unchanged and set the following compiler flag to compileas .MAKEFLAGS="PKG_FFLAGS=-fdefault-integer-8" R CMD SHLIB get_f.f Note that both the kind = 8 declaration and the -fdefault-integer-8 flag are valid for the GFortran compiler <cit.> and may not have the intended effect using other compilers. The resulting dynamic shared object from the command above (get_f.so on our platform) can be called fromas follows.> dyn.load(paste0("get_f", .Platformdynlib.ext)) > .C64("get_f", SIGNATURE = c("double", "int64", "double"), +input = x_long, index = 2^31, output = double(1))output [1] -1§.§ Extendpackages to support long vectorsExtendingpackages to supportallows developers to distribute compiled code featuringwith anuser interface. Given the popularity of , this is a promising approach to make such software available to many users. With the function , the workload of extending anpackage to supportis reduced to the following tasks:*replace the R function to call compiled code with ,*replace thetype declarations in the compiled code with adeclaration.The latter task implies replacing all ∫ type declarations incode with int64_t type declarations and replacing alltype declarations in Fortran code with integer (kind = 8).In both cases, the replacements can be automatized (, with the stream editor ). If the considered Fortran code does not explicitly declare the bits of the integers,an alternative approach is to set the compiler flag -fdefault-integer-8 to compile integers asusing GFortran compilers.This is convenient because the Fortran code does not need to be changed at all in that case. 
A more elaborate extension could feature two versions of the compiled code: one withand the other one with .Then, thefunction can dispatch to either version according to the sizes of the involved vectors.This avoidstocastings when only vectors with less than2^31-1elements are involved.It is convenient to manage two versions of compiled code by putting them into two separate R packages.The first package includes the compiled code withtogether with thecode and the documentation.This package can be used independently as long as noare involved.The second package can be seen as an add-on package and includes only the compiled code with integers declared as .Thus, loading both packages enablessupport.This separation into two packages has the advantage that the compiled functions featuringand theircounterparts can have the same name. The desired function is then specified by setting the appropriate PACKAGE argument of .As a proof of concept, we extended the sparse matrix algebra  package spam to handlesparse matrices with more the2^31-1non-zero elements.From the user perspective, the syntax to manipulate such matrices remains the same. In fact, spam users may not even notice the extension. In the case, in which the number of non-zero entries of a matrix exceeds2^31-1andthe add-on package spam64 is loaded, spam automatically dispatches to the compiled code with .The new capabilities of spam and spam64 were illustrated with a parametric model of a non-stationary spatial covariance matrix fitted to satellite data. More information on spam64 and the data example is given by <cit.>.§ PERFORMANCEThere are different settings in which the elapsed time to interface compiled code is relevant. One of those is when the compiled code is interfaced often and takes only a short time to evaluate.Here, the overhead of the interface becomes relevant, which is in the order of a few microseconds for . Another such setting is when large and possiblyare passed through . In that case, the overhead is negligible, as other services of the interface and the execution of the compiled code take up several orders of magnitude more time. Whenis used to interfacearguments of the compiled code,the largest share of the elapsed time is caused by thetoandtocastings.Since castings are implemented with openMP, the elapsed time thereof also depends on the number of used threads.Besides that, copying objects and checking them for missing infinite values are also time-consuming operations. Another performance aspect is peak memory usage.Using the default arguments of , its peak memory usage is about twice the size of thevectors passed through ..., and hence, is similar to .C().An exception where the peak memory usage is reduced is indicated below.§.§ Performance relevant arguments of Further,provides arguments to optimize calls to compiled code, one of which is the argument INTENT, which is set to “read and write” by default. Since manyonly read or write to certain arguments, it is safe to avoid copying in some cases.For example, the C function get64_c(), as defined above, only reads the arguments input and index and only writes to the argument output. Thus, we can set the INTENT argument ofto c("r", "r", "w") and pass the argument with intent “write” as objects of class "vector_dc" to reduce the copying ofvectors to a minimum. Another significant performance gain is obtained by setting the argument NAOK to TRUE. This avoids checking thevectors passed through ... 
§ PERFORMANCE

There are different settings in which the elapsed time to interface compiled code is relevant. One of those is when the compiled code is interfaced often and takes only a short time to evaluate. Here, the overhead of the interface becomes relevant, which is in the order of a few microseconds for .C64(). Another such setting is when large and possibly long vectors are passed through the interface. In that case, the overhead is negligible, as other services of the interface and the execution of the compiled code take up several orders of magnitude more time. When .C64() is used to interface 64-bit integer arguments of the compiled code, the largest share of the elapsed time is caused by the double to int64_t and int64_t to double castings. Since the castings are implemented with OpenMP, the elapsed time thereof also depends on the number of used threads. Besides that, copying objects and checking them for missing and infinite values are also time-consuming operations. Another performance aspect is peak memory usage. Using the default arguments of .C64(), its peak memory usage is about twice the size of the vectors passed through ..., and hence, is similar to .C(). An exception where the peak memory usage is reduced is indicated below.

§.§ Performance relevant arguments of .C64()

Further, .C64() provides arguments to optimize calls to compiled code, one of which is the argument INTENT, which is set to “read and write” by default. Since many compiled functions only read or write to certain arguments, it is safe to avoid copying in some cases. For example, the C function get64_c(), as defined above, only reads the arguments input and index and only writes to the argument output. Thus, we can set the INTENT argument of .C64() to c("r", "r", "w") and pass the argument with intent “write” as an object of class "vector_dc" to reduce the copying of vectors to a minimum. Another significant performance gain is obtained by setting the argument NAOK to TRUE. This avoids checking the vectors passed through ... for NA, NaN, and Inf values. Small-scale performance gains can be achieved by setting the PACKAGE argument, which reduces the time to find the compiled code, and by setting VERBOSE = 0, which avoids the execution of getOption("dotCall64.verbose"). Similar speed considerations that are partially applicable to .C64() are given in “Writing R Extensions” <cit.>. An optimized version of the call to the C function get64_c(), taking the discussed performance considerations into account, is given next.

> .C64("get64_c", SIGNATURE = c("double", "int64", "double"),
+    input = x_long, index = 2^31, output = numeric_dc(1),
+    INTENT = c("r", "r", "w"), NAOK = TRUE, PACKAGE = "dotCall64", VERBOSE = 0)

§.§ Timing measurements

In the following, we present detailed timing measurements and benchmark .C64() against .C(), where possible. We consider the following C function contained in the R package dotCall64.

void BENCHMARK(void *a)

This function takes one pointer a to a variable of an unspecified data type and performs no operations with it. Thus, the elapsed time to call BENCHMARK() from R is dominated by the performance of the used interface. We measure the time to call this function with different NAOK and INTENT settings of .C64() and benchmark it against .C() using the package microbenchmark <cit.>. To get an estimate of the measurement uncertainty, we repeated the measurements between 100 and 10'000 times and report the median elapsed time as well as the interquartile range (IQR) of the replicates. Naturally, timing measurements are platform dependent. We produced the presented results on Intel Xeon CPU E7-2850 2.00 GHz processors in a Linux environment where R was installed with default installation flags. When not indicated differently, the measurements were produced using a single thread. First, we consider the situation in which a pointer to an integer vector of length one is passed to the compiled C function BENCHMARK(). The following truncated R code illustrates how the measurements were performed. The complete R scripts implementing all presented performance measurements are available in the benchmark directory in the source code of dotCall64.

> library("microbenchmark")
> int <- integer(1)
> microbenchmark(
+   .C("BENCHMARK", a = int, NAOK = FALSE, PACKAGE = "dotCall64"),
+   .C64("BENCHMARK", SIGNATURE = "integer", a = int, INTENT = "rw",
+     NAOK = FALSE, PACKAGE = "dotCall64", VERBOSE = 0),
+   .C64("BENCHMARK", SIGNATURE = "integer", a = int, INTENT = "r",
+     NAOK = FALSE, PACKAGE = "dotCall64", VERBOSE = 0), ...

Since the vector int is very short, a large part of the elapsed time in this experiment is caused by the overhead of the interfaces. Table <ref> presents the resulting timing measurements in microseconds. They indicate that .C() is more than two times faster compared to .C64(). However, this is not surprising, since .C64() is more flexible and therefore has a larger overhead. The arguments NAOK and INTENT have little influence on the elapsed times. The IQRs of around one microsecond indicate a relatively large variability of the elapsed time, which is typical for short timing measurements. We repeated the same experiment with vectors of length 2^28. Now, the elapsed times are dominated by services of the interfaces (e.g., checking for missing and infinite values, copying, and casting). The timings in seconds are presented in Table <ref>. They indicate that .C64() with argument INTENT = "rw" and .C() showed similar elapsed times.
When the intent is set to “read” (INTENT = "r"), the elapsed times were reduced and dropped to 0.00 seconds for some configurations. Moreover, not checking for missing and infinite values (NAOK = TRUE) decreases the elapsed times across all considered cases. The castings of SIGNATURE = "int64" arguments seem to be the most time-consuming task. Note that the IQRs are now smaller relative to the measured timings, because the measured times are larger. In a second series of timing measurements, we consider the situation in which a pointer to a vector is passed to the compiled code to write into the vector. We measure the elapsed times of this task as shown in the following truncated R code.

> microbenchmark(
+   .C("BENCHMARK", a = integer(2^28), NAOK = TRUE, PACKAGE = "dotCall64"),
+   .C64("BENCHMARK", SIGNATURE = "integer", a = integer(2^28), INTENT = "rw",
+     NAOK = TRUE, PACKAGE = "dotCall64", VERBOSE = 0),
+   .C64("BENCHMARK", SIGNATURE = "integer", a = integer_dc(2^28), INTENT = "w",
+     NAOK = TRUE, PACKAGE = "dotCall64", VERBOSE = 0), ...

Note the usage of integer_dc(), which creates a list containing the length and class of the vector. This information is then used by .C64() to create the corresponding vector in C. Table <ref> shows the timing measurements for the described setting. As expected, using .C64() with INTENT = "w" substantially reduces the elapsed times compared to INTENT = "rw". Furthermore, .C() and .C64() with INTENT = "w" have similar elapsed times. While .C() relies on the reference counting mechanism of R objects to avoid copying <cit.>, .C64() uses the "vector_dc" class. The latter has the advantage that one double to int64_t casting can be avoided in the SIGNATURE = "int64" case. The function .C64() features an OpenMP implementation of the double to int64_t and int64_t to double castings of SIGNATURE = "int64" arguments. Hence, the computational workload of the castings can be distributed to several threads running in parallel. To quantify the performance gain related to using OpenMP, we control the number of used threads to be between 1 and 10 with the R package OpenMPController and measure the elapsed times of the following call.

> .C64("BENCHMARK", SIGNATURE = "int64", a = a, INTENT = "rw", NAOK = TRUE,
+    PACKAGE = "dotCall64", VERBOSE = 0)

We let a be double vectors of length 2^16, 2^22, 2^28, and 2^34 and performed five replicated timing measurements for each configuration. The results are summarized in Figure <ref>. The reduction in computation time due to using multiple threads is greatest for the vectors of length 2^34, where using 10 threads reduced the elapsed times by about 70%. Conversely, for the vector of length 2^16 no reduction was observed.

§ SUMMARY

This paper presents the R package dotCall64, which provides an alternative to the .C() and .Fortran() interfaces of R. In the first section, we introduce R's interfaces to embed compiled C and Fortran code. We argue that, in some situations, a .C() type interface is more convenient compared to using the C API of R in conjunction with the .Call() interface. In section two, we motivate the development of dotCall64 with a discussion of missing features of the .C() interface and an overview of the internal implementation of R vectors. Then, we present the usage and the implementation of the .C64() function from the R package dotCall64.
This is followed by examples demonstrating the capabilities of the new interface—also in comparison with the .C() interface. Furthermore, we discuss strategies to extend entire R packages with compiled code supporting long vectors. In the last section, we present performance measurements of the .C64() interface and benchmark it against .C(). This highlights the speed gains achieved by avoiding unnecessary copies of R vectors and by using OpenMP for casting 64-bit integer vectors. In conclusion, the interface provided by the R package dotCall64 is an up-to-date version of the .C() interface, including tools to conveniently embed compiled code manipulating long vectors.

§ ACKNOWLEDGMENTS

We thank Rafael Ostertag for contributions to Figure <ref>. We acknowledge the support of the University of Zurich Research Priority Program (URPP) on “Global Change and Biodiversity.”

Florian Gerber, Department of Mathematics, University of Zurich, 8057 Zurich, Switzerland, florian.gerber@math.uzh.ch
Kaspar Mösinger, Department of Mathematics, University of Zurich, 8057 Zurich, Switzerland, kaspar.moesinger@gmail.com
Reinhard Furrer, Department of Mathematics and Department of Computational Science, University of Zurich, 8057 Zurich, Switzerland, reinhard.furrer@math.uzh.ch

§ APPENDIX: R SOURCE CODE

In the following, we show parts of the C source code of R version 3.3.1 to support the understanding of the internal implementation of R vectors. More precisely, the lines 26–377 from the file Rinternals.h and the lines 124–191 from the file Rinlinedfuns.h are shown in Listing <ref> and Listing <ref>, respectively. The indicated line numbers in the code refer to the actual line numbers of the corresponding file.

[Listing <ref>: R-3.3.1/src/include/Rinternals.h, lines 26–377]
[Listing <ref>: R-3.3.1/src/include/Rinlinedfuns.h, lines 124–191]
http://arxiv.org/abs/1702.08188v1
{ "authors": [ "Florian Gerber", "Kaspar Mösinger", "Reinhard Furrer" ], "categories": [ "stat.CO" ], "primary_category": "stat.CO", "published": "20170227084249", "title": "dotCall64: An Efficient Interface to Compiled C/C++ and Fortran Code Supporting Long Vectors" }
^1 Quantum-Phase Electronics Center and Department of Applied Physics, The University of Tokyo, Tokyo 113-8656, Japan ^2 RIKEN Center for Emergent Matter Science (CEMS), Wako 351-0198, Japan ^3 SUPA, School of Physics and Astronomy, University of St. Andrews, St. Andrews, Fife KY16 9SS, United Kingdom ^4 Department of Physics and Astronomy, Seoul National University, Seoul 08826, Korea ^5 Center for Correlated Electron Systems, Institute for Basic Science (IBS), Seoul 08826, Korea ^6 Center for Theoretical Physics (CTP), Seoul National University, Seoul 08826, Korea ^7 Suzhou Institute of Nano-Tech and Nanobionics (SINANO), CAS, 398 Ruoshui Road, SEID, SIP, Suzhou, 215123, China ^8 Diamond Light Source, Harwell Campus, Didcot, OX11 0DE, United Kingdom ^9 Max Planck Institute for Chemical Physics of Solids, Nöthnitzer Straße 40, 01187 Dresden, Germany ^10 Center for Quantum Spintronics, Department of Physics, Norwegian University of Science and Technology, NO-7491 Trondheim, Norway ^11 MAX IV Laboratory, Lund University, P. O. Box 118, 221 00 Lund, Sweden ^12 Istituto Officina dei Materiali (IOM)-CNR, Laboratorio TASC, in Area Science Park, S.S.14, Km 163.5, I-34149 Trieste, Italy ^13 Synchrotron SOLEIL, CNRS-CEA, L'Orme des Merisiers, Saint-Aubin-BP48, 91192 Gif-sur-Yvette, France ^14 Laboratory for Materials and Structures, Tokyo Institute of Technology, Kanagawa 226-8503, Japan ^15 School of Physics and Center of Excellence on Advanced Functional Materials, Suranaree University of Technology, Nakhon Ratchasima, 30000, Thailand ^16 ThEP, Commission of Higher Education, Bangkok 10400, Thailand ^* These authors contributed equally to this work ^† To whom correspondence should be addressed: bahramy@ap.t.u-tokyo.ac.jp ^ To whom correspondence should be addressed: philip.king@st-andrews.ac.uk Transition-metal dichalcogenides (TMDs) are renowned for their rich and varied bulk properties, while their single-layer variants have become one of the most prominent examples of two-dimensional materials beyond graphene. Their disparate ground states largely depend on transition metal d-electron-derived electronic states, on which the vast majority of attention has been concentrated to date. Here, we focus on the chalcogen-derived states. From density-functional theory calculations together with spin- and angle-resolved photoemission, we find that these generically host a co-existence of type-I and type-II three-dimensional bulk Dirac fermions as well as ladders of topological surface states and surface resonances. We demonstrate how these naturally arise within a single p-orbital manifold as a general consequence of a trigonal crystal field, and as such can be expected across a large number of compounds. Already, we demonstrate their existence in six separate TMDs, opening routes to tune, and ultimately exploit, their topological physics. Ubiquitous formation of bulk Dirac cones and topological surface states from a single orbital manifold in transition-metal dichalcogenides M. S. Bahramy^1,2,*,†, O. J. Clark^3,*, B.-J. Yang^4,5,6, J. Feng^3,7, L. Bawden^3, J. M. Riley^3,8, I. Marković^3,9, F. Mazzola^3, V. Sunko^3,9, D. Biswas^3, S. P. Cooil^10, M. Jorge^10, J. W. Wells^10, M. Leandersson^11, T. Balasubramanian^11, J. Fujii^12, I. Vobornik^12, J. E. Rault^13, T. K. Kim^8, M. Hoesch^8, K. Okawa^14, M. Asakawa^14, T. Sasagawa^14, T. Eknapakul^15, W. Meevasana^15,16, P. D. C.
King^3, December 30, 2023 ============================================================

The classification of electronic structures based on their topological properties has opened powerful routes for understanding solid state materials. <cit.> The now-familiar ℤ_2 topological insulators are most renowned for their spin-polarised Dirac surface states residing in inverted bulk band gaps. <cit.> In systems with rotational invariance, a band inversion on the rotation axis can generate protected Dirac cones with a point-like Fermi surface of the bulk electronic structure. <cit.> If either inversion or time-reversal symmetry is broken, a bulk Dirac point can split into a pair of spin-polarised Weyl points. <cit.> Unlike for elementary particles, Lorentz-violating Weyl fermions can also exist in the solid state, manifested as a tilting of the Weyl cone. If this tilt is sufficiently large, so-called type-II Weyl points can occur, now formed at the touching of open electron and hole pockets. <cit.> Realising such phases in solid-state materials not only offers unique environments and opportunities for studying the fundamental properties of fermions, but also holds potential for applications exploiting their exotic surface excitations and bulk electric and thermal transport properties. <cit.> Consequently, there is an intense current effort focused on identifying compounds which host the requisite band inversions. In many cases, however, this depends sensitively on fine details of a material's electronic or crystal structure. This is partly because almost all known topologically non-trivial phases are stabilised by inversions between states derived predominantly from different atomic manifolds in two- (or more) component compounds (e.g. Bi and Se p orbitals in Bi_2Se_3; <cit.> Bi p and Na s orbitals in Na_3Bi; <cit.> Nb d and P p orbitals in NbP <cit.>). In contrast, here we uncover a simple and remarkably robust mechanism for realising a hierarchy of band inversions within a single orbital manifold. Across the broad family of 2H- and 1T-structured transition-metal dichalcogenides (TMDs) <cit.>, we observe and classify how this mediates the formation of strongly-tilted type-I and type-II bulk Dirac cones as well as ladders of topological surface states (TSSs) and topological surface resonances.

Band inversions from a single orbital manifold
Figure <ref> details the general principle underlying our findings. As a minimal model, we consider a 2-site system with C_3v symmetry, with 3×2 p-orbitals per site in a trigonal crystal field. Such an arrangement naturally describes, for example, the chalcogen layers of the 1T-TMDs (Fig. <ref>(a)). Fig. <ref>(c) summarises the splitting of the p-orbital energy levels as a result of bonding, crystal field splitting, and spin-orbit coupling. The bands that form from these will in general be anisotropic, as the out-of-plane p_z orbitals will have much larger hopping along the out-of-plane direction than the in-plane p_x/y orbitals.
For simplicity, we therefore initially neglect inter-layer hopping of the in-plane orbitals, leading to dispersionless E- (p_x/y)-derived levels as a function of the out-of-plane momentum, k_z. The A_1 (p_z-derived) bands, however, retain a strong k_z-dispersion (Fig. <ref>(d)). When the bandwidth arising due to inter-layer hopping becomes larger than the crystal field splitting (CFS), the A_1-derived band will cross through the E-derived ones, creating a set of k_z-dependent band inversions solely within the p-orbital derived manifold of states. In general, anti-crossing gaps can open at these intersections. This is indeed what should occur at the crossings of the R_4^± with R_4'^∓ bands (Fig. <ref>(e)), as they both share the same symmetry character and angular momentum m_J=1/2. They have opposite parity, however, and thus their hybridization leads to an inverted band gap with a ℤ_2 topological order. Accordingly, these gaps can be expected to host topological surface states, as we demonstrate below. In contrast, the R_4^± and R_5,6^∓-derived bands belong to different irreducible representations. As a result, they behave differently under the application of the rotational operator C_3v (see Supplementary Fig. S1), and their crossing is protected against hybridization as long as it occurs at a k-point with C_3v symmetry and the host system has both inversion and time-reversal symmetries. <cit.> For the model considered here, this is satisfied for all k-points along the Γ-A direction of the three-dimensional Brillouin zone (k_x=k_y=0, varying k_z, see Fig. <ref>(b)). Consequently, the crossing of the R_4^± and R_5,6^∓-derived bands will lead to a single point of degeneracy (i.e., a bulk Dirac point) located part-way along this direction. Its location in momentum space is set both by the bandwidth of the R_4^±-derived band and by the strength of the CFS. In the schematic shown here (Fig. <ref>(e)), one branch of the Dirac cone is highly dispersive along k_z while the other is completely dispersionless. This would place such Dirac cones exactly on the boundary between a maximally-tilted `conventional' (i.e. type-I) Dirac cone and an over-tilted one (i.e. a type-II bulk Dirac cone, in analogy to the recent classification of type-II Weyl fermions <cit.>). In reality, the R_5,6^∓-derived band will still have a finite, if small, out-of-plane dispersion. The group velocity of this band will determine whether a strongly tilted type-I or type-II Dirac cone is obtained.

Bulk Dirac points and topological surface states in PdTe_2
We show in Fig. <ref> that this simple model can be realised surprisingly well in the electronic structure of the TMD superconductor <cit.> 1T-PdTe_2 (space group: P-3m1). The bands near the Fermi level are almost exclusively Te-derived (see also Supplementary Fig. S1). Along Γ-A (Fig. <ref>(a)), two pairs of predominantly Te p_x,y bands are evident within the energy region E-E_F∼-1 to ∼2 eV (red colouring in Fig. <ref>(a)), which we assign as the crystal-field and spin-orbit split bonding and anti-bonding E bands in analogy with Fig. <ref>. They have modest out-of-plane dispersion, although much more significant dispersion can be observed along the in-plane directions, consistent with their in-plane orbital character. In contrast, the p_z- (A_1)-derived states (cyan colouring in Fig. <ref>(a)) have a dispersion along Γ-A that spans nearly the entire valence band bandwidth, and thus crosses through the E states as a function of k_z.
Above the Fermi level, where the R_4^- band intersects the anti-bonding R_5,6^+ and R_4'^+ states, a clear type-I protected crossing (upper) and an avoided crossing (lower) are formed, respectively. A similar phenomenology is observed for the bands immediately below E_F: the first crossing of the p_z-derived band with the bonding R_5,6^- states leads to another protected BDP, this time of type-II character (see also Supplementary Fig. S2). The second crossing is again gapped. In fact, the proximity of this final crossing to both the anti-bonding and bonding-like branches of the p_z-derived bands causes an additional inverted gap to open directly below this. The deeper one (E-E_F∼-1.7 eV in Fig. <ref>(a,b)) is generated directly by the anti-crossing of the bonding R_4^+ and R_4'^- states, evident from a small kink structure near the A-point of the R_4' band. The shallower band gap (E-E_F∼-1.1 eV in Fig. <ref>(a,b)) results from the crossing of the bonding R_4' with both anti-bonding R_4 and bonding R_4 states. As the latter two states have opposite parities, the total parity of the lower band at the A-point becomes opposite to that at the Γ-point (see Supplementary Fig. S1 for an explicit calculation of band parities), and hence this is also an inverted band gap with ℤ_2 topological order. These features are well reproduced by our photon energy-dependent angle-resolved photoemission (ARPES) measurements of the occupied electronic structure (Fig. <ref>(b)). While the measured spectral features are broadened due to the finite k_z-resolution of photoemission, a significant k_z dispersion of a number of states along Γ-A can still be observed. In the vicinity of E_F, we observe a light and a more massive band which cross, leading to an enhanced spectral weight at a binding energy of ∼0.65 eV close to the bulk A-point along k_z. The in-plane dispersion of these same states (insets of Fig. <ref>(c) and Fig. <ref>(c) and Supplementary Fig. S3) reveals diffuse “filled-in” intensity (again due to finite k_z-resolution) forming the upper part of this Dirac cone. Together, these observations and calculations therefore firmly identify the presence of type-II Dirac cones in PdTe_2, <cit.> arising due to the protected crossing of Te p_z- and p_x,y-crystal field-split states as they disperse differently with out-of-plane momentum. We note that spectroscopic signatures of the bulk Dirac cone extend up to the Fermi level, and hence these Dirac fermions may carry signatures in transport measurements. <cit.> Additional states which are non-dispersive in k_z, and thus two-dimensional, are also evident in Fig. <ref>(b). Most prominent is a band visible at E-E_F∼-1.7 eV, an energy at which no bulk states are present along Γ-A. We thus assign this as a surface state. Its in-plane dispersion (Fig. <ref>(c) and Supplementary Fig. S4) shows a clear Dirac-like dispersion in the vicinity of Γ, and is well reproduced by our supercell calculations of the surface electronic structure (Fig. <ref>(d) and Supplementary Fig. S5, see Methods), confirming its surface-derived origin. This has recently been observed by Yan et al. <cit.> and assigned as a topological surface state. Our measurements and calculations fully support this assignment: we find that it is located within the k_z-projected band gap that arises from the lower of the two avoided crossings below the Fermi level, between the R_4^+ and R_4'^- bands identified above. To definitively identify its topological nature, we perform additional spin-resolved ARPES measurements (Fig.
<ref>(e) and Supplementary Fig. S6). These reveal that this state is strongly spin-polarised (from fits to energy distribution curves (EDCs), we find an in-plane spin polarisation of 92±14% (73±16%) for the upper (lower) branch of this surface state). The spin lies almost entirely within the surface plane and is locked perpendicular to the in-plane momentum, thus exhibiting the helical spin texture that is a defining characteristic of surface states of topological insulators, as also found from our supercell calculations (Supplementary Fig. S4(c)). We refer below to this topological surface state as TSS2. More subtly, our supercell calculations also reveal an additional surface-localised state forming another two-dimensional Dirac cone-like feature located at the energy of the band gap opened by the crossing of the R_4^- and R_4'^- bands. Unlike for TSS2, however, the band gap in the bulk spectrum opened by this avoided crossing does not span the entire Brillouin zone in k_z. The spectral weight of the surface-derived feature therefore lies within the manifold of projected bulk states which disperse around this avoided crossing. It is therefore better defined as a surface resonance rather than a true surface state. Consistent with this, we find that its wavefunction is more extended below the surface than for TSS2 (Supplementary Fig. S5). Nonetheless, clear signatures of its in-plane Dirac-like dispersion are visible in our ARPES measurements at selected photon energies (Fig. <ref>(c)), while our spin-resolved measurements (Fig. <ref>(e)) reveal that it retains the spin-momentum locking characteristic of a TSS. Excitingly, therefore, our findings reveal how the band inversion created by the crossing of p-orbital E and A_1-like bands in PdTe_2 drives the formation of a topological state (we refer to this as TSS1) whose topological origin still requires its existence despite the additional presence of bulk states at the same energies and in-plane momenta, thereby creating a topological surface resonance. Intriguingly, we find an additional two-dimensional state, evident as a non-dispersive feature in Fig. <ref>(b), that is pinned at exactly the energy of the bulk Dirac point. Tracking this state slightly away from the Dirac point along the Γ- in-plane direction, we find that it hosts a strong in-plane spin polarisation with the same sign as the upper branch of TSS1 (labeled SS in Fig. <ref>(e,f); see also Supplementary Fig. S6, which shows that this develops some out-of-plane spin canting along Γ-). Spin-polarised Fermi arc surface states intersecting the Dirac point would naturally be expected for, e.g., the (100) surface, where the bulk Dirac points project to different surface momenta (see Supplementary Fig. S7). <cit.> For the experimental (001) cleavage plane, however, the two bulk Dirac points project exactly on top of each other, and so such surface Fermi arcs would not naively be expected. Nonetheless, we note that topological surface states pinned to the Dirac point have recently been reported in calculations for other type-II bulk Dirac systems. <cit.> The origin of the states observed here therefore requires further investigation. Irrespective, the experimental observation of an additional spin-polarised surface state here stands as a further example of the rich surface electronic structure that this compound possesses, driven by an intricate array of band inversions within the p-orbital manifold of its bulk electronic structure.

Ubiquitous formation of BDPs and TSSs
We show in Fig.
<ref> and Supplementary Fig. S8 how such band inversions can be found in multiple other TMDs with different local and global crystalline symmetries, and which exhibit widely varying bulk properties. We first consider the closely-related compound, 1T-PtSe_2. This is semi-metallic, with a smaller overlap of chalcogen-derived bonding and anti-bonding states than in PdTe_2. <cit.> The transition metal states again contribute relatively little near to the Fermi level, while the p_z-derived chalcogen band can be clearly resolved cutting through the p_x,y-derived states in the vicinity of E_F (Fig. <ref>(a)). A single type-II bulk Dirac cone and a pair of TSSs are stabilised in the occupied electronic structure, just as for PdTe_2. These are evident in our supercell calculations (Fig. <ref>(b)) and well matched by our experimental ARPES measurements (Fig. <ref>(c,d) and Supplementary Fig. S9). The spin-orbit coupling of the Se manifold is weaker than that of Te, evident from both the smaller splitting between E-like states and from the smaller anti-crossing gaps which open in the vicinity of unprotected band crossings. The local band gaps in which the TSSs reside are therefore smaller than in PdTe_2, causing the upper branches of the TSSs to rapidly “turn over” to maintain the surface-bulk connectivity as required by their topological origin. Nonetheless, in contrast to the common picture for well-known topological insulators such as Bi_2Se_3, the band inversions leading to such topological surface states, as well as the bulk Dirac cone formation, naturally survive this reduction in spin-orbit coupling strength. Indeed, the relevant energy scales for stabilising the topological surface states here are the p_z-derived bandwidth vs. the trigonal crystal field splitting. While increased spin-orbit coupling strength will open larger hybridisation gaps, our findings (see Fig. <ref>(c,d)) demonstrate how the topological surface states survive as topological surface resonances even in the limit where the hybridisation gap opened is significantly smaller than the dispersion of bulk electronic states around this. They should therefore be a very robust feature of the intrinsic p-orbital band inversions found here. The recent observation of a type-II BDP in PtTe_2 <cit.> can also be understood within the same classification that we present here, establishing our findings as generic to the group-10 TMD metals and semi-metals. <cit.> We further show in Supplementary Fig. S7 and Supplementary Fig. S8(a,b) how such bulk band crossings/inversions also occur for the high-temperature 1T phase of the group-9 TMD IrTe_2. In this system, the trigonal symmetry which protects the BDP is lost upon cooling through a structural phase transition, <cit.> raising prospects to investigate temperature-driven topological phase transitions and mass gap opening of the type-II Dirac fermions. Fig. <ref> shows how similar states are also stabilised for a different TMD polymorph: the 2H structure of WSe_2 (space group: P6_3/mmc). Our bulk band structure calculations along k_z (Fig. <ref>(a)), which are in good agreement with previous photon energy-dependent ARPES measurements, <cit.> reveal a strongly dispersive band with significant p_z orbital character. This is intersected by very weakly dispersing bands at around 1.5 and 1.9 eV (2.7 and 2.9 eV) below the valence band top, which we attribute as the anti-bonding (bonding) E-like bands, respectively.
Unlike for PdTe_2, the Fermi level lies in a band gap of both the transition-metal (formally in a d^2 configuration) and chalcogen-derived states, and so this system is a semiconductor. <cit.> Moreover, transition-metal and chalcogen-derived states are no longer well separated in energy, and so the E-like bands have a strong transition-metal d-orbital character intermixed with their Se p_x,y character. The more localised nature of the d vs. p orbitals, together with an increased inter-layer separation, leads to a significantly smaller out-of-plane dispersion of these E-like bands than for PdTe_2. Finally, the unit cell contains two MX_2 (M = transition metal, X = chalcogen) layers in the 2H structure, as compared to a single such layer in the 1T structure. This results in an effective backfolding of the bands about the Brillouin zone boundary along k_z, doubling each of the R_5,6^± and R_4'^± bands, as seen in our calculations. The C_3v-symmetry enforced degeneracy of the R_4-R_5,6 crossings discussed above, however, still holds. Now, therefore, the crossing of the dispersive R_4 band with each of the back-folded R_5,6 bands leads to a pair of closely-spaced bulk Dirac cones. These are almost maximally tilted and, unlike for PdTe_2, now additionally host significant transition-metal character at the BDP. Intriguingly, as the back-folding by definition changes the sign of the band's group velocity, this leads to stacked Dirac points of opposite character (type-II and type-I for the upper and lower crossings, respectively). We observe clear spectral signatures of the in-plane dispersion of these Dirac cones (Fig. <ref>(c)), but are unable to resolve a splitting of the two cones experimentally due to their small energy separation and strong three-dimensional dispersions. Both crossings of the R_4 and back-folded R_4' bands become gapped, and would therefore be expected to host topological surface states/resonances as in PdTe_2. One such band gap is too small to resolve experimentally, while for the lower branch a clear inverted band gap is opened. Our supercell calculations (Fig. <ref>(b)) indeed reveal the TSS located within this band gap, spanning between the manifold of bulk states above and below the band gap. Although the resulting band gap is small, the in-plane dispersion is significant. Our ARPES and spin-ARPES measurements (Fig. <ref>(c) and Supplementary Fig. S10) show clear evidence for the existence of the resulting surface state, its band-gap crossing nature, and its chiral spin polarisation. As shown in Supplementary Fig. S8(c-f), we find similar bulk Dirac cones and inverted band gaps in other 2H-structured TMDs, TaSe_2 and NbSe_2 (space group: P6_3/mmc), despite them hosting a different layer stacking sequence as compared to WSe_2. This opens the exciting prospect to investigate the influence of charge order, which these compounds host <cit.>, and the consequent reconstruction of the electronic structure, on the topological and bulk Dirac states.

Tuneability and robustness against inversion symmetry breaking
The principle underlying the formation of bulk Dirac cones and topological surface states here is very general, and can be expected to occur across numerous materials systems. Moreover, our demonstration of their existence across multiple TMDs indicates that there is still significant opportunity to tailor the properties, locations, and nature of these states. To show this explicitly, we construct a tight-binding model for our minimal 2-site system considered in Fig. <ref>. Fig.
<ref> shows how varying the inter-layer hopping both within and between neighbouring unit cells, as well as adjusting the ratio of σ-type and π-type inter-layer interactions, leads to a rich array of coexisting topological states and phases. Controlling these experimentally should be possible by varying the degree of covalency in the system and tuning the out-of-plane lattice parameter via atomic substitution or applied uniaxial pressure or strain along the c-axis. Such a strain field would not affect the trigonal symmetry which protects the Dirac points within the inverted phases, but could be used to traverse the phase boundaries, providing powerful routes to tuneable topological phase transitions and the creation or annihilation of bulk Dirac points in TMDs. Moreover, the insights gained here suggest strategies for the design of Dirac and topological phases. As an illustration of this, we consider replacing one of the Te layers in PdTe_2 by Se. In contrast to PdTe_2, this structure is non-centrosymmetric. Typically, such a loss of inversion symmetry would be assumed to lift the spin degeneracy, splitting the Dirac point into a pair of Weyl points. In contrast, since the PdTeSe structure we consider retains trigonal symmetry, we find that both spin-degeneracy and the protected Dirac crossing are maintained along the rotational axis (Γ-A), but spin degeneracy is lost elsewhere (Supplementary Fig. S11). The Dirac point in this case can therefore be considered as a protected degeneracy of two Weyl points that would not typically be expected. Our study thus opens routes to the rational design of topological materials, and indicates just how wide a purview topological band structure effects can be expected to have.

Acknowledgements
We thank R. Arita and N. Nagaosa for useful discussions and feedback and F. Bertran and P. Le Fèvre for ongoing technical support of the CASSIOPEE beam line at SOLEIL. We gratefully acknowledge support from the CREST, JST (Nos. JPMJCR16F1-16F2), the Leverhulme Trust, the Engineering and Physical Sciences Research Council, UK (Grant Nos. EP/M023427/1 and EP/I031014/1), the Royal Society, the Japan Society for Promotion of Science (Grant-in-Aid for Scientific Research (S); No. 24224009 and (B); No. 16H03847), the International Max-Planck Partnership for Measurement and Observation at the Quantum Limit, Thailand Research Fund and Suranaree University of Technology (Grant No. BRG5880010) and the Research Council of Norway through its Centres of Excellence funding scheme, project number 262633, “QuSpin”, and through the Fripro program, project number 250985 “FunTopoMat”. This work has been partly performed in the framework of the nanoscience foundry and fine analysis (NFFA-MIUR Italy, Progetti Internazionali) facility. B.-J. Y. was supported by the Institute for Basic Science in Korea (Grant No. IBS-R009-D1), Research Resettlement Fund for the new faculty of Seoul National University, and Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (Grant No. 0426-20150011). OJC, LB, JMR and VS acknowledge EPSRC for PhD studentship support through grant Nos. EP/K503162/1, EP/G03673X/1, EP/L505079/1, and EP/L015110/1. IM acknowledges PhD studentship support from the IMPRS for the Chemistry and Physics of Quantum Materials. We thank Diamond Light Source (via Proposal Nos.
SI9500, SI12469, SI13438, and SI14927), and the Elettra, SOLEIL, and Max-Lab synchrotrons for access to Beamlines I05, APE, CASSIOPEE, and i3, respectively, that contributed to the results presented here.

Author Contributions. MSB and BJY performed the theoretical calculations. The experimental data was measured by OJC, JFe, LB, JMR, IM, FM, VS, DB, SPC, MJ, JWW, TE, WM, and PDCK, and analysed by OJC. ML, TB, JFu, IV, JR, TKK, and MH maintained the ARPES/SARPES end stations and provided experimental support. KO, MA, and TS synthesised the measured samples. PDCK, OJC, and MSB wrote the manuscript with input and discussion from co-authors. PDCK and MSB were responsible for overall project planning and direction.

Author Information. Reprints and permissions information is available at www.nature.com/reprints. The authors declare no competing financial interests. Correspondence and requests for materials should be addressed to PDCK or MSB.

References
Hasan, M. Z. & Kane, C. L. Colloquium: Topological insulators. Rev. Mod. Phys. 82, 3045-3067 (2010).
Young, S. M. et al. Dirac Semimetal in Three Dimensions. Phys. Rev. Lett. 108, 140405 (2012).
Liu, Z. K. et al. Discovery of a Three-Dimensional Topological Dirac Semimetal, Na_3Bi. Science 343, 864-867 (2014).
Wang, Z. et al. Dirac semimetal and topological phase transitions in A_3Bi (A=Na, K, Rb). Phys. Rev. B 85, 195320 (2012).
Wang, Z. et al. Three-dimensional Dirac semimetal and quantum transport in Cd_3As_2. Phys. Rev. B 88, 125427 (2013).
Borisenko, S. et al. Experimental Realization of a Three-Dimensional Dirac Semimetal. Phys. Rev. Lett. 113, 027603 (2014).
Yang, B.-J. & Nagaosa, N. Classification of stable three-dimensional Dirac semimetals with nontrivial topology. Nature Commun. 5, 4898 (2014).
Yang, B.-J., Morimoto, T. & Furusaki, A. Topological charges of three-dimensional Dirac semimetals with rotation symmetry. Phys. Rev. B 92, 165120 (2015).
Xu, S.-Y. et al. Discovery of a Weyl fermion semimetal and topological Fermi arcs. Science 349, 613-617 (2015).
Yang, L. X. et al. Weyl semimetal phase in the non-centrosymmetric compound TaAs. Nature Phys. 11, 728-732 (2015).
Lv, B. Q. et al. Observation of Weyl nodes in TaAs. Nature Phys. 11, 724-727 (2015).
Wan, X. et al. Topological semimetal and Fermi-arc surface states in the electronic structure of pyrochlore iridates. Phys. Rev. B 83, 205101 (2011).
Weng, H. et al. Weyl Semimetal Phase in Noncentrosymmetric Transition-Metal Monophosphides. Phys. Rev. X 5, 011029 (2015).
Lv, B. Q. et al. Experimental Discovery of Weyl Semimetal TaAs. Phys. Rev. X 5, 031013 (2015).
Borisenko, S. et al. Time-Reversal Symmetry Breaking Type-II Weyl State in YbMnBi_2. arXiv:1507.04847 (2016).
Huang, L. et al. Spectroscopic evidence for a type II Weyl semimetallic state in MoTe_2. Nature Mater. 15, 1155-1160 (2016).
Deng, K. et al. Experimental observation of topological Fermi arcs in type-II Weyl semimetal MoTe_2. Nature Phys. 12, 1105-1110 (2016).
Tamai, A. et al. Fermi Arcs and Their Topological Character in the Candidate Type-II Weyl Semimetal MoTe_2. Phys. Rev. X 6, 031021 (2016).
O'Brien, T. E., Diez, M. & Beenakker, C. W. J. Magnetic Breakdown and Klein Tunneling in a Type-II Weyl Semimetal. Phys. Rev. Lett. 116, 236401 (2016).
Soluyanov, A. A. et al. Type-II Weyl semimetals. Nature 527, 495-498 (2015).
Xu, Y. et al. Structured Weyl points in Spin-Orbit Coupled Fermionic Superfluids. Phys. Rev. Lett. 115, 265304 (2015).
McCormick, T. M., Kimchi, I. & Trivedi, N. Minimal models for topological Weyl semimetals. Phys. Rev. B 95, 075133 (2017).
Xiong, J. et al. Evidence for the chiral anomaly in the Dirac semimetal Na_3Bi. Science 350, 413-416 (2015).
Gooth, J. et al. Experimental signatures of the mixed axial–gravitational anomaly in the Weyl semimetal NbP. Nature 547, 324-327 (2017).
Ferreiros, Y., Zyuzin, A. A. & Bardarson, J. H. Anomalous Nernst and Thermal Hall Effects in Tilted Weyl Semimetals. arXiv:1707.01444 (2017).
Saha, S. & Tewari, S. Anomalous Nernst effect in type-II Weyl semimetals. arXiv:1707.04117 (2017).
McCormick, T. M., McKay, R. C. & Trivedi, N. The semiclassical theory of anomalous transport in type-II topological Weyl semimetals. arXiv:1707.06222 (2017).
Zhang, H. et al. Topological insulators in Bi_2Se_3, Bi_2Te_3 and Sb_2Te_3 with a single Dirac cone on the surface. Nature Phys. 5, 438-442 (2009).
Belopolski, I. et al. Criteria for Directly Detecting Topological Fermi Arcs in Weyl Semimetals. Phys. Rev. Lett. 116, 066802 (2016).
Xu, X., Yao, W., Xiao, D. & Heinz, T. F. Spin and pseudospins in layered transition metal dichalcogenides. Nature Phys. 10, 343-350 (2014).
Wang, Q. H., Kalantar-Zadeh, K., Kis, A., Coleman, J. N. & Strano, M. S. Electronics and optoelectronics of two-dimensional transition metal dichalcogenides. Nature Nano. 7, 669-712 (2012).
Chhowalla, M. et al. The chemistry of two-dimensional layered transition metal dichalcogenide nanosheets. Nature Chemistry 5, 263-275 (2013).
Raub, C. J. et al. The occurrence of superconductivity in sulfides, selenides, tellurides of Pt-group metals. Journal of Physics and Chemistry of Solids 26, 2051-2057 (1965).
Note: We note that a very recent arXiv posting (arXiv:1612.06946) also reports the observation of type-II Dirac fermions in PdTe_2, consistent with our findings.
Fei, F. et al. Nontrivial Berry phase and type II Dirac transport in layered material PdTe_2. arXiv:1611.08112 (2016).
Yan, L. et al. Identification of Topological Surface State in PdTe_2 Superconductor by Angle-Resolved Photoemission Spectroscopy. Chinese Phys. Lett. 32, 067303 (2015).
Xu, S.-Y. et al. Observation of Fermi arc surface states in a topological metal. Science 347, 294-298 (2015).
Yi, H. et al. Evidence of Topological Surface State in Three-Dimensional Dirac Semimetal Cd_3As_2. Sci. Rep. 4, 6106 (2014).
Chang, T.-R. et al. Type-II Topological Dirac Semimetals: Theory and Materials Prediction (VAl_3 family). arXiv:1606.07555 (2016).
Zhang, P. et al. A precise method for visualizing dispersive features in image plots. Rev. Sci. Instruments 82, 043712 (2011).
Guo, G. Y. & Liang, W. Y. The electronic structures of platinum dichalcogenides: PtS_2, PtSe_2 and PtTe_2. J. Phys. C: Solid State Phys. 19, 995-1008 (1986).
Yan, M. et al. Lorentz-violating type-II Dirac fermions in transition metal dichalcogenide PtTe_2. arXiv:1607.03643 (2016).
Huang, H., Zhou, S. & Duan, W. Type-II Dirac fermions in the PtSe_2 class of transition metal dichalcogenides. Phys. Rev. B 94, 121117(R) (2016).
Cao, H. et al. Origin of the phase transition in IrTe_2: Structural modulation and local bonding instability. Phys. Rev. B 88, 115122 (2013).
Fang, A. F. et al. Structural phase transition in IrTe_2: A combined study of optical spectroscopy and band structure calculations. Sci. Rep. 3, 1153 (2013).
Riley, J. M. et al. Direct observation of spin-polarized bulk bands in an inversion-symmetric semiconductor. Nature Phys. 10, 835-839 (2014).
Riley, J. M. et al. Negative electronic compressibility and tunable spin splitting in WSe_2. Nature Nano. 10, 1043-1047 (2015).
Wilson, J. A., Di Salvo, F. J. & Mahajan, S. Charge-Density Waves in Metallic, Layered, Transition-Metal Dichalcogenides. Phys. Rev. Lett. 32, 882-885 (1974).
Borisenko, S. V. et al. Pseudogap and Charge Density Waves in Two Dimensions. Phys. Rev. Lett. 100, 196402 (2008).
Yokoya, T. et al. Fermi Surface Sheet-Dependent Superconductivity in 2H-NbSe_2. Science 294, 2518-2520 (2001).
Bawden, L. et al. Spin-valley locking in the normal state of a transition-metal dichalcogenide superconductor. Nature Commun. 7, 11711 (2016).

Methods
Calculations: The bulk calculations were performed within density functional theory (DFT) using the Perdew-Burke-Ernzerhof exchange-correlation functional as implemented in the WIEN2K program <cit.>. Relativistic effects including spin-orbit coupling were fully taken into account.
For all atoms, the muffin-tin radius R_MT was chosen such that its product with the maximum modulus of the reciprocal vectors K_max becomes R_MT K_max = 7.0. The Brillouin zone sampling of the 1T (2H) structures was carried out using a 20× 20× 20 (20× 20× 10) k-mesh. For the surface calculations, a 100-unit tight-binding supercell was constructed using maximally localized Wannier functions <cit.>. The p-orbitals of the chalcogen and the d-orbitals of the transition metal atoms were chosen as the projection centres. The phase diagrams and related band structures shown in Fig. <ref> were constructed using a 12-band tight-binding model, considering nearest-neighbour p-p hoppings between the chalcogen sites in a trigonal unit cell similar to that of 1T-TMDs, but without any transition metal element. The basis set is accordingly composed of two sites, j=1 and 2, and each site contains six spin-orbital components, |p_i,j,σ⟩, where i=x,y,z and σ=↑, ↓. The hopping integrals t_ij,i' j'=⟨ p_ij|H|p_i' j'⟩ were calculated using the Slater-Koster method by choosing appropriate values for the on-site crystal field terms Δ_CFS and the two-centre bond integrals t_ii'σ and t_ii'π <cit.>. For simplicity, the effect of the spin-orbit interaction was approximated by only considering the on-site contribution H_so=λL·S, where L and S are the orbital and spin angular momentum operators, respectively. Considering the hopping paths shown in Fig. <ref>(a), each band structure calculation required setting eight hopping parameters t_kσ, t_kπ, where k=1-4, as well as Δ_CFS and λ. We fix t_1σ= t_1π=t_2σ= t_2π=1.0, the crystal-field splitting Δ_CFS=1, and the spin-orbit coupling λ=0.3. Intra-unit-cell inter-layer hopping is assumed to be of π-type only (t_3π; t_3σ=0). The other parameters were varied to produce the representative band structures shown in Fig. <ref>(c). Inter-unit-cell hopping is assumed to be dominated by the p_z orbitals and is therefore predominantly of σ-type (t_4σ), although we also consider the effect of finite π-type interactions between neighbouring unit cells (t_4π≪ t_4σ).

ARPES: ARPES measurements of PdTe_2 and PtSe_2 were performed at the I05 beamline of Diamond Light Source, UK, and most spin-integrated WSe_2 measurements at the CASSIOPEE beamline of Synchrotron SOLEIL, France. Additional ARPES measurements of WSe_2 were taken at the APE beamline of Elettra Sincrotrone Trieste, Italy, along with the majority of the spin-resolved ARPES measurements. Additional spin-resolved measurements of PdTe_2 were obtained from the I3 beamline of MAX IV Laboratory, Sweden. High-quality single crystal samples, grown by chemical vapour transport, were cleaved in situ at temperatures ranging between 9 and 15 K. Measurements were performed using either p-polarised (PdTe_2, PtSe_2, WSe_2) or circularly polarised (WSe_2) light, and using photon energies in the range hν=24-132 eV. Scienta R4000 hemispherical analysers, with a vertical entrance slit and the light incident in the horizontal plane, were used at Diamond and SOLEIL. A VG-Scienta DA30 analyser (Elettra), fitted with two very-low-energy electron diffraction (VLEED) based spin polarimeters <cit.>, was utilised for the majority of the spin-resolved measurements along three momentum directions, while additional measurements were performed using a mini-Mott setup on a Scienta R4000 analyser (MAX IV).
The finite spin-detection efficiency was corrected using detector-dependent Sherman functions ranging between S = 0.17 ± 0.03 and S = 0.43 ± 0.03, as determined by fitting the spin polarisation of reference measurements of the Au(111) Rashba-split surface state for each experimental set-up utilised. Spin-resolved EDCs were determined according to

I_i^↑,↓ = I_i^tot (1 ± P_i)/2,

where i={x,y,z}, I_i^tot = (I_i^+ + I_i^-), and I_i^± is the measured intensity for a positively or negatively magnetised detector, corrected by a relative efficiency calibration. The final spin polarisation is defined as follows:

P_i = (I_i^+ - I_i^-) / (S (I_i^+ + I_i^-)),

where S is the relevant Sherman function for the detector in use. Quantitative spin-polarisation magnitudes were determined from the relative areas of Lorentzian peak fits to energy distribution curves (EDCs) originating from oppositely magnetised detectors. A Shirley background and Gaussian broadening were included in this analysis. To determine the PdTe_2 k_z dispersion from photon-energy-dependent ARPES, we employed a free-electron final state model,

k_z = √(2m_e/ħ^2) (V_0 + E_k cos^2θ)^1/2,

where θ is the in-plane emission angle and V_0 is the inner potential. We find best agreement to density-functional theory calculations taking an inner potential of 16 eV and a c-axis lattice constant of 5.13 Å.

Data availability statement: The data that underpins the findings of this study are available at http://dx.doi.org/10.17630/27a2dc90-470f-4e69-be1e-5ebb072db739.

References (Methods)
Blaha, P. et al. WIEN2K package, Version 13.1 (2013).
Souza, I. et al. Maximally localized Wannier functions for entangled energy bands. Phys. Rev. B 65, 035109 (2001).
Mostofi, A. A. et al. Wannier90: a tool for obtaining maximally localized Wannier functions. Comp. Phys. Commun. 178, 685-699 (2008).
Kunes, J. et al. WIEN2WANNIER: from linearized augmented plane waves to maximally localized Wannier functions. Comp. Phys. Commun. 181, 1888-1895 (2010).
Slater, J. C. & Koster, G. F. Simplified LCAO Method for the Periodic Potential Problem. Phys. Rev. 94, 1498-1524 (1954).
Bigi, C. et al. Very efficient spin polarization analysis (VESPA): new exchange scattering-based setup for spin-resolved ARPES at APE-NFFA beamline at Elettra. J. Synchrotron Rad. 24, 750-756 (2017).
http://arxiv.org/abs/1702.08177v2
{ "authors": [ "M. S. Bahramy", "O. J. Clark", "B. -J. Yang", "J. Feng", "L. Bawden", "J. M. Riley", "I. Marković", "F. Mazzola", "V. Sunko", "D. Biswas", "S. P. Cooil", "M. Jorge", "J. W. Wells", "M. Leandersson", "T. Balasubramanian", "J. Fujii", "I. Vobornik", "J. E. Rault", "T. K. Kim", "M. Hoesch", "K. Okawa", "M. Asakawa", "T. Sasagawa", "T. Eknapakul", "W. Meevasana", "P. D. C. King" ], "categories": [ "cond-mat.mtrl-sci", "cond-mat.supr-con" ], "primary_category": "cond-mat.mtrl-sci", "published": "20170227080936", "title": "Ubiquitous formation of bulk Dirac cones and topological surface states from a single orbital manifold in transition-metal dichalcogenides" }
Numerical models for the atmospheres of magnetic ApBp stars have in the past dealt only with centred dipole magnetic field geometries. These models include atomic diffusion that stratifies the abundances of metals according to the local magnetic field strength and the direction with respect to the surface normal. The magnetic variations with rotational phase of most well observed stars however reveal that this assumption is far too simplistic. In this work we establish for the first time a three-dimensional (3D) model with abundance stratifications arising from atomic diffusion of 16 metals, adopting a non-axisymmetric magnetic field geometry inspired by the configuration derived for a real ApBp star. We find that the chemical elements are distributed in complex patterns in all three dimensions, far from the simple rings that have been proposed as the dominant abundance structures from calculations that assume a perfectly centred dipolar magnetic geometry. atomic diffusion – stars: abundances – stars: chemically peculiar – stars: magnetic fields

§ INTRODUCTION

Our current understanding of magnetic ApBp star atmospheres – in which the vertical and horizontal distributions of chemical abundances are not homogeneous – has recently been discussed in detail by <cit.> within the framework of atomic diffusion theory. Numerical results on diffusion in ApBp star atmospheres have been applied to idealised stars, invariably under the assumption of a magnetic geometry based on a simple centred dipole <cit.>. It has been shown that when the stratification process has reached equilibrium – i.e. the particle flux is zero – many metals (but not all of them) are expected to exhibit large overabundances around the “magnetic equator” (where the field lines are tangent to the surface, 90±5), often in layers above logτ≈-2.0. The same models predict mild abundance anomalies in the polar regions, and horizontally homogeneous stratifications of elements in deep layers where atomic diffusion is no longer sensitive to the magnetic field. Note that for atmospheres of non-magnetic ApBp stars (mainly HgMn stars), abundance stratifications depend only on depth; they are therefore uniformly distributed over the surface <cit.>, except for those cases where very weak magnetic fields (of a few Gauss) exist. For chemical elements with low cosmic abundance, the latter will help atomic diffusion to form field-dependent overabundance clouds at very high altitude. According to the numerical models mentioned above, abundance rings or belts around magnetic ApBp stars should be commonly observed. This is not the case <cit.>. Several reasons may be invoked to explain the lack of agreement between the surface abundance distributions of a given star obtained through Zeeman Doppler mapping (ZDM), and recent numerical diffusion models. For instance, will the application of present-day ZDM algorithms to spectropolarimetric data always (or ever) lead to the recovery of the predicted abundance rings? This problem has recently been addressed by <cit.>. On the other hand, <cit.> has listed a number of improvements that have yet to be included in numerical models in order to fully describe the build-up of abundance stratifications in individual stars. Among these improvements we find the need to go beyond the strictly dipolar geometry that has been used until now.
This is the main purpose of our present work. We have computed a grid of 81 plane-parallel model atmospheres (T_eff = 10 000 K, log g = 4.0), adopting various magnetic field strengths and orientations; the field-dependent equilibrium abundance stratifications result from the simultaneous atomic diffusion of 16 metals (Sec. <ref>). With newly developed tools we are now in a position to establish from this grid of models the 3D distribution of any of the 16 elements for a given magnetic geometry (Sec. <ref>). In a final step, we have computed the distribution of two metals (Cr and Fe) for a realistic stellar magnetic configuration (Sec. <ref>). The results are shown and discussed in Sec. <ref> and Sec. <ref>.§ MODELLING THREE-DIMENSIONAL ABUNDANCE DISTRIBUTIONS Because the geometrical thickness of the atmosphere is very small compared to the stellar radius, the vertical timescales of abundance stratification processes due to atomic diffusion are much shorter (by at least 4 orders of magnitude) than the timescales for the horizontal migration of elements over the stellar surface <cit.>. Since we are considering static atmospheres, abundance stratifications are due solely to vertical atomic diffusion; it is the local effect of the magnetic field orientation and strength on the vertical component of the diffusion velocity that is responsible for the horizontal abundance structures over the stellar surface. We therefore assume, as in <cit.>, that the surface of the star is made up of a juxtaposition of independent facets, each of them to be calculated in the approximation of a plane-parallel atmosphere. The facets differ from each other by the magnetic field strength and orientation, and by the ensuing abundance stratifications (the magnetic field is assumed to be constant with depth). There is thus a slight difference between the atmospheric model used for a given facet and the model for an adjacent facet. Effective temperature and gravity are taken to be identical for all facets; potential problems and inconsistencies that could arise from this simplified 1D treatment of the local stellar atmospheres have been discussed by <cit.>. §.§ Stellar atmospheres and magnetic-field dependent stratifications The atmospheric model for a given facet is obtained by interpolation in a grid of models (see Sec. <ref>). The models making up this grid have been computed as described in <cit.>: they result from calculations of equilibrium stratifications with the help of the CaratStrat code, i.e. the vertical abundance distributions are self-consistent with the atmospheric structure computed with Atlas12 <cit.>. The grid used in this work is composed of 81 models with T_eff = 10 000 K and log g = 4.0, and field strengths of 0, 1000, 5000, 5500, 6250, 7500, 10000, 11000, 12500, 15000, 20000 G. Except in the 0 G case, the models have been established for the following angles (with respect to the vertical): 0°, 60°, 75°, 80°, 83°, 86°, 88°, 90°. As in <cit.>, 16 metals are allowed to diffuse simultaneously (Mg, Al, Si, P, Ca, Ti, V, Cr, Mn, Fe, Co, Ni, Cu, Zn, Ga and Hg). This grid is 4 times larger than the one used by <cit.> for a given effective temperature.
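As a quick bookkeeping check of the grid size (our own illustration, not part of the authors' pipeline), one field-free model plus every strength-angle combination indeed yields 81 models:

# One B = 0 model plus 10 non-zero field strengths x 8 field angles.
strengths = [1000, 5000, 5500, 6250, 7500, 10000, 11000, 12500, 15000, 20000]  # G
angles = [0, 60, 75, 80, 83, 86, 88, 90]  # degrees from the vertical
grid = [(0, 0)] + [(b, a) for b in strengths for a in angles]
assert len(grid) == 81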
The parallel computations required about 43000 hours of equivalent monoprocessor time on the BullX DLC supercomputer at CINES[Centre Informatique National de l'Enseignement Supérieur (Montpellier, France), see: https://www.cines.fr/calcul/materiels/occigen/.]. As stressed by <cit.>, the diffusion velocity is particularly sensitive to the magnetic field orientation at and very near to the “magnetic equator” (90° ± 5°). One therefore expects strong abundance contrasts in the vicinity of these horizontal fields, much less so for magnetic field angles near 0° or 180° (the magnetic poles). For this reason, the grid density increases for angles > 75°. Note that velocities are identical for 0° and 180°. §.§ Establishing the 3D distributions of metals In order to obtain 3D abundance distributions over the entire atmosphere of the star, we first determine the set of facets necessary to obtain a satisfactory spatial resolution. The facets have the shapes and distribution of the area elements <cit.>. Field strengths and angles are specified at the centre of each area element.In the present study, the model atmospheres are composed of N = 72 layers. We divide each model into a certain number of slabs. We take for instance a slab of l layers (from layer number l_1 to l_2, with identical numbers for all the models) and for each chemical element we compute the average abundance (with [H] = 12) over the l layers of the slab – no weighting is applied. For our grid of 81 models we thus have a table with 81 rows and 3 columns: the field strength [Gauss], the field angle [°], and the mean abundance in the slab. In the next step we create a matrix consisting of N_B rows and N_A columns. The rows represent equally spaced field strengths, the columns equally spaced field angles, and the matrix element gives the abundance. This matrix is obtained by Voronoi interpolation applied to the 81×3 table. Here we have chosen N_B = 400, corresponding to field strengths from 0 to 20000 G in steps of 50 G, and N_A = 180, corresponding to angles from 0° to 90° in steps of 0.5°. The abundance value for a given element in each slab of a given facet is then obtained by taking the matrix element closest to the field parameters of the facet (there is one 400×180 matrix per slab and per chemical element). Calculations and visualisation of the maps have been carried out with the software Igor Pro (v7)[See https://www.wavemetrics.com] with its built-in procedures for interpolation.§ THE MAGNETIC GEOMETRY To be as realistic as possible, we choose the magnetic configuration of the well studied Ap star HD 154708, whose field geometry is definitely non-axisymmetric <cit.>. The field structure can be approximated by means of the eccentric, tilted oblique rotator introduced by <cit.>, which is characterised by a dipole at a certain distance from the centre of the star, with its axis not going through the centre. In the case of HD 154708, mean field modulus, mean longitudinal field and detailed intensity profiles of two Si lines are satisfactorily predicted with this particular model. HD 154708 is by no means the only star with a clearly visible phase shift between the respective variations in field modulus and in the longitudinal field. Let us mention <cit.> and <cit.> who have successfully modelled the magnetic geometries of HD 126515 <cit.> and of HD 137909 <cit.>. The star HD 18078 <cit.> constitutes yet another interesting example.
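To make the facet lookup of Sec. <ref> concrete, the following minimal Python sketch (our own illustration; array names and the zero-filled matrix are assumptions, and the authors' actual implementation uses Igor Pro) performs the nearest-element selection in a precomputed 400×180 abundance matrix:

import numpy as np

# Hypothetical abundance matrix for one slab and one chemical element, as
# produced by the Voronoi interpolation of the 81-row (B, angle, abundance) table.
N_B, N_A = 400, 180
B_axis = np.linspace(0.0, 20000.0, N_B)   # field strength [G], ~50 G spacing
A_axis = np.linspace(0.0, 90.0, N_A)      # field angle [deg], ~0.5 deg spacing
abund = np.zeros((N_B, N_A))              # to be filled by the interpolation step

def facet_abundance(b_gauss, angle_deg):
    """Abundance for a facet: matrix element closest to its field parameters."""
    a = angle_deg if angle_deg <= 90.0 else 180.0 - angle_deg  # 0 and 180 deg equivalent
    iB = int(np.argmin(np.abs(B_axis - b_gauss)))
    iA = int(np.argmin(np.abs(A_axis - a)))
    return abund[iB, iA]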
It is also important to realise that detailed Zeeman Doppler mapping almost invariably results in magnetic and abundance maps devoid of any simple symmetry: take for example the magnetic field map of 53 Cam by <cit.>, and the map of α^2 CVn by <cit.>. Magnetic and abundance maps of HD 75049 <cit.>, of HD 32633 <cit.>, and of HD 125248 <cit.> all lack any discernible symmetries.There is thus no reason to believe that the field geometry of HD 154708 is untypical for ApBp stars. Being as yet unable to model the stratifications and their build-up in the atmospheres of specific stars <cit.>, we thus felt free to play with the geometry of HD 154708. We have scaled the field strength to obtain abundance stratifications of sufficient contrast, while remaining inside the domain of the field parameters of our grid of models. The field strength ranges from about 5 kG to about 17 kG, which are quite common values for a number of observed magnetic ApBp stars. The geometry underlying the results presented in Sec. <ref> is displayed in Fig. <ref> (field strength) and in Fig. <ref> (field angle with respect to the vertical). The total number of facets used in this work for the description of the stellar surface is 7158.§ RESULTS To stay within the framework of our recently published studies, we have again chosen a main-sequence atmosphere with T_eff = 10 000 K, log g = 4.0. A small part of the grid presented in Sec. <ref> (prior to its extension to a much larger range in magnetic field parameters) was already used by <cit.>. The reader can peruse this paper for complementary descriptions/discussions of the (vertical) abundance stratifications. For the sake of conciseness, here we only present and discuss the equilibrium 3D distributions of Cr and Fe, adopting the magnetic field described in Sec. <ref>.In Fig. <ref> (chromium) and Fig. <ref> (iron) we show a tomographic view of the abundance distributions over the whole star as a function of depth. The 6 surface projections in these figures correspond to the abundances inside 6 slabs (i.e. 6 depth ranges) as defined in Sec. <ref>. We have mentioned previously that in our grid, the models are composed of 72 layers (-4.5 ≤ log τ_5000 ≤ 2.0), but we have chosen to look only at 6 contiguous slabs in the usual line-forming region (-3.0 ≤ log τ_5000 ≤ 0.0); each slab has an approximate optical thickness of 0.5 dex. Indeed, as seen in Fig. <ref> and Fig. <ref>, 6 slabs provide sufficient resolution for visualisation of the vertical dimension. The optical depths (τ_5000) used to label the slabs are the ones taken from the model computed for solar homogeneous abundances (the first converged model in a run); this initial model is common to all the 81 models of the grid. We have verified that the τ_5000 values of each layer of the final 81 equilibrium models differ by at most ±0.2 dex from those of the initial model.As expected, the uppermost slab (-3.0 ≤ log τ_5000 ≤ -2.5) exhibits a finely shaped equatorial belt of overabundances for both elements. This is because the diffusion velocity is extremely sensitive to the magnetic field orientation at small optical depths. However, the abundances inside the rings are not uniform; for instance, even in the uppermost slab the overabundances in the left part of the plot are about 0.8 dex lower than in the right part. For this reason, we will henceforth speak of a quasi-ring rather than of a ring.
In addition, the abundance distributions change drastically as one goes deeper: the overabundant equatorial belt changes to large spots below log τ_5000 = -2.0, parts of the belt become underabundant, and overabundances appear near the magnetic poles. We want to draw the attention of the reader to the fact that the relation between abundances and the colour scale differs from one slab to the other. Since the colour scale itself is the same for all the slabs, simple visual inspection may overestimate the effects of abundance inhomogeneities on the emergent line profiles for the highest slabs. Indeed, the highest slabs contribute much less to the line profiles than the deepest slabs. What makes the 3D maps shown in Figs. <ref> and <ref> so complex? Abundances are seen to vary over the star in different ways at different depths, giving rise to apparent abundance spots and pseudo-rings devoid of any symmetry. This is essentially due to the non-axisymmetric field structure assumed; the curve tracing the location of the horizontal field does not coincide with any curve of constant field strength, contrary to what happens in an axisymmetric field geometry. In the latter case, stratifications for a given magnetic latitude would be strictly the same for all magnetic longitudes; stratifications would only change along the magnetic meridians.§ DISCUSSION In order to achieve better predictions of 3D abundance stratifications in magnetic ApBp stars, we have computed a grid of 81 atmospheric models (T_eff = 10 000 K, log g = 4.0) with stratified abundances resulting from the simultaneous diffusion of 16 metals. These 81 models cover the range of magnetic strengths between 0 G and 20 kG, with a grid of magnetic inclination angles that reflects the sensitivity of the stratification profiles to the magnetic field. Based on this grid, we have modelled 3D distributions of chemical elements for a non-axisymmetric magnetic field geometry instead of the usual perfectly centred axisymmetric dipolar fields. In this paper, we have adopted a magnetic field geometry inspired by the published model of a real star (Fig. <ref> and Fig. <ref>). Only two chemical elements (Cr and Fe, Fig. <ref> and Fig. <ref> respectively) are discussed. The results for 14 other chemical elements are available in our archives, but we do not deem it necessary to include them in this discussion. Our 3D computations could easily be extended to any other magnetic field geometry, provided that a map is available with modulus and orientation of the field vectors.It is shown in Sec. <ref> that above log τ_5000 = -2.5 for Cr, and -2.0 for Fe, an overabundant quasi-ring exists in places where the magnetic lines are inclined by 90° ± 5° with respect to the vertical. This is consistent with previous calculations for axisymmetric dipoles. In deeper layers however this simple structure clearly changes: the quasi-ring exhibits different vertical and horizontal extensions, depending on longitude and latitude. Parts of the quasi-ring may become underabundant, large structures or “spots” appear close to some portions of the quasi-ring and at the magnetic poles. It should be noted that rings do not necessarily exist deeper than log τ_5000 = -3 for all elements. We have found for instance that this is the case for Zn (not discussed in this paper). Most metals can develop such quasi-rings if one goes up high enough in the atmosphere.
However, an overabundant ring, if formed above, say, log τ_5000 ≈ -5, would hardly affect line profiles (except for elements with very small solar abundance like rare earths, or Hg) and would remain undetectable with existing instruments.For the moment, the existence of rings or quasi-rings as discussed above has only been established for equilibrium solutions to the diffusion problem – but see the discussion in <cit.> for the limits of this hypothesis. We do not yet know if such structures also appear so clearly in time-dependent diffusion calculations <cit.>. It cannot be excluded that various parts of a ring or a quasi-ring appear on different timescales, making it potentially difficult to find a complete ring at a given age of the star.Concerning our results for Cr and Fe, it is difficult to estimate the effect of the 3D abundance structure on the emergent line profile. The profiles will certainly be different from those expected for a globally constant stratification or for vertically constant but horizontally variable abundances; the effect will depend on atomic properties, wavelength, Zeeman pattern, depth of line formation ... Without extensive simulations it is impossible to predict how a technique like Zeeman Doppler mapping (ZDM) with its assumption of vertically constant abundances deals with this physical reality. In order to clarify the issue, we plan to extend the capabilities of Cossam <cit.>, making it possible to approximate an ApBp atmosphere consisting of 7158 local atmospheres with individual elemental stratifications self-consistent with the local magnetic field, and to obtain full IQUV Stokes spectra.On the road towards improved modelling of element distributions in the atmospheres of main-sequence chemically peculiar stars, we plan to have a look again at time-dependent diffusion <cit.>, but now including the effect of mass-loss. Indeed, it is known that high mass-loss rates for T_eff ≳ 16 000 K do prevent atomic diffusion from stratifying abundances in atmospheres; they thus determine the upper limit in effective temperature of the ApBp star phenomenon (including non-magnetic atmospheres). On the other hand, diffusion models for cooler stars suggest that a mass loss of about 10^-14–10^-13 solar masses per year acts in conjunction with atomic diffusion to yield the abundance anomalies observed in Am stars <cit.>. Therefore mass-loss is certainly an important process to be included in numerical models. In magnetic atmospheres it is not unreasonable to assume anisotropic winds <cit.>, a scenario which can be expected to lead to some additional differences in the build-up of abundance stratifications in magnetic polar regions as compared to magnetic equatorial regions.§ ACKNOWLEDGEMENTS All codes that have been used to compute the grid of models have been compiled with the GNAT GPL Edition of the Ada compiler provided by AdaCore; this valuable contribution to scientific computing is greatly appreciated. This work has been supported by the Observatoire de Paris-Meudon in the framework of Actions Fédératrices Etoiles. This work was partly performed using HPC resources from GENCI-CINES (grants c2015045021, c2016045021). The authors want to thank Dr. Günther Wuchterl, head of the “Verein Kuffner-Sternwarte”, for the hospitality offered.
http://arxiv.org/abs/1702.08322v1
{ "authors": [ "G. Alecian", "M. J. Stift" ], "categories": [ "astro-ph.SR" ], "primary_category": "astro-ph.SR", "published": "20170227151754", "title": "Three-dimensional abundance distributions in ApBp star atmospheres: non-axisymmetric magnetic geometry" }
Ethan Dederick, New Mexico State University, dederiej@nmsu.edu Jason Jackiewicz, New Mexico State University, jasonj@nmsu.edu A possible mechanism for driving oscillations in hot giant planets Ethan Dederick and Jason Jackiewicz December 30, 2023 ==================================================================§ ABSTRACT The κ-mechanism has been successful in explaining the origin of observed oscillations of many types of “classical” pulsating variable stars. Here we examine quantitatively if that same process is prominent enough to excite the potential global oscillations within Jupiter, whose energy flux is powered by gravitational collapse rather than nuclear fusion. Additionally, we examine whether external radiative forcing, i.e. starlight, could be a driver for global oscillations in hot Jupiters orbiting various main-sequence stars at defined orbital semimajor axes. Using planetary models generated by the Modules for Experiments in Stellar Astrophysics (MESA) and nonadiabatic oscillation calculations, we confirm that Jovian oscillations cannot be driven via the κ-mechanism. However, we do show that in hot Jupiters oscillations can likely be excited via the suppression of radiative cooling due to external radiation, given a large enough stellar flux and the absence of a significant oscillatory damping zone within the planet. This trend does not seem to depend on the planetary mass. In future observations we can thus expect that such planets may be pulsating, thereby giving greater insight into the internal structure of these bodies. § INTRODUCTION Since the advent of helioseismology and the detection of global solar modes, it has been postulated that gas-giant planets could also exhibit similar oscillations. Theoretical work has been done investigating the inherent nature of these oscillatory modes in the giant planets <cit.>, and indeed, they may even have been recently observed. <cit.> demonstrated a preliminary detection of Jupiter's global acoustic oscillations, and <cit.> have measured spiral density structures in Saturn's rings caused by Saturnian surface-gravity waves. However, the driving mechanism of these oscillations remains a challenge to understand. <cit.> demonstrated that the energy from turbulent convection could drive the observed solar oscillations, yet a similar approach for Jupiter and Saturn reveals that too little energy is available from turbulent convection to be responsible for driving their global oscillations <cit.>. The ratio of the convective flow velocity to the sound speed (Mach number) gives an indication as to the energy in a driving mechanism. In Jupiter, the Mach number is much lower than in the Sun, resulting in oscillatory surface amplitudes at least three orders of magnitude smaller than the Sun's <cit.>. Therefore, a different source mechanism must be at least partially responsible for exciting oscillations to detectable amplitudes. Several other possible sources can be considered that could potentially drive Jovian global oscillations. These include moist convection in the upper atmosphere, ortho- to para-hydrogen conversion, and helium rain <cit.>. Yet another source could be the well-understood κ-mechanism, at work in certain classes of pulsating variable stars <cit.>.
Some of the pioneering work done to develop this theory was conducted by <cit.> and <cit.>, the latter resolving more nuanced questions such as hydrogen ionization being responsible for the phase retardation in the pulsations. While the effectiveness of the κ-mechanism typically requires (partial) ionization regions (e.g., hydrogen and helium) within a star to successfully drive oscillations, we explore if, in the Jovian case, this mechanism could operate at some level. The ongoing contraction of Jupiter releases non-negligible internal radiation (radiating towards the surface) which could be absorbed by any opacity features in Jupiter's atmosphere, thereby having a similar effect to hydrogen or helium ionization zones in pulsating variable stars. We study this possibility using interior models of Jupiter and nonadiabatic pulsation analysis. We then apply the same strategy to look at gas-giant planetary models that are orbiting close to a host star, so-called hot Jupiters. The goal is to understand if the high levels of irradiation impact any potential pulsation driving. In Section <ref> we review the physical description of the excitation of nonadiabatic pulsations in the context of the κ-mechanism. Section <ref> explains the process by which our planetary models are created and calibrated. Section <ref> details our analysis of Jovian and hot Jupiter oscillations, and we end with concluding remarks in Section <ref>. § THE KAPPA MECHANISM Within a star (or planet), hydrostatic disequilibrium is an imbalance between the gravitational force radially inward and the hydrostatic pressure radially outward. This is an unstable state, causing the star or planet to attempt to re-establish hydrostatic equilibrium. Consider a small perturbation in the object in some mass shell. Assume that an imbalance between pressure and gravity causes the star to collapse a small amount. If this collapse occurs in one or more ionization regions of the star, the rise in density and temperature results in an increase in the radiative opacity. Once maximum compression has been reached, the temperature of the compressed region remains constant, yet the increased opacity causes increased photonic heating. The extra heat causes an increase in entropy which forces the gas to now expand isothermally. It is this heating by photons that continues to increase the pressure of the gas while the density now begins to decrease. This results in a phase retardation between the pressure and density. As the expansion proceeds, the effects of photonic heating become negligible and the gas resumes expanding adiabatically. Due to the excess entropy, however, once expansion has finished, the formerly compressed gas now has a larger volume than when it originally began compressing. At maximum expansion, the gas cools to the surrounding temperature and recompresses isothermally to its original volume. Compressive overshoot occurs and the process likely repeats. For a closed cycle, the entropy must return to its original value. In order for this to be possible, either the gas must spontaneously heat up (which does not occur) or the excess heat must be converted to work. It is this work that is then available to drive oscillations in a star.
Thus, the emergent luminosity that would normally escape the layer easily (since a temperature and density increase upon compression would usually cause a reduction in the opacity) instead partially ionizes the gas, increasing the opacity and contributing a net positive energy into the cycle at maximum compression. This is the driving principle behind the κ-mechanism. To quantify the amount of work available to drive oscillations, the work function is the mathematical expression that describes the energy transfer between oscillations and the medium. There are many excellent derivations of the work integral approach <cit.>, to which we refer the reader for details. For our purposes, we consider the expression <cit.> ⟨dW/dt⟩ = ℜ[ ∫_0^M (Γ_3 - 1) (δρ/ρ)^* (δϵ - d(δL)/dm) dm ]. Here ℜ indicates the real part of a complex number and * represents its complex conjugate. Γ_3 is one of the adiabatic exponents relating temperature and density, ρ is density, ϵ is the rate of energy generation (negligible for planets), and L is luminosity. Lagrangian perturbations to these quantities due to oscillations are denoted with a δ. Equation (<ref>) describes the power supplied to or subtracted from an oscillation averaged over one pulsation cycle by the surrounding gas. If the total work available is positive, this energy can be transferred non-adiabatically to driving (unstable) oscillations. Otherwise, the oscillations are stable and damped. For our purposes, we will be exploring the conditions necessary for a positive work integral, ignoring perturbations to the energy generation rate, which are negligible. In particular, for the modes of interest, we will find that Γ_3 - 1 is always greater than zero and that density perturbations are negative. Therefore, positive luminosity perturbations induced by a disturbance can provide an overall positive work function. If an opacity effect is responsible for the positive perturbation, it is the κ-mechanism. § MODELS §.§ Calibration To study the stability of pulsations, we developed a code that computes the work function in Eq. (<ref>) from the profiles of 1D stellar and planetary models and their associated perturbations. For the 1D models, we employ the MESA (Modules for Experiments in Stellar Astrophysics) stellar evolution code <cit.>. For our purposes, MESA uses the calculations of <cit.> to implement opacities in the low-temperature regimes necessary for gas giants. These models are then used as input for the GYRE suite of stellar oscillation codes <cit.>, which utilizes a multiple shooting scheme to solve the non-adiabatic oscillation differential equations to produce a spectrum of eigenfrequencies, eigenfunctions, and perturbed quantities. These tools were first tested by replicating the differential work functions of models for two classes of well-studied stars that exhibit pulsations via the κ-mechanism: β Cephei stars and δ Scuti stars. More specifically, we calibrate by trying to match the work function in Figure 2 of <cit.>, particularly the l=1, p1 (n=1) and p4 (n=4) modes for the β Cep star and the l=1, p1 and p7 (n=7) modes for the δ Sct star. l is the angular degree and n is the radial order of the acoustic oscillations.
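As a minimal sketch of such a work-function evaluation (our own illustration; variable names and the simple finite-difference scheme are assumptions, not the authors' actual code), Eq. (<ref>) can be evaluated on a discrete model grid as follows:

import numpy as np

def work_function(m, gamma3, drho_rho, dL, deps=None):
    """Cycle-averaged work of Eq. (<ref>) from tabulated profiles.

    m        : enclosed mass coordinate [g], increasing outward
    gamma3   : Gamma_3 adiabatic exponent profile
    drho_rho : complex Lagrangian density perturbation, delta(rho)/rho
    dL       : complex Lagrangian luminosity perturbation, delta(L) [erg/s]
    deps     : complex perturbation to the energy generation rate
               (negligible for planets; defaults to zero)
    """
    if deps is None:
        deps = np.zeros_like(dL)
    ddL_dm = np.gradient(dL, m)          # d(delta L)/dm on the model grid
    dW = np.real((gamma3 - 1.0) * np.conj(drho_rho) * (deps - ddL_dm))
    return dW, np.trapz(dW, m)           # differential and integrated work

A positive integrated work alone does not guarantee driving; the growth rate of the mode must also be positive, as discussed below.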
Each stellar model is created using the parameters listed in <cit.>, and the frequencies of the modes calculated with MESA and GYRE are within a few percent of those published values (β Cep: 61.55 μHz & 106.18 μHz for p1 and p4, respectively; δ Sct: 170.66 μHz & 384.28 μHz for p1 and p7, respectively). The plots of the differential work function given by Eq. (<ref>) are shown in Fig. <ref> for each star and are nearly identical to those found in <cit.>. Oscillations are excited (instability) when the work function is greater than zero. Thus, in Fig. <ref>, if the area under the curve of the differential work is positive, that particular mode is excited, otherwise it is damped. Positive peaks correspond to driving regions, while negative peaks are damping regions. Therefore, in the β Cep star, the p1 (p4) mode is unstable (stable), and likewise, in the δ Sct star, the p1 (p7) mode is unstable (stable), which matches the results in <cit.>. It must be noted that a positive work function does not necessarily mean that a mode is driven. It may be excited, but it must also have a positive growth rate or else it will be quickly damped out. A more detailed discussion of mode growth rates is given in Section <ref>. Finally, the driving regions must also be near the surface, as perturbations deep in the stellar interior must be exceedingly large to overcome the thermal inertia of the surrounding medium. §.§ Jupiter Models To study the behavior of any non-adiabatic oscillations excited by the κ-mechanism within Jupiter, we use MESA following this prescription: * A gas giant planet is created with a radius of 1.2–2 Jupiter radii with mass fractions X=0.74, Y=0.24, Z=0.02. It has a mass of 1 Jupiter mass minus the mass of a currently non-existent core. The model is then evolved for 1000 years to allow for relaxation of the internal equation of state.* A core with an average density of 10 g cm^-3 and the specified mass needed to bring the planet total to one Jupiter mass is slowly added over 2000 years to allow the equation of state to relax once more.* The model is evolved over 4.5 Gyr, allowing the interplay between gravity and internal pressure to evolve the equation of state. During this time, the surface of the planet may be irradiated with some specified bolometric solar flux, penetrating to a specified maximum column depth. At the end of the evolution, the radius of the planet has slightly decreased due to gravitational collapse.* Each model is then fed into GYRE where it is linearly scanned for oscillations between 0.1 and 5 mHz for the l=0,1,2 angular degrees and radial orders n=1-10. The output from both GYRE and MESA includes all the necessary components of Eq. (<ref>). It can then be determined which modes, if any, are potentially excited within Jupiter. Table <ref> lists six particular Jupiter models that are explored. It lists whether the planet is irradiated with ∼ 10^9 erg cm^-2 s^-1 (High) or 5.03 × 10^4 erg cm^-2 s^-1 (Solar) of irradiation, its core mass, and its final radius after 4.5 Gyr. Each model is irradiated to a column depth of 300 cm^2 g^-1 (≈ 0.9967 R_J). How far the flux penetrates into the Jovian atmosphere is unknown, hence the column depth of irradiation is somewhat arbitrary. We ran an additional 810 models to investigate the dependence of mode excitation on the column depth of irradiation.
By varying the column depth throughout these models, we found that even for various flux values (greater than 10^9 erg cm^-2 s^-1), oscillations can be excited as long as the column depth exceeds 50 cm^2 g^-1. For the set of models in Table <ref>, 300 cm^2 g^-1 corresponds to a pressure depth of ∼ 1.5 bars (the surface is defined at ∼ 0.3 bars). <cit.> cite observations to a depth of ∼ 3 bars of pressure for infrared observations. If IR light can be observed radiating from the 3 bar level, we assume it can penetrate to the 3 bar level (or a column depth greater than 600 cm^2 g^-1). Since shorter wavelength light will be more readily absorbed, we split the difference between our lower and upper limits and select 300 cm^2 g^-1. Therefore, our estimate of ∼ 1.5 bars may be somewhat conservative. Finally, it is important to note that these are one-dimensional models and thus assume spherical symmetry. Therefore, we must also assume these are rapidly rotating (i.e. on the order of the rotation periods of Jupiter and Saturn) planets such that any consequences of stellar radiation only penetrating half of the planet's surface at a given time can be ignored. However, this may not be a large concern, as a tidally locked planet with the driving region always facing the host star could still experience global oscillations, just as Jupiter oscillates due to (probable) excitation by localized moist convective storms. The high amount of irradiation applied to models A1, B1, and C1 is about five orders of magnitude larger than the solar flux at Jupiter. We had initially made a rather trivial mistake implementing an incorrect amount of flux in the models and subsequently found surprising results (explained in Section <ref>). We thus decided to keep these 3 models for the analysis (even though they don't represent Jupiter proper), and furthermore, were motivated to consider the more realistic cases of hot Jupiters orbiting very close to their host star. §.§ Exoplanetary Hot Jupiter Models We generated a set of exoplanet hot Jupiter models that are created using the same method as described above, with a few differences. Table <ref> lists the subset of main-sequence stellar models selected as host stars for these planets, their masses, their effective temperatures, their bolometric luminosities and their ages. The stellar ages are determined by running MESA main-sequence models until 10% of their core hydrogen abundance is depleted, or 3 Gyr, whichever comes first. The planetary models are then evolved to the same age as the corresponding star. Representative planetary mass and orbital semimajor axis ranges are taken from NASA's exoplanet archive.[<http://exoplanetarchive.ipac.caltech.edu/>] We created a parameter space of masses ranging from 1 to 30 Jupiter masses and orbital semimajor axes of 0.01 to 2 AU. The irradiation flux received by each planet is then the bolometric flux at the particular semimajor axis calculated from a stellar blackbody given by the host star's effective temperature. Each planet model is irradiated to a column depth of 300 cm^2 g^-1 over its entire lifetime. The models have a core mass of 10 Earth masses with average core density of 10 g cm^-3. We compute planetary models for 9 different host star spectral types with 11 different planet masses at 12 different semimajor axes for a total of 1,188 models. We find that varying the column depth penetration of radiation has negligible effect on the results presented in subsequent sections, as long as it is greater than 100 cm^2 g^-1.
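To make the irradiation prescription concrete, the following minimal Python sketch (our own illustration; the constants and the Sun-like example values are assumptions, not entries from the paper's tables) evaluates the bolometric blackbody flux received at a given semimajor axis:

import numpy as np

SIGMA_SB = 5.6704e-5   # erg cm^-2 s^-1 K^-4
RSUN_CM  = 6.957e10    # cm
AU_CM    = 1.496e13    # cm

def irradiation_flux(teff_k, r_star_rsun, a_au):
    """Bolometric blackbody flux received at orbital distance a."""
    return SIGMA_SB * teff_k**4 * (r_star_rsun * RSUN_CM / (a_au * AU_CM))**2

# Example: a Sun-like host (T_eff = 5780 K, R = 1 R_sun) at 0.05 AU
print(irradiation_flux(5780.0, 1.0, 0.05))   # ~5.5e8 erg cm^-2 s^-1

The same relation, inverted for distance at a fixed flux, gives the separation line discussed in the next section.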
§ ANALYSIS §.§ Jupiter oscillations We carry out a nonadiabatic oscillation analysis for the Jupiter models. As one might expect <cit.>, there is not as large a source of internal energy in Jovian planets as there is in stars, and thus we do not find that the κ-mechanism (or any other mechanism considered here) excites modes within the nominal Jupiter models. All calculated modes in models A2, B2, and C2 are stable and are thus not excited. Frequencies range between 0.129 mHz and 1.74 mHz depending on the l and n values. The differential work function of the models is virtually zero throughout Jupiter except for the outermost ∼ 1% of its radius. In the outermost layers, there is a strong negative feature, that is, a strong damping region, that contributes to energy lost over an oscillatory cycle. Therefore, these modes are not excited. The introduction of the significant (∼ 10^9 erg cm^-2 s^-1) solar irradiation throughout Jupiter's evolution, however, results in excited oscillations. In models A1, B1 & C1, several of the low radial order modes are found to be unstable. For example, model A1 (no core) revealed that modes l=0, n=1, & l=2, n=1 are excited (recall that angular degrees greater than 2 and radial orders greater than 10 are untested). The reason the oscillations are excited in the presence of extreme stellar radiation is a large luminosity perturbation that occurs at about R = 0.993 R_J, a direct consequence of the strong irradiation. A more detailed discussion as to what is happening is given in the following sections for the case of the hot Jupiter models. §.§ Hot Jupiter oscillations The highly-irradiated Jupiter models that demonstrate excited modes motivated us to explore gas giant planets orbiting very close to their host stars. The parameter space described in Section <ref> reveals some interesting trends regarding which planets exhibit mode excitation. Figure <ref> shows the models for which unstable modes are found in terms of orbital distance and host-star effective temperature. We observe a very well-defined region of excitation favoring short-period planets or hot host stars. Regardless of planetary mass, we find that approximately 10^9 erg cm^-2 s^-1 of flux is required for oscillations to occur. That flux or higher does not guarantee the existence of oscillations, but it is a requirement for them to be present. Thus, the separation line in Fig. <ref> is given by d = √(σ R_*^2 T_eff^4 / F), where d is the orbital distance, σ is the Stefan-Boltzmann constant, R_* is the stellar radius, and the flux F = 10^9 erg cm^-2 s^-1. We see that this relation does a good job at discriminating between the planets that have excited modes and those that do not. According to Eq. (<ref>), there are three factors that influence the sign of the work function (ignoring perturbations to the energy generation rate). Figure <ref> shows the comparison of these three quantities as well as the differential work function between the A5, 30 M_J planets at orbits of 0.01 AU (blue) and 0.2 AU (red), for the l=0, n=2 mode. This particular mode is excited in the 0.01 AU model, while in the 0.2 AU planet it is not. We choose to display these models in particular as they have the most exaggerated profiles so the effects are easiest to see. Throughout the interior Γ_3 - 1 is positive and similar in magnitude between the models, and it is not a critical parameter.
We see that for this particular solution the density perturbation for this mode is negative in both models (recall that we are solving the time-independent equations). So we are at maximum expansion due to the oscillations. To obtain a positive work function, therefore, a large (positive) luminosity perturbation is needed, such that -d(δL)/dm is negative; multiplied by the negative density perturbation, this yields a positive integrand. Indeed, this is what is found for the close-in planetary model; the light penetrates much further in radius into the outer layers of the planet model at 0.01 AU. An oscillation induces a positive luminosity perturbation over a larger fraction of the planet's radius. To explore this situation in more detail, Fig. <ref> shows the interior density profiles of the 30 M_J planets around the A5 star at increasing orbital distance. As the planet evolves at a further distance from the star, it does not experience significant heating, and therefore becomes a denser planet with a smaller radius. However, the closer the planet is to the star, the more puffed up and diffuse the outermost layers become. This allows light to penetrate deeper into the planet, thereby allowing for a larger region over which significant luminosity perturbations due to modes can occur. Some hot Jupiters have indeed been observed to have larger radii than initially predicted <cit.>. The positive luminosity perturbation due to the oscillation heats the gas, causing a negative density perturbation by rarefying it. This newly expanded gas acts like a restoring force, but it is not large enough to win out over the rarefaction induced by the external heat source. This is a general trend seen over all stars and all mass ranges. In addition, for the model with the excited n=2 radial mode, its frequency is about 0.3 mHz, or a period of an hour. The thermal timescale in the driving region is similar to this, and thus the thermal conditions for the mode are favorable. The driving takes place in a “transition region” <cit.>. This is also the case for all of the modes we consider that are unstable. Finally, we must also consider the convective timescale in relation to these oscillations. Typically, convection is expected to stabilize the planet against such pulsations. However, the timescale of convection, estimated in a similar fashion as in the case of the Sun (i.e. the dynamical timescale divided by a measure of superadiabaticity), is quite large. In the driving region for the 0.01 AU, 30 Jupiter mass model around the A5 star, for example, the convective timescale is approximately a couple hundred hours, or 2 orders of magnitude larger than the thermal timescale and the pulsation period. Therefore, we expect convection to be a passive process regarding these pulsations. §.§ Mode Growth Rate In addition to the work function needing to be positive in order to excite modes, as well as a thermal timescale similar to the mode period, we also examined the η growth rate parameter as discussed in <cit.>. This parameter is given by η = -σ_i/σ_r, where σ_i is the imaginary component of the (non-adiabatic) mode eigenfrequency and σ_r is the real component. If η > 0 the mode is overstable and will grow, and if η < 0, the mode is understable and will be damped out. Throughout this analysis, if the work function of a particular mode is positive but η is negative, then the mode is considered to be excited but quickly damped. In Fig.
<ref>, the modes that are excited and quickly damped are listed as unexcited data points. The values for η, both for the Jupiter and the hot Jupiter modes, are mostly within the range calculated by <cit.>. Their values range from 10^-10 to 10^-4 whereas our values range from 10^-14 to 10^-3. Yet the best indication from η as to whether these modes are overstable is actually to compute ω/η, where ω is the frequency of the mode (in mHz). The mode is almost certainly overstable for large values of ω/η. We calculate a range of 98.8 mHz to 2 × 10^13 mHz, with the vast majority of the modes at the upper end of that range. §.§ Radiative Suppression The process by which modes are excited via radiative suppression can now be explicitly explained. The combination of the positive (negative) luminosity perturbation and negative (positive) density perturbation is what leads to unstable modes. The radiative luminosity perturbation can be shown to be <cit.>: δL_r/L_r = (dr/d ln T) d/dr(δT/T) - δκ/κ + 4 δT/T + 4 δr/r. In our case, the first term on the RHS of Eq. (<ref>) contributes to oscillatory damping and the latter two terms are negligible. The dominant term on the RHS is the opacity perturbation, δκ/κ, that drives the oscillations. For example, when we do detect an unstable mode, we indeed find at maximum expansion that the opacity perturbation is rather large and negative, such that Eq. (<ref>) is positive. Figure <ref> shows an example from the same extreme model used earlier. The mechanism at work is as follows. Some weak perturbation in the gas causes it to collapse at a given location. Upon maximum compression, the density, opacity, and temperature all increase. In normal conditions, the temperature increase allows the gas to cool via radiative losses more readily. However, this cooling is suppressed by the high amounts of external radiation heating the gas due to the increased opacity. Fundamentally, it is this absorption of energy that is converted to work to drive the mode. With the increase in energy from the photonic heating, the gas then begins to expand, pushing past the point of equilibrium and eventually to a state of maximum expansion. Here, the opacity perturbation is negative and the gas cools because the luminosity perturbation is positive, increasing outwards (Figs. <ref> and <ref>). The recompression after maximum expansion overshoots the point of equilibrium and the process repeats. This process is in contrast to the “classical” kappa mechanism in two fundamental ways. First, the kappa mechanism is intrinsically an internally-driven phenomenon, with the radiation originating from nuclear fusion, whereas radiative suppression is an externally driven phenomenon. Second, in normal stellar conditions, a compression results in a decrease in opacity rather than an increase. The reason the kappa mechanism works is that in unique regions within the stellar interior, compression raises the temperature of the gas to a level at which hydrogen or helium can be ionized, which thereby increases the opacity. The radiative suppression mechanism does not rely on such ionization as a prerequisite. § CONCLUSIONS In this work we developed MESA models of Jupiter and hot Jupiters to ascertain if, and how, nonadiabatic oscillations can be excited. For Jupiter, as expected, oscillations are not excited via the κ-mechanism, as the intrinsic luminosity and received solar flux are too small.
However, for giant planets orbiting sufficiently close to hot stars, a “radiative suppression” mechanism can drive global oscillations due to the external stellar irradiation. We find that approximately 10^9 erg cm^-2 s^-1 of flux is required, which sets the spectral type of host stars and the orbital distance of the planet. Modes are excited in planets closer to the host star because the outer atmosphere becomes heated and distended, allowing light to penetrate deeper into the planet, which in turn allows perturbations to occur over significantly larger regions of the outer layers. This effect does not seem to depend on the total mass of the planet. Sufficiently large stellar irradiation suppresses a mode's ability to radiatively lose energy, and simultaneously supplies energy to the mode by heating the compressed medium due to an increase in the opacity. This excitation process occurs very near to the surface of the planet: within the outermost 1% of the radius for the models considered here. Increasing the maximum column depth of irradiation does not significantly alter the results. Modes are still excited in the same location. Further study is needed with more detailed models to understand what makes 10^9 erg cm^-2 s^-1 the approximate cutoff stellar flux. Also, a more expansive parameter space is needed, with the inclusion of higher-order modes, to understand the conditions for when lower-order vs. higher-order modes are excited. This study leads us to believe that some hot Jupiters will be pulsating. However, as the mode amplitudes are difficult to estimate, it is unclear what implications this has for the upcoming observational capabilities and detection. The authors would like to acknowledge funding from NASA EPSCoR award #NNX14AN67A to NMSU, as well as the New Mexico Space Grant Consortium.
http://arxiv.org/abs/1702.07988v1
{ "authors": [ "Ethan Dederick", "Jason Jackiewicz" ], "categories": [ "astro-ph.EP" ], "primary_category": "astro-ph.EP", "published": "20170226045735", "title": "A Possible Mechanism for Driving Oscillations in Hot Giant Planets" }
http://arxiv.org/abs/1702.08452v2
{ "authors": [ "Sophia Han", "Andrew W. Steiner" ], "categories": [ "astro-ph.HE", "astro-ph.SR", "nucl-th" ], "primary_category": "astro-ph.HE", "published": "20170227145822", "title": "Cooling of neutron stars in soft X-ray transients" }
Institut für Kernphysik der Universität Mainz, Johann-Joachim-Becher-Weg 45, 55099 Mainz, Germany Institute for Nuclear Studies and Department of Physics, The George Washington University, Washington, DC 20052, USA Helmholtz-Institut für Strahlen- und Kernphysik der Universität Bonn, Nußallee 14-16, 53115 Bonn, GermanyInstitute for Nuclear Studies and Department of Physics, The George Washington University, Washington, DC 20052, USA We compare the methods of amplitude reconstruction, for a complete experiment and a truncated partial-wave analysis, applied to the electroproduction of pseudoscalar mesons. We give examples which show, in detail, how the amplitude reconstruction (observables measured at a single energy and angle) is related to a truncated partial-wave analysis (observables measured at a single energy and a number of angles). A connection is made to existing data. 25.20.Lj, 25.30.Rw, 11.80.Et, 11.55.Bq Amplitude reconstruction from complete electroproduction experiments and truncated partial-wave expansions H. Haberzettl December 30, 2023 ==========================================================================================================§ INTRODUCTION AND MOTIVATION There have been numerous recent efforts to extract maximal information, unbiased by any particular model, from experimental pseudoscalar photoproduction data. These have included the study of complete experiment analyses <cit.> (CEA) and truncated partial-wave analyses <cit.> (TPWA). Legendre analyses directly applied to data <cit.> have the same motivation. The CEA determines helicity or transversity amplitudes at a single energy and angle, up to an overall (energy and angle dependent) phase. The TPWA introduces a cutoff to the partial-wave series, obtaining multipoles for a fixed energy, with an overall unknown phase dependent only on energy. The methods used to study the photoproduction of pseudoscalar mesons can be extended to the case of electroproduction, with the introduction of longitudinal amplitudes associated with the incoming virtual photon. An examination of the CEA was performed by Dmitrasinovic, Donnelly and Gross <cit.>, who considered the required polarization measurements. They concluded that a CEA, determining the electroproduction transversity amplitudes up to an overall phase, was not possible with either recoil or target polarization measurements alone, but required at least one measurement from the other polarization set. They further concluded that a CEA could be constructed without the need for more complicated measurements involving both a polarized target and recoil polarization detection. These conclusions assumed that all structure functions could be separated in a set of measurements. As in all such studies, it was also implicitly assumed that measurements could be made arbitrarily precise. Here we generalize our recent study <cit.> of the CEA and TPWA in photoproduction to electroproduction. While the study in Ref. <cit.> focused on the CEA, in practice, one desires multipole amplitudes that can be associated with resonance contributions. These cannot be directly obtained from a complete set of transversity amplitudes, and the methods used in solving the CEA and TPWA problems are quite different, as was discussed in detail in Ref. <cit.>. The electroproduction reaction, unlike photoproduction, requires detailed knowledge of the electron scattering process producing the interacting virtual photon.
As the electron scattering and outgoing hadronic particles define two different planes, a second angle defining their relative orientation is required, as shown in Fig. <ref>. The virtual photon can have a non-zero value for its 4-momentum squared, which allows for the independent variation of photon energy and momentum. This non-zero value also complicates the spin structure, requiring the introduction of both longitudinal and transverse components, as described in Refs. <cit.>. Below, we first review the electroproduction formalism. We then consider both simple and more realistic examples of the CEA and TPWA process, showing how the experimental requirements change. § CROSS SECTION AND POLARIZATION DEGREES OF FREEDOM Here we follow the notation of Ref. <cit.> to describe the pseudoscalar meson electroproduction process. As denoted in Fig. <ref>, Θ_e is the electron scattering angle while q and k are the respective 4-vectors for the virtual photon and outgoing meson, with q^2 = ω^2 - q⃗^2, where ω and q⃗ are the photon energy and 3-momentum. The momentum transfer is denoted by Q^2 = -q^2 and the “photon equivalent energy” is given by k_γ^lab = (W^2 - m_i^2)/2m_i, where W is the center-of-mass energy of the hadronic system and m_i is the mass of the initial nucleon. The degree of transverse polarization of the virtual photon is ε = [ 1 + (2q⃗^2/Q^2) tan^2(Θ_e/2) ]^-1, with q⃗ and Θ_e expressible in either the lab or c.m. frame. The longitudinal polarization, ε_L = (Q^2/ω^2) ε, is frame dependent. Experiments with three types of polarization can be performed in meson electroproduction: electron beam polarization, polarization of the target nucleon and polarization of the recoil nucleon. Target polarization will be described in the frame { x, y, z }, with the z-axis pointing in the direction of the photon momentum q̂, the y-axis perpendicular to the reaction plane, ŷ = q̂×k̂ / sinθ, where k̂ is the direction of the outgoing meson, and the x-axis given by x̂ = ŷ×ẑ. For recoil polarization we will use the frame { x', y', z' }, with the z'-axis defined by the momentum vector of the outgoing meson, the y'-axis parallel to ŷ, and the x'-axis given by x̂' = ŷ' ×ẑ'. These frames are displayed in Fig. <ref>. The most general expression for a coincidence experiment considering all three types of polarization is dσ_v/dΩ = (| k⃗ |/k_γ^cm) P_α P_β { R_T^βα + ε_L R_L^βα + [ 2 ε_L ( 1 + ε ) ]^1/2 ( ^cR_LT^βα cosϕ + ^sR_LT^βα sinϕ ) + ε ( ^cR_TT^βα cos 2ϕ + ^sR_TT^βα sin 2ϕ ) + h [ 2 ε_L ( 1 - ε ) ]^1/2 ( ^cR_LT'^βα cosϕ + ^sR_LT'^βα sinϕ ) + h ( 1 - ε^2 )^1/2 R_TT'^βα }, where h is the helicity of the incoming electron, P_α = (1, P⃗)_α and P_β = (1, P⃗')_β. Here P⃗ = (P_x, P_y, P_z) denotes the target and P⃗' = (P_x', P_y', P_z') the recoil polarization vector. The zero components, P_0 = 1, lead to contributions in the cross section which are present in the polarized as well as the unpolarized case. In an experiment without target and recoil polarization, α = β = 0 and the only remaining contributions are R_i^00. The functions R_i^βα describe the response of the hadronic system in the process. Summation over Greek indices (0,1,2,3) is implied. An additional superscript s or c on the left indicates a sine or cosine dependence of the respective contribution on the azimuthal angle. Some response functions vanish identically (see Table <ref> of Ref. <cit.> for a systematic overview).
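As a small numerical aid (our own illustration; the function and variable names are assumptions), the polarization parameters defined above can be evaluated directly:

import numpy as np

def photon_polarization(omega, q3, Q2, theta_e):
    """Transverse (eps) and longitudinal (eps_L) virtual-photon polarization.

    omega   : photon energy in the frame in which eps_L is wanted
    q3      : photon 3-momentum |q| in the same frame as theta_e
    Q2      : momentum transfer, Q^2 = -q^2 = q3**2 - omega**2
    theta_e : electron scattering angle [rad]
    """
    eps = 1.0 / (1.0 + 2.0 * q3**2 / Q2 * np.tan(theta_e / 2.0)**2)
    eps_L = Q2 / omega**2 * eps   # frame dependent through omega
    return eps, eps_L

Note that eps itself is frame independent, while eps_L changes with the frame through ω.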
The number of different response functions is further reduced by equalities, as shown in Table <ref>, and in the most general electroproduction experiment, 36 polarization observables can be determined. The response functions R^βα_i are real or imaginary parts of bilinear forms of the CGLN <cit.> amplitudes depending on the scattering angle θ. § AMPLITUDES USED IN PSEUDOSCALAR MESON ELECTROPRODUCTION Before comparing the CEA and TPWA approaches, we continue with a review of notation used for the underlying amplitudes. The multipoles and CGLN <cit.> F-amplitudes are related by F_1 = ∑_{ℓ≥0} [ ( ℓ M_ℓ+ + E_ℓ+ ) P'_{ℓ+1} + ( (ℓ+1) M_ℓ- + E_ℓ- ) P'_{ℓ-1} ], F_2 = ∑_{ℓ≥1} [ (ℓ+1) M_ℓ+ + ℓ M_ℓ- ] P'_ℓ, F_3 = ∑_{ℓ≥1} [ ( E_ℓ+ - M_ℓ+ ) P''_{ℓ+1} + ( E_ℓ- + M_ℓ- ) P''_{ℓ-1} ], F_4 = ∑_{ℓ≥2} [ M_ℓ+ - E_ℓ+ - M_ℓ- - E_ℓ- ] P''_ℓ, F_5 = ∑_{ℓ≥0} [ (ℓ+1) L_ℓ+ P'_{ℓ+1} - ℓ L_ℓ- P'_{ℓ-1} ], F_6 = ∑_{ℓ≥1} [ ℓ L_ℓ- - (ℓ+1) L_ℓ+ ] P'_ℓ. The definition of helicity amplitudes is subject to phase conventions. Here, we choose the conventions of <cit.>, which were also used by Walker in <cit.> for photoproduction. Without loss of generality, we set ϕ=0, H_1 = -(1/√2) sinθ cos(θ/2) ( F_3 + F_4 ), H_2 = √2 cos(θ/2) ( F_2 - F_1 + ( F_3 - F_4 ) sin^2(θ/2) ), H_3 = (1/√2) sinθ sin(θ/2) ( F_3 - F_4 ), H_4 = √2 sin(θ/2) ( F_1 + F_2 + ( F_3 + F_4 ) cos^2(θ/2) ), H_5 = cos(θ/2) ( F_5 + F_6 ), H_6 = -sin(θ/2) ( F_5 - F_6 ). Finally, transversity amplitudes can be constructed <cit.> from these helicity amplitudes, b_1 = (1/2) [ ( H_1 + H_4 ) + i ( H_2 - H_3 ) ], b_2 = (1/2) [ ( H_1 + H_4 ) - i ( H_2 - H_3 ) ], b_3 = (1/2) [ ( H_1 - H_4 ) - i ( H_2 + H_3 ) ], b_4 = (1/2) [ ( H_1 - H_4 ) + i ( H_2 + H_3 ) ], b_5 = (1/√2) [ H_5 + i H_6 ], b_6 = (1/√2) [ H_5 - i H_6 ]. Here we note that the definitions of both helicity and transversity amplitudes are not unique. Apart from phase conventions, different numbering choices can also be found in the literature. Here we follow the definitions of Barker et al. <cit.>. In Table <ref>, expressions for the response functions, appearing in Eq. (<ref>), are given in terms of both the helicity and transversity amplitudes. In the following, we will suppress the superscripts c and s for interference terms. As can be seen in Table <ref>, for a specific polarization, the assignment of this superscript is always unique. Transversity amplitudes often simplify the discussion of amplitude reconstruction in photoproduction, as the unpolarized and single-polarization observables determine their moduli. Another simplification is the property b_2(θ) = -b_1(-θ), b_4(θ) = -b_3(-θ), and b_6(θ) = b_5(-θ), which allows one to parameterize only three of the six transversity amplitudes. The form introduced by Omelaenko <cit.>, b_1 = c a_2L e^{iθ/2}/(1+x^2)^L ∏_i=1^2L (x-α_i), b_3 = -c a_2L e^{iθ/2}/(1+x^2)^L ∏_i=1^2L (x-β_i), with x = tan(θ/2) and L being the upper limit for ℓ, is convenient for a truncated partial-wave analysis, as the ambiguities can be linked to the conjugation of the complex roots of the above relations, with a constraint ∏_i=1^2L α_i = ∏_i=1^2L β_i. The quantity c is a constant and a_2L is proportional to the backward photoproduction cross section <cit.>. For the amplitudes b_5 and b_6, which are present in electroproduction in addition to the four transverse amplitudes, it is feasible to write a linear-factor decomposition according to Omelaenko, similar to expressions (<ref>) and (<ref>).
As the resulting non-redundant transversity amplitude, we pick here b_6, and the expression is b_6 = c d_2L e^{iθ/2}/(1+x^2)^L ∏_i=1^2L (x-γ_i). The amplitude b_5 is then specified via the constraint given in (<ref>). The 2L complex roots γ_i determine the purely longitudinal amplitudes b_5 and b_6, while the constant c is the same as in (<ref>) and (<ref>). The quantity d_2L is another polynomial normalization coefficient, which may differ from a_2L. However, no constraint among the γ-roots has been found which would be analogous to Omelaenko's relation (<ref>) for the α- and β-roots, and we conjecture that no such additional constraint for the γ_i exists. This may be substantiated by the fact that the numbers of real degrees of freedom for the parameterizations of b_5 and b_6 in terms of multipoles, as well as in terms of roots, exactly match. For every truncation order L, one has 2L+1 complex longitudinal multipoles, i.e. the S-wave L_0+ and two new multipoles L_ℓ± for every new order in ℓ. This corresponds in terms of multipoles to 4L+2 real degrees of freedom. In terms of roots, one has the γ_i, which comprise a set of 2L complex variables or 4L real degrees of freedom. In addition to this, the complex normalization coefficient d_2L also defines b_5 and b_6, which brings the total number of real variables to 4L+2 in this case as well. The only issue not considered until now is the overall phase, either of (for instance) L_0+, in the case of the multipole parametrization, or of d_2L in the case of roots, which remains undetermined if only longitudinal observables are measured. This would reduce the number of real degrees of freedom by one. However, in electroproduction, the mixed observables of type LT can very well fix this overall phase, leaving the unknown phase information in one of the quantities specifying the purely transverse amplitudes, e.g. E_0+. Therefore, the number of 4L+2 real variables for longitudinal multipoles remains true for the most general case in electroproduction. For the transverse multipoles, the situation is the same as in photoproduction with 4L multipoles, i.e. the S-wave E_0+, the P-waves E_1+, M_1+, M_1-, and four new multipoles E_ℓ±, M_ℓ± for every new order in ℓ. If we subtract the overall free phase, which is typically assumed for the E_0+ multipole, we have 8L-1 real values to be determined by the experiment. Altogether, with longitudinal and transverse multipoles, the most general case in electroproduction is described by 6L+1 E, M, L multipoles, and 12L+1 real values have to be determined by the experiment. One of those, e.g. E_0+, can be chosen to be positive. § COMPLETE EXPERIMENT ANALYSIS (CEA) In electroproduction, the CEA needs to determine six complex amplitudes at a given energy and angle, e.g. helicity amplitudes H_1, …, H_6 or transversity amplitudes b_1, …, b_6, up to an overall phase, which is naturally also energy and angle dependent. This requires the determination of 11 real numbers, where one of them can be chosen to be positive. In principle this could work with 11 observables, but due to quadrant ambiguities, a minimum of 12 will generally be required. Choosing 12 observables out of 36 will allow more than a billion different sets. Even restricting to meaningful sets, including transverse, longitudinal and LT interference terms, still gives millions of non-trivial sets that need to be checked for completeness. Two strategies seem to work straightforwardly.
First, one would select the six observables that are defined only by moduli of transversity amplitudes, R_T^00, R_T^0y, R_T^y'0, R_L^00, R_L^0y, R_TT^00. Then five relative angles need to be defined from six out of the remaining 30 interference terms. Even if thousands of such sets lead to complete sets of 12 observables, it is not obvious how these observables should be chosen. As can be seen in Table <ref>, except for b_5 b_6^*, all interference terms appear as linear combinations, e.g. b_1 b_2^* ± b_3 b_4^*, and a direct separation would always require a measurement of both ± combinations. Therefore, a separation of 5 angles as cosine and sine functions would naively require 10 observables, leading altogether to 16, and it is nontrivial to reduce this number by four observables to find the minimum number of 12. A second approach is to start with a complete set of 8 observables for the transverse amplitudes b_1, b_2, b_3, b_4, as in a CEA of photoproduction. Such studies are also nontrivial, but have been studied intensively in the literature, and the most comprehensive study was done by Chiang and Tabakin <cit.>. Having chosen any of the almost 4500 possible complete sets of 8 observables leads to a unique determination of four moduli and 3 relative angles. Then, with four additional LT interference terms, such as Re(b_1 b_5^* ± b_2 b_6^*) and Im(b_1 b_5^* ± b_2 b_6^*), the remaining moduli |b_5|, |b_6| and the relative phases of b_5 and b_6 to the already known transverse amplitudes b_1, b_2 are uniquely determined. This leads to, for example, the complete set of 12 observables R_T^00, R_T^0y, R_T^y'0, R_TT^00, R_TT^0x, R_TT'^0x, R_TT^z'0, R_TT'^z'0, R_LT^x'0, R_LT^z'0, R_LT'^x'0, R_LT'^z'0. In this case four LT interference terms with beam-recoil polarization have been used. Alternatively, another three combinations can be chosen with b_2 b_5^* ± b_1 b_6^*, b_3 b_5^* ± b_4 b_6^* and b_4 b_5^* ± b_3 b_6^*. Looking at Table <ref>, one finds that the first set, b_1 b_5^* ± b_2 b_6^*, requires recoil polarization, the second one, b_2 b_5^* ± b_1 b_6^*, target polarization, and the third one, b_3 b_5^* ± b_4 b_6^*, would even require both target and recoil polarization. The last one, b_4 b_5^* ± b_3 b_6^*, corresponds to the observables R_LT^00, R_LT^0y, R_LT'^00, R_LT'^0y, which are identical to R_LT^00, R_LT^y'0, R_LT'^00, R_LT'^y'0 and can therefore be measured with either target or recoil polarization. By this rather simple strategy, we have already found four times the number of possible complete photoproduction sets, which amounts to almost 18000 complete sets for electroproduction. Using the Mathematica NSolve function and integer algebra for randomly chosen real and imaginary parts of amplitudes, we can test any given set of 12 observables for completeness. Given the enormous number of possibilities, with hundreds of millions of sets of 12 observables (where only R_T^00 is fixed), we have not yet performed a systematic search for all possible complete sets as was done for photoproduction in our previous work <cit.>.§ AMPLITUDE RECONSTRUCTION§.§ Simplest case: L = 0 In photoproduction this case is trivial, involving only a single multipole amplitude. Here, in Set 1 of Table <ref>, there are two multipoles (E_0+ and L_0+), producing two independent helicity or transversity amplitudes and requiring only 3 measurements (e.g. R_T^00, R_LT^0y, R_LT'^0y) at a single energy and angle, which solves both the CEA and TPWA.
This is a special case, where the absolute squares of the two multipoles are not mixed together but already separated in R_T^00 = |E_0+|^2 and R_L^00 = |L_0+|^2. Therefore, R_T^00 directly gives the E_0+ multipole, which can freely be taken with a positive value, and for the absolute value |L_0+| and the relative angle, the two selected LT interference terms are sufficient. It should be noted, however, that in principle, through the Rosenbluth separation of R_T and R_L, the determination of R_T also gives R_L, and therefore the three-observable case is essentially academic; in practice a fourth measurement needs to be done. We will return to this Rosenbluth issue later on. §.§ Case: J = 1/2 Here, in Set 2 of Table <ref>, there are four multipoles involved (E_0+, M_1-, L_0+, L_1-), producing four independent helicity or transversity amplitudes. The separation into longitudinal and transverse pairs suggests two strategies for finding a complete set of eight measurements for a CEA in this case. Sets of four observables would determine either the transverse or the longitudinal pair, up to an overall phase, but would leave the relative phase between the pairs undetermined. First method: Take the set of four measurements determining E_0+ and M_1- up to an overall phase (R_T^00, R_T^y'0, R_T^x'z, R_T^z'z). Add to this a set of four measurements defining the relative phases of L_0+ and L_1- to E_0+ and M_1-, respectively (R_LT^0y, R_LT^x'x, R_LT^z'0, R_LT'^0y). Second method: Take the sets of four measurements defining the longitudinal and transverse pairs up to an overall phase. Remove one measurement from each set and replace them with a pair of interference terms. This leads, for example, to the set (R_T^00, R_T^y'0, R_T^z'z, R_L^00, R_L^0y, R_L^z'x, R_LT^00, R_LT'^00). Furthermore, longitudinal observables R^βα_L can be avoided by getting the same information from LT interference terms, and a solution is found with a minimum number of five observables, with some of these measured at two angles. As a general rule, for n complex multipoles we need 2n independent measurements. Due to the free overall phase (we always assume E_0+ real and positive), there are 2n-1 free parameters. However, in order to resolve the quadrant ambiguity, we generally need one more measurement. In the special case of L=0 (Set 1) this was not needed but, as was mentioned, this case is exceptional. §.§ Comparing CEA and TPWA beyond J = 1/2 In Set 3 of Table <ref>, we study a purely longitudinal model, with two complex helicity (H_5, H_6) or transversity amplitudes (b_5, b_6), four possible polarization observables (see Table <ref>) and 2L+1 complex multipoles L_ℓ±. With all four observables, a CEA is possible and can determine the two complex amplitudes up to a phase. But a TPWA with three multipoles requires six measurements and is therefore not possible at a single angle. However, we find a solution with four observables at maximally two angles; a solution also exists with a minimal number of three observables, measured at maximally three angles. Set 4 is identical to the photoproduction case. Here only electric and magnetic multipoles contribute, and as discussed in our previous paper <cit.>, a TPWA at a single angle is not possible.
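Before resolving Set 4, here is a tiny numerical illustration (with hypothetical amplitude values) of the quadrant ambiguity just mentioned: an observable that fixes only the cosine of a relative phase leaves the phases φ and -φ indistinguishable, and a second, sine-type observable resolves the sign.

import cmath, math

b1 = 1.0                            # reference amplitude, taken real and positive
phi = 0.8                           # hypothetical relative phase of b2
b2 = 0.7 * cmath.exp(1j * phi)

cos_obs = (b1 * b2.conjugate()).real    # cosine-type interference term
sin_obs = (b1 * b2.conjugate()).imag    # sine-type interference term

for candidate in (phi, -phi):
    b2_cand = 0.7 * cmath.exp(1j * candidate)
    fits_cos = math.isclose((b1 * b2_cand.conjugate()).real, cos_obs)
    fits_sin = math.isclose((b1 * b2_cand.conjugate()).imag, sin_obs)
    print(f"phi={candidate:+.2f}: fits cosine term: {fits_cos}, "
          f"fits sine term: {fits_sin}")
# both candidates fit the cosine-type term; only the true phase fits the sine term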
This set can be uniquely resolved with only four observables requiring only beam and target polarization: R_T^00[3], R_TT^00[1], R_TT^0x[2], R_TT'^0x[2], which are identical to the photoproduction observables I[3], Σ̌[1], Ȟ[2], F̌[2]. In Set 5, we discuss a model with six multipoles and six non-vanishing amplitudes. In this case the CEA and TPWA are equivalent, and both can be resolved with the same number of 12 observables measured at a single angle. Again, when information from more than one angle is available, the number of observables can be drastically reduced to only five, which need to be measured at maximally three angles. Finally, in Set 6, we discuss the full set of seven S, P wave multipoles, which requires 14 measurements for a unique solution. In this case we find a minimal number of six observables, where again recoil polarization can be completely avoided. A similar set is also possible that completely avoids target polarization. With a total number of 36 observables, a huge number of possibilities exists that could be used to resolve all ambiguities. The result of Set 6, with 14 measurements of six observables at two angles for L=1, can be generalized theoretically to arbitrary L, as was found in photoproduction <cit.>. For each additional angular momentum ℓ, each observable obtains two more Legendre coefficients and therefore allows for two additional independent angular measurements. The number of multipoles grows as 6L+1 and the number of required measurements as n = 12L+2. With six observables, the number of measurements increases by 12 for each additional angular momentum; therefore there is no limit for L in principle. In practice, however, the situation is very different: our present numerical simulations are approaching a limit at L=3. All examples with L=1 are calculated with the Mathematica NSolve function, giving exact solutions within integer algebra. This approach was no longer successful for L=2; therefore, instead of finding exact solutions, we have performed a minimization of the coupled equations using the Mathematica NMinimize function and random search methods. This worked very well, and for the solutions with L=2 the squared numerical deviation was found to be of the order 10^-20, in agreement with our work on photoproduction. §.§ TPWA without Rosenbluth separation So far, we have always assumed that a complete separation of all observables (response functions) of Eq. (<ref>) has been obtained in a first preparatory step. For most of these, e.g. those with ϕ dependence or beam polarization h, this is straightforward and has been applied very successfully in the past. A problem is the so-called Rosenbluth separation between R_T and R_L, which is experimentally very challenging and has only been done in very few cases <cit.>. However, for a TPWA the combination R^βα_T + ε_L R^βα_L can be used, and a separation is not necessary. In many of the cases discussed in Table <ref>, the observables R^βα_T can be replaced by the Rosenbluth combinations R_RB^βα = R^βα_T + ε_L R^βα_L , and we find a unique solution for all included partial waves. In the special case of Set 1, with only three observables, this is not possible and a fourth observable is needed. In 2005, the Hall A Collaboration at JLab published a measurement on `Recoil Polarization for Δ Excitation in Pion Electroproduction', where 14 separated response functions plus two Rosenbluth combinations had been observed in full angular distributions at W = 1.23 GeV and Q^2 = 1.0 (GeV/c)^2 <cit.>.
In our notation, these are R_RB^00, R_RB^y'0, R_TT^00, R_TT^x'0, R_TT^y'0, R_TT^z'0, R_LT^00, R_LT^x'0, R_LT^y'0, R_LT^z'0, R_LT'^00, R_LT'^x'0, R_LT'^y'0, R_LT'^z'0, R_TT'^x'0, R_TT'^z'0. For a CEA, this set of observables is not complete. A complete experiment analysis for electroproduction needs a minimum of 12 observables including both target and recoil polarization. In fact, with two more observables involving also target polarization, a CEA would be possible. These are e.g. R_LT^0x, R_LT^0z or R_TT^0x, R_TT^0z or R_LT^0x, R_TT^0x or many other combinations. For a TPWA, however, the 16 observables from the Hall A experiment are more than complete. Only a subset of 6 observables, at maximally 3 angles, is needed for a unique solution of all S, P wave multipoles, e.g. R_RB^00[3], R_RB^y'0[2], R_LT^00[2], R_LT^x'0[2], R_LT'^00[2], R_LT'^x'0[3].§ CONCLUSIONS We have explored the CEA and TPWA approaches to pseudoscalar-meson electroproduction, extending our previous study of photoproduction. Simple examples, corresponding to a low angular-momentum cutoff, simplify the discussion and allow one to see how the CEA and TPWA are related. As in photoproduction, the TPWA can be accomplished with fewer observable types, supplemented by additional angular measurements. The resulting TPWA (multipole) amplitudes have an undetermined phase depending on energy, while the CEA (transversity or helicity) amplitudes are found with an unknown overall phase depending on both energy and angle. Comparisons are given for representative cases in Table <ref>. The CEA requires measurements involving both polarized targets and recoil polarization, as was stressed in the study of Ref. <cit.>. This is similar to the finding, for CEA analyses in photoproduction, that measurements are required from two out of the three groups containing beam-target, beam-recoil, and target-recoil observables. Triple-polarization experiments give no further information in photoproduction; this is different in electroproduction. For purely transverse observables the situation is the same, but for purely longitudinal terms L and longitudinal-transverse interference terms LT and LT' it is different. Already the terms without target and recoil polarization, R_L^00, R_LT^00 and R_LT'^00, have to be counted as single beam polarizations with a polarized virtual photon. By this way of counting, there are six triple-polarization observables, see Table <ref>, all of which can be measured in an alternative triple-polarization measurement. In electroproduction, as in photoproduction, all 36 observables can be measured in an alternative way, giving in total 72 possibilities for allowed measurements. However, as was found in Ref. <cit.>, the TPWA can be accomplished without involving observables requiring both polarized targets and recoil polarization. This is not the case for a CEA, where at least 2 observables have to be chosen from another group. This finding from photoproduction carries over to electroproduction without further modification. The present formalism can be immediately applied to data. In fact, there exists a dataset <cit.> in which 16 observables were measured, mostly with recoil polarization but without a polarized target. Even though this set is not complete for a CEA, it is more than enough to fulfill the requirements of a complete TPWA. The work of HH and RW was supported in part by the U.S. Department of Energy Grant DE-SC0016582. The work of LT and YW was supported by the Deutsche Forschungsgemeinschaft (SFB 1044 and SFB/TR16). § REFERENCES
[CEA] W.-T. Chiang and F. Tabakin, Phys. Rev. C 55, 2054 (1997).
[TPWA] R. L. Workman, L. Tiator, Y. Wunderlich, M. Döring, and H. Haberzettl, Phys. Rev. C 95, no. 1, 015206 (2017).
[Leg] Y. Wunderlich, F. Afzal, A. Thiel, and R. Beck, arXiv:1611.01031.
[DDG] V. Dmitrasinovic, T. W. Donnelly, and F. Gross, in Research Program at CEBAF (III), RPAC III, edited by F. Gross (CEBAF, Newport News, 1988), p. 547.
[Drechsel:1992pn] D. Drechsel and L. Tiator, J. Phys. G 18, 449 (1992).
[Knochlein:1995qz] G. Knöchlein, D. Drechsel, and L. Tiator, Z. Phys. A 352, 327 (1995).
[CGLN] G. F. Chew, M. L. Goldberger, F. E. Low, and Y. Nambu, Phys. Rev. 106, 1345 (1957).
[Jacob] M. Jacob and G. C. Wick, Annals Phys. 7, 404 (1959).
[Walker] R. L. Walker, Phys. Rev. 182, 1729 (1969).
[bds] I. S. Barker, A. Donnachie, and J. K. Storrow, Nucl. Phys. B 95, 347 (1975).
[omel] A. S. Omelaenko, Sov. J. Nucl. Phys. 34, 406 (1981).
[wunder] Y. Wunderlich, R. Beck, and L. Tiator, Phys. Rev. C 89, 055203 (2014).
[Blomqvist:1997qv] K. I. Blomqvist et al., Nucl. Phys. A 626, 871 (1997).
[Defurne:2016eiy] M. Defurne et al., Phys. Rev. Lett. 117, no. 26, 262001 (2016).
[Kelly:2005jj] J. J. Kelly et al., Phys. Rev. Lett. 95, 102001 (2005).
We develop a game-theoretic semantics (GTS) for the fragment ATL+ of the alternating-time temporal logic ATL*, thereby extending the recently introduced GTS for ATL. We show that the game-theoretic semantics is equivalent to the standard compositional semantics of ATL+ with perfect-recall strategies. Based on the new semantics, we provide an analysis of the memory and time resources needed for model checking ATL+ and show that strategies of the verifier that use only a very limited amount of memory suffice. Furthermore, using the GTS, we provide a new algorithm for model checking ATL+ and identify a natural hierarchy of tractable fragments of ATL+ that substantially extend ATL.§ INTRODUCTION The full Alternating-time Temporal Logic ATL* <cit.> is one of the main logical systems used for formalising and verifying strategic reasoning about agents in multi-agent systems. It is very expressive, and that expressiveness comes at a high (2-EXPTIME) price of computational complexity of model checking. Its basic fragment ATL—which can be regarded as the multi-agent extension of CTL—has, on the other hand, tractable model checking but its expressiveness is rather limited. In particular, ATL only allows expressing strategic objectives of the type ⟨⟨A⟩⟩Φ where Φ is a simple temporal goal involving a single temporal operator. The intermediate fragment ATL+ naturally emerges as a good alternative, essentially extending ATL to allow expressing strategic objectives which are Boolean combinations of simple temporal goals. The price for this is a reasonably higher computational complexity of model checking ATL+, viz. PSPACE-completeness <cit.>. Still, the PSPACE-completeness result alone gives a rather crude estimate of the amount of computational resources, such as memory, needed for model checking. Main ideas and contributions. In this paper we take an alternative approach to the semantic analysis and model checking of fragments of ATL*, concentrating in particular on ATL+. Our analysis is not based on the standard compositional semantics but on a new, game-theoretic semantics (GTS). The main aims and contributions of the paper are three-fold: * We introduce an adequate game-theoretic semantics for ATL+, equivalent to the standard compositional semantics. * We propose new model checking algorithms for ATL+ and some of its fragments, using the GTS developed here, rather than the standard semantics. We also analyse more precisely the use of memory resources in ATL+ via the GTS. * We apply the GTS-based approach to model checking in order to identify new tractable fragments of ATL+. The main part of the paper consists of a detailed presentation and analysis of the new GTS for ATL+.
In particular, we obtain results similar to those in our earlier work <cit.>, where we defined a GTS for ATL. We establish, inter alia, the surprising result that it is always sufficient to consider finite paths only when ATL+ formulae are evaluated via the GTS, even when considering infinite models. Since we are dealing with ATL+ as opposed to ATL, a range of new technical ideas and mechanisms are needed for the correct evaluation of multiple temporal goals pursued simultaneously by the proponent coalition. The approach via GTS enables us, inter alia, to perform a more precise analysis of the memory resources needed for evaluating ATL+-formulae than the algorithm from <cit.>, which employs a mix of a path construction procedure for checking strategic formulae ⟨⟨A⟩⟩Φ on one hand, and the standard labelling algorithm on the other hand. Our model checking algorithm for ATL+ follows uniformly a procedure directly based on the GTS and also enables us, inter alia, to identify and correct a flaw in the model checking procedure of <cit.> and some of the claims on which it is based. Yet, the PSPACE upper bound result of <cit.> is easily confirmed by our algorithm, and we provide a new simple proof of that result. Besides new methods, we use some nice ideas from <cit.>. As a new complexity result obtained via the GTS, we identify a natural hierarchy of fragments of ATL+ that extend ATL and have tractable (PTIME-complete) model checking. The hierarchy is based on bounding the Boolean strategic width of formulae. We denote the new fragments in the hierarchy by ATL+^k for different positive integers k. Here ATL+^k contains those formulae of ATL+ where subformulae ⟨⟨A⟩⟩Φ are restricted such that Φ is a Boolean combination of at most k formulae. Note that thus ATL+^1 corresponds to plain ATL. The current paper extends the results in <cit.>, where a GTS for ATL is considered, in various non-trivial ways. Firstly, several new ideas and technical notions, such as the role of a seeker and the use of a truth function, are introduced here in order to enable the transition from ATL to ATL+ in the GTS setting. Secondly, a useful and generally elucidating link between our GTS and Büchi games is identified. That link applies readily also to the simpler evaluation games in <cit.>. Thirdly, and most importantly, we show how to use the new upgraded semantics in a model checking procedure for ATL+ and the fragments ATL+^k. This would not have been possible with the semantics of <cit.>. The current paper is the journal version of <cit.>. We extend <cit.> by, inter alia, including a range of new results on systems of bounded semantics based on finite transducers. We analyse the amount of memory resources needed for winning strategies and establish tight lower and upper bounds for it. We notice that in transducer-based semantics, an exponential amount of memory with respect to formula size is required. However, only a linear amount of this is actually used in any concrete single evaluation process of a formula. Based on this we argue that the transducer-based approach does not give a complete analysis of the requirement of memory resources. Other, indirectly related works include a series of papers, incl. <cit.>, <cit.>, <cit.>, <cit.> on a variety of explicit strategy logics, as the development of game-theoretic semantics is a natural challenge arising in the present context. Structure of the paper. After the preliminaries in Section <ref>, we define a bounded, finitely bounded, and unbounded game-theoretic semantics for ATL+ in Section <ref>.
In Section <ref> we prove equivalence of the bounded and unbounded versions with the standard compositional semantics of ATL+ with perfect-recall strategies. In Section <ref> we apply the GTS to the model checking problem for ATL+ and identify a hierarchy of tractable fragments of it. In Section <ref> we study the transducer-based bounded memory semantics for these fragments. We then conclude in Section <ref>. § PRELIMINARIES In this section we define concurrent game models and the syntax and the (perfect-recall) semantics for ATL+. We also introduce some new terminology and notations that will be used later in this paper. A concurrent game model (CGM) is a tuple ℳ := (Ag, St, Π, Act, d, o, v) which consists of: – The following non-empty sets: agents Ag = {a_1,…,a_k}, states St, proposition symbols Π, actions Act; – The following functions: an action function d: Ag × St → 𝒫(Act)∖{∅} which assigns a non-empty set of actions available to each agent at each state; a transition function o which assigns an outcome state o(q, α⃗) to each state q∈St and action profile (a tuple of actions α⃗ = (α_1,…,α_k) such that α_i ∈ d(a_i,q) for each a_i∈Ag); and finally, a valuation function v: Π → 𝒫(St). We use symbols p, p_0, p_1, … to denote proposition symbols and q, q_0, q_1, … to denote states. Sets of agents are called coalitions. The complement Ā = Ag ∖ A of a coalition A is the opposing coalition of A. The set action(A,q) of action tuples available to coalition A at state q∈St is defined as action(A,q) := {(α_i)_a_i∈A | α_i ∈ d(a_i,q) for each a_i∈A}. Let ℳ^* = (Ag, St, Π, Act, d, o, v) be the CGM where Ag = {a_1,a_2}, St = {q_0,q_1,q_2,q_3,q_4}, Π = {p_1,p_2,p_3}, Act = {α,β}, and d, o and v are defined as follows (action profiles list the action of a_1 first): d(a_2,q_0) = d(a_1,q_1) = {α,β} and otherwise d(a,q) = {α}; o(q_0,αα)=q_1, o(q_0,αβ)=q_2, o(q_1,αα)=q_3, o(q_1,βα)=q_4, o(q_2,αα)=q_3, o(q_3,αα)=q_1 and o(q_4,αα)=q_4; v(p_1)={q_2,q_4}, v(p_2)={q_3} and v(p_3)={q_1}. [Figure: the transition graph of ℳ^*, with q_1 labelled p_3, q_2 and q_4 labelled p_1, and q_3 labelled p_2, and the transitions of o as edge labels.] Let ℳ = (Ag, St, Π, Act, d, o, v) be a CGM. A path in ℳ is a sequence Λ: ℕ → St of states such that for each n∈ℕ, we have Λ[n+1] = o(Λ[n], α⃗) for some admissible action profile α⃗ in Λ[n]. A finite path (aka history) is a finite prefix of a path in ℳ. We let Paths(ℳ) denote the set of all paths in ℳ and Paths_fin(ℳ) the set of all finite paths in ℳ.[Note that, according to this terminology, a "path" always refers to an infinite path. We use this terminology since we mostly consider infinite paths.] A positional strategy of an agent a∈Ag is a function s_a: St → Act such that s_a(q) ∈ d(a,q) for each q∈St. A perfect-recall strategy, or hereafter just strategy, of agent a∈Ag is a function s_a: Paths_fin(ℳ) → Act such that s_a(λ) ∈ d(a,λ[k]) for each λ∈Paths_fin(ℳ), where λ[k] is the last state in λ. A collective strategy S_A for A⊆Ag is a tuple of individual strategies, one for each agent in A.
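As a concrete illustration, the following minimal Python sketch (names and encodings hypothetical) represents the CGM ℳ^* of Example <ref> as plain dictionaries; the transition function o follows the figure above.

Ag = ("a1", "a2")
St = ("q0", "q1", "q2", "q3", "q4")
Act = ("alpha", "beta")

def d(agent, state):
    """Actions available to an agent at a state."""
    if (agent, state) in {("a2", "q0"), ("a1", "q1")}:
        return {"alpha", "beta"}
    return {"alpha"}

# transition function on admissible action profiles (action of a1, action of a2)
o = {
    ("q0", ("alpha", "alpha")): "q1",
    ("q0", ("alpha", "beta")): "q2",
    ("q1", ("alpha", "alpha")): "q3",
    ("q1", ("beta", "alpha")): "q4",
    ("q2", ("alpha", "alpha")): "q3",
    ("q3", ("alpha", "alpha")): "q1",
    ("q4", ("alpha", "alpha")): "q4",
}

v = {"p1": {"q2", "q4"}, "p2": {"q3"}, "p3": {"q1"}}

# sanity check: o is defined only on admissible action profiles
for (q, (x, y)) in o:
    assert x in d("a1", q) and y in d("a2", q)
print("CGM M* encoded with", len(St), "states and", len(o), "transitions")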
With Paths(q,S_A) we denote the set of all paths emerging in plays beginning from q where the agents in A follow the strategy S_A. The formulae of ATL+ are defined by the following grammar. State formulae: φ ::= p | ¬φ | φ∨φ | ⟨⟨A⟩⟩Φ (p∈Π). Path formulae: Φ ::= φ | ¬Φ | Φ∨Φ | Xφ | φUφ. Other Boolean connectives are defined as usual, and furthermore, Fφ, Gφ and φRψ are abbreviations for ⊤Uφ, ¬(⊤U¬φ), and ¬(¬φU¬ψ), respectively. With Φ and Ψ we denote path formulae only; φ, ψ, and χ denote any formulae. Let ℳ be a CGM. Truth of state and path formulae of ATL+ is defined, respectively, with respect to states q∈St and paths Λ∈Paths(ℳ), inductively as follows, where φ, ψ are state formulae: * ℳ,q ⊨ p iff q∈v(p) (for p∈Π). * ℳ,q ⊨ ¬φ iff ℳ,q ⊭ φ. * ℳ,q ⊨ φ∨ψ iff ℳ,q ⊨ φ or ℳ,q ⊨ ψ. * ℳ,q ⊨ ⟨⟨A⟩⟩Φ iff there exists a (perfect-recall) strategy S_A such that ℳ,Λ ⊨ Φ for each Λ∈Paths(q,S_A). * ℳ,Λ ⊨ φ iff ℳ,Λ[0] ⊨ φ. * ℳ,Λ ⊨ Xφ iff ℳ,Λ[1] ⊨ φ. * ℳ,Λ ⊨ ¬Φ iff ℳ,Λ ⊭ Φ. * ℳ,Λ ⊨ Φ∨Ψ iff ℳ,Λ ⊨ Φ or ℳ,Λ ⊨ Ψ. * ℳ,Λ ⊨ φUψ iff there exists i∈ℕ such that ℳ,Λ[i] ⊨ ψ and ℳ,Λ[j] ⊨ φ for all j < i. The set of subformulae, SUB(φ), of a formula φ is defined as usual. Subformulae with a temporal operator as the main connective will be called temporal subformulae, while subformulae with ⟨⟨A⟩⟩ as the main connective are strategic subformulae. The subformula Ψ of a formula φ = ⟨⟨A⟩⟩Ψ is called the temporal objective of φ. We also define the set At(Φ) of relative atoms of Φ as follows: * At(χ∨χ') = At(χ)∪At(χ') and At(¬χ) = At(χ). * At(⟨⟨A⟩⟩χ) = {⟨⟨A⟩⟩χ} and At(p) = {p} for p∈Π. * At(χUχ') = {χUχ'} and At(Xχ) = {Xχ}. We say that χ∈At(Φ) occurs positively (resp. negatively) in Φ if χ has an occurrence in the scope of an even (resp. odd) number of negations in Φ. We denote by SUB_At(Φ) the subset of SUB(Φ) containing all relative atoms of Φ and also all Boolean combinations χ of these relative atoms such that χ∈SUB(Φ). Let φ^* := ¬p_2 ∧ (p_1 ∨ ⟨⟨a_1⟩⟩Ψ) and χ^* := ⟨⟨a_1⟩⟩Ψ, where Ψ := (¬Xp_3 ∧ ⟨⟨a_2⟩⟩Xp_1) ∨ (Fp_1 ∧ (¬p_1)Up_2). Written without using abbreviations, Ψ becomes (¬Xp_3 ∧ ⟨⟨a_2⟩⟩Xp_1) ∨ ((⊤Up_1) ∧ ((¬p_1)Up_2)). Here At(Ψ) = {Xp_3, ⟨⟨a_2⟩⟩Xp_1, ⊤Up_1, (¬p_1)Up_2}, where ⟨⟨a_2⟩⟩Xp_1 is a state formula and the rest are path formulae. The formula Xp_3 occurs negatively in Ψ and the rest of the formulae in At(Ψ) occur positively in Ψ. § GAME-THEORETIC SEMANTICS In this section we define bounded, finitely bounded and unbounded evaluation games for ATL+. These games give rise to three different systems of semantics, namely, the bounded, finitely bounded and unbounded GTS for ATL+. These systems of semantics were defined for plain ATL already in <cit.>. The principal difference between the bounded and unbounded GTS is that the bounded variant forces games to end after a finite number of steps. This is a significant difference achieved, as we shall see, via requiring the players to choose ordinal numbers that can intuitively be considered to determine upper bounds for game durations (see also Example <ref>). In the unbounded semantics, no such ordinals are used, and the games can continue for infinitely many rounds. As explained in <cit.>, the difference between bounded and unbounded semantics is directly analogous to the difference between for-loops and while-loops. Indeed, for-loops require an extra parameter that determines the number of loop iterations, and while-loops can possibly loop infinitely long. Having both the bounded and unbounded semantics at our disposal will prove beneficial in Section <ref> where we discuss model checking. Indeed, we shall need the unbounded semantics for connecting fragments of ATL+ to Büchi games and thereby obtaining novel tractability results.
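Before turning to the two game variants in detail, the following minimal Python sketch (formula encoding hypothetical) computes At(Φ) together with the polarity of each occurrence, following the recursive definition above; it is run on the example formula Ψ, with conjunction expressed through ¬ and ∨, and Fp_1 written as ⊤Up_1.

# formulas are nested tuples: ("not", f), ("or", f, g), ("U", f, g),
# ("X", f), ("coal", agents, f), or a proposition symbol / "T" as a string
def atoms(phi, positive=True, acc=None):
    """Collect the relative atoms of phi with the polarity of each occurrence."""
    if acc is None:
        acc = []
    if isinstance(phi, str) or phi[0] in ("U", "X", "coal"):
        acc.append((phi, positive))          # relative atom: not descended into
    elif phi[0] == "not":
        atoms(phi[1], not positive, acc)
    elif phi[0] == "or":
        atoms(phi[1], positive, acc)
        atoms(phi[2], positive, acc)
    return acc

# Psi = (~X p3 & <<a2>>X p1) v (T U p1 & (~p1) U p2), with a & b written
# as ~(~a v ~b)
left = ("not", ("or", ("X", "p3"), ("not", ("coal", ("a2",), ("X", "p1")))))
right = ("not", ("or", ("not", ("U", "T", "p1")),
                 ("not", ("U", ("not", "p1"), "p2"))))
Psi = ("or", left, right)

for atom, pos in atoms(Psi):
    print(atom, "positive" if pos else "negative")   # 4 atoms, X p3 negative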
The bounded semantics, on the other hand, will be needed for our proof strategy of Theorem <ref>, which confirms the PSPACE-completeness of model checking ATL+. The unbounded and bounded semantics will be proved equivalent below. The finitely bounded semantics is not equivalent to these two. The difference between the finitely bounded and bounded semantics is that the parameters with which the players force the games to be finite are possibly infinite ordinals in the bounded semantics and finite ordinals in the finitely bounded semantics. The finitely bounded and bounded semantics are equivalent over finite models but not over infinite ones. The reason for introducing the finitely bounded semantics is that it provides a novel, interesting perspective on ATL+, while still being equivalent over finite (but not infinite) models with the standard semantics. Below we shall use some terminology and notational conventions introduced in <cit.>. §.§ Evaluation games: informal description Given a CGM ℳ, a state q_0 and a state formula φ, the evaluation game 𝒢(ℳ, q_0, φ) is, intuitively, a formal debate between two opponents, Eloise (E) and Abelard (A), about whether the formula φ is true at the state q_0 in the model ℳ. Eloise claims that φ is true, so she (initially) adopts the role of a verifier in the game, and Abelard tries to prove the formula false, so he is (initially) the falsifier. These roles (verifier, falsifier) can swap in the course of the game when negations are encountered in the formula. If P∈{E, A}, then P̄ denotes the opponent of P, i.e., P̄∈{E, A}∖{P}. We now provide an intuitive account of the bounded evaluation game and the bounded GTS for ATL+. The intuitions underlying the finitely bounded and unbounded GTS are similar. A reader unfamiliar with the concept of GTS may find it useful to consult, for example, <cit.> for GTS in general and <cit.> or <cit.> for ATL-specific GTS. The GTS for ATL+ presented here follows the general principles of GTS, with the main original feature being the treatment of strategic formulae ⟨⟨A⟩⟩Φ. We first give an informal account of the way such formulae are treated in our evaluation games. Formal definitions and some concrete examples will be given further on, beginning from Section <ref>. The evaluation of formulae of the type ⟨⟨A⟩⟩Φ in a given model is based on constructing finite paths in that model. The following two ideas are central. Firstly, the path formula Φ in ⟨⟨A⟩⟩Φ can be divided into goals for the verifier, these being the relative atoms ψ∈At(Φ) that occur positively in Φ, and goals for the falsifier, these being the relative atoms ψ∈At(Φ) that occur negatively in Φ. (Some formulae may be goals for both players.) For simplicity, let us assume for now that Φ is in negation normal form and all the atoms in At(Φ) are temporal formulae of the type Fp. Then the verifier's goals are eventuality statements Fp, while the falsifier's goals are statements Fp' that occur negated; note that the negation of Fp' is equivalent to the safety statement G¬p'. The verifier wishes to verify her/his[The genders of the players may be assigned randomly below at points when this causes no ambiguities and streamlines the presentation.] goals. The falsifier, likewise, wants to verify her/his goals, i.e., the falsifier wishes to falsify the related safety statements. Secondly, every temporal goal associated with ⟨⟨A⟩⟩Φ has a unique "finite determination point" on any given path where that goal can be verified by the player to whom the goal belongs.
This means the following. If a goal Fp of the verifier is true on an infinite path π, then there necessarily exists an earliest point q on that path where the fact that Fp holds on π becomes verified, simply because p is true at q. Indeed, the first point of π where p is true is the finite determination point q of Fp. Once Fp has been verified, it will remain true on π, no matter what happens on the path after q. Similarly, concerning the falsifier's goals, if G¬p' is false (and thus Fp' true) on an infinite path π', there is a unique point where G¬p' first becomes falsified, that point being the first state q' of π' where p' is true. That point q' is the finite determination point of the goal Fp' of the falsifier. Furthermore, G¬p' will remain false on the path no matter what happens further. (Note that there is no analogous finite determination point for G-goals, such as the goal in ⟨⟨A⟩⟩Gp, on a given infinite path. Note also that we discussed only the simple temporal goals Fp and Fp' here for simplicity, but every temporal goal—as long as it can be verified by the player to whom the goal belongs—does indeed have a finite determination point. This will become clear below.) Now, the game-theoretic evaluation procedure of an ATL+-formula ⟨⟨A⟩⟩Φ proceeds roughly as follows. The verifier controls the agents in the coalition A and the falsifier controls the agents in the opposing coalition Ā = Ag∖A. The players start constructing a path. (Each transition from one state to another is carried out according to the process "Step phase" defined formally in Section <ref>.) The verifier is first given a chance to verify some of her/his goals in Φ. The falsifier tries to prevent this and to possibly verify some of her/his own goals instead. During this path construction/verification process, the verifier is said to have the role of the seeker. A player is allowed to stay as the seeker for only a finite number of rounds. This is ensured by requiring the seeker to announce an ordinal[To see why finite ordinals do not suffice in general, note that the issue relates to infinite branching; see, e.g., Example 3.11 of <cit.> for details.], called a timer[Note that the term "timer" is used here differently from <cit.>.], before the path construction process begins, and then lower the ordinal each time a new state is reached. The process ends when the ordinal becomes zero or when the seeker is satisfied, having verified some of her goals. Since ordinals are well-founded, the process must terminate. After the verifier has ended her/his seeker turn, the falsifier may either end the game or take the role of the seeker. If (s)he decides to become the seeker, then (s)he sets a new timer and the path construction process continues for some finite number of rounds. When the falsifier is satisfied, having verified some of her/his goals, the verifier may again take the seeker's role, and so on. Thus, the verifier and falsifier take turns being the seeker, trying to reach (verify) their goals. The number of these alternations is bounded by a seeker turn counter, which is a finite number that equals the total number of goals in Φ. (The formal description of seeker turn alternation is given in the clause "Deciding whether to continue and adjusting the timer" in Section <ref>.) Each time a goal in Φ becomes verified, this is recorded in a truth function T.
(The recording of verified goals is described formally in the process "Adjusting the truth function" defined in Section <ref>.) The truth function carries the following information at any stage of the game: * The verifier's goals that have been verified. * The falsifier's goals that have been verified. * All other goals remain open. When neither of the players wants to become the seeker, or when the seeker turn counter becomes zero, the path construction process ends and the players play a standard Boolean evaluation game on Φ, using the values given by T; the open goals are given truth values as follows: * The verifier's open goals are (so far) not verified and thus considered false. * Likewise, the falsifier's open goals are (so far) not verified and thus considered false. Recall here that the falsifier's goals occur in the scope of a negation. Next we consider the conditions under which a player is "satisfied" with the current status of the truth function T—and thus wants to end the game—and when (s)he is "unsatisfied" and wants to continue the game as the seeker. Note that when the path construction ends, every goal is given a Boolean truth value based on the truth function T, as described above. With these values, the formula Φ is either true or false. If Φ is true with the current values based on T, then the verifier can win the Boolean game for Φ; dually, if Φ is not true with the values based on T, then the falsifier can win the Boolean game for Φ. Hence the players want to take the role of the seeker in order to modify the truth function T in such a way that the truth of Φ with respect to T changes from false to true (whence the verifier is satisfied) or from true to false (whence the falsifier is satisfied). The truth value of Φ with respect to T can keep changing when T is modified, but only a finite number of changes is possible. Indeed, the maximum number of such truth alternations is the total number of goals in Φ. In the general case, formulae of the type φUψ, Xφ and (state formulae) φ may also occur in At(Φ) as goals, and Φ does not have to be in negation normal form. Formulae of the type φUψ can be either verified, by showing that ψ is true, or falsified, by showing that φ is not true at related states. State formulae φ can only be verified at the initial state, and next-state formulae Xφ can only be verified at the second state on the path travelled. §.§ Evaluation games: formal description Now we present the bounded evaluation game, which uses the bounded transition game as a subgame for evaluating strategic subformulae. Interleaved with the definition we provide a running example that uses ℳ^* and φ^* from Examples <ref> and <ref>, respectively. §.§.§ Rules of the bounded evaluation game Let ℳ = (Ag, St, Π, Act, d, o, v) be a CGM, q_0∈St a state, φ a state formula and Γ>0 an ordinal called a timer bound. The Γ-bounded evaluation game 𝒢(ℳ, q_0, φ, Γ) between the players A (Abelard) and E (Eloise) is defined as follows. A location of the game is a tuple (P, q, ψ, T) where P∈{A, E}, q∈St is a state, ψ is a subformula of φ and T is a truth (history) function, mapping some subset of SUB(φ) into {⊤, ⊥, open}.[We note here that the values of T are only modified during transition games and that T is always a total function for all subformulae of φ that are relevant for the transition game that is played.] The initial location of the game is (E, q_0, φ, T_in), where T_in is the empty function.
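Before listing the rules, here is a minimal encoding of locations as a Python data type (a sketch; all names are hypothetical), mirroring the tuple (P, q, ψ, T) and the three truth values just introduced.

from dataclasses import dataclass
from typing import Any

TOP, BOT, OPEN = "top", "bot", "open"    # the three values of a truth function

@dataclass(frozen=True)
class Location:
    player: str     # "E" or "A": the verifier at this location
    state: str      # the current CGM state q
    formula: Any    # a subformula of the input formula
    truth: tuple    # truth function T as (formula, value) pairs; () is empty

initial = Location("E", "q0", "phi", ())  # the initial location (E, q0, phi, T_in)
print(initial)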
In every location (P, q, ψ, T), the player P is called the verifier and P̄ the falsifier for that location. Intuitively, q is the current state of the game and T encodes truth values of formulae on a path that has been constructed earlier in the game. Each location is associated with exactly one of the rules 1–6 given below. First we provide the rules for locations (P, q, ψ, T) where ψ is either a proposition symbol or has a Boolean connective as its main operator: 1. A location (P, q, p, T), where p∈Π, is an ending location of the evaluation game. If T ≠ ∅, then P wins the game if T(p) = ⊤ and else P̄ wins. Respectively, if T = ∅, then P wins if q∈v(p) and else P̄ wins. 2. From a location (P, q, ¬ψ, T) the game moves to the location (P̄, q, ψ, T). 3. In a location (P, q, ψ∨θ, T) the player P chooses one of the locations (P, q, ψ, T) and (P, q, θ, T), which becomes the next location of the game. We then define the rules of the evaluation game for locations with strategic formulae as follows. 4. Suppose a location (P, q, ⟨⟨A⟩⟩Φ, T) is reached. * If T ≠ ∅, then this location is an ending location where P wins if T(⟨⟨A⟩⟩Φ) = ⊤ and else P̄ wins. * If T = ∅, then the evaluation game enters a transition game g(P, q, ⟨⟨A⟩⟩Φ, Γ). The transition game is a subgame to be defined later on. The transition game eventually reaches an exit location (P', q', ψ, T'), and the evaluation game continues from that location. Note that an exit location only ends the transition game, so exit locations of transition games and ending locations of the evaluation game are different concepts. The rules corresponding to the temporal connectives are defined using the truth function T (updated in an earlier transition game) as follows. 5. A location (P, q, φUψ, T) is an ending location of the evaluation game. P wins if T(φUψ) = ⊤ and else P̄ wins. 6. Likewise, a location (P, q, Xφ, T) is an ending location. P wins if T(Xφ) = ⊤ and otherwise P̄ wins. These are the rules of the evaluation game. We note that the timer bound Γ will be used only in transition games. If Γ = ω, we say that the evaluation game is finitely bounded. The initial location of the finitely bounded evaluation game 𝒢(ℳ^*, q_0, χ^*, ω) (see Examples <ref> and <ref>) is (E, q_0, ⟨⟨a_1⟩⟩Ψ, ∅), from where the transition game g(E, q_0, ⟨⟨a_1⟩⟩Ψ, ω) begins. Consider the evaluation game 𝒢^* := 𝒢(ℳ^*, q_0, φ^*, ω) (see Examples <ref> and <ref>). The game begins from the initial location (E, q_0, ¬p_2∧(p_1∨⟨⟨a_1⟩⟩Ψ), ∅). It is easy to see that the rule for conjunction is that the falsifier gets to choose either of the conjuncts. So, here Abelard may first choose the next location of the game to be (E, q_0, ¬p_2, ∅) or (E, q_0, p_1∨⟨⟨a_1⟩⟩Ψ, ∅). If Abelard chooses the first option, then the next location is (A, q_0, p_2, ∅), where Abelard loses, since q_0∉v(p_2). Suppose now that Abelard chooses the second option. Now Eloise gets to choose the next location of the game to be (E, q_0, p_1, ∅) or (E, q_0, ⟨⟨a_1⟩⟩Ψ, ∅). If Eloise chose the first, she would lose immediately since q_0∉v(p_1). So, suppose Eloise chooses the second option. Then the transition game g(E, q_0, ⟨⟨a_1⟩⟩Ψ, ω) begins. §.§.§ Rules of the transition game Recall that transition games are subgames of evaluation games. Their purpose is to evaluate the truth of strategic subformulae in a game-like fashion. Now we give a detailed description of transition games.[A transition game for ATL+ is similar to the `embedded game' introduced in <cit.> for the GTS of ATL. The role of the seeker here is similar to the role of the controller in that embedded game.] A transition game g(P, q_0, ⟨⟨A⟩⟩Φ, Γ), where P∈{A, E}, q_0∈St, ⟨⟨A⟩⟩Φ∈ATL+ and Γ>0 is an ordinal, is defined as follows.
P is called the verifier in the transition game. The game g(P, q_0, ⟨⟨A⟩⟩Φ, Γ) is based on configurations, i.e., tuples (S, q, T, n, γ, x), where the player S∈{E, A} is called the seeker; q is the current state; T: At(Φ)→{⊤, ⊥, open} is a truth function; n∈ℕ is a seeker turn counter (n ≤ |At(Φ)|); γ is an ordinal called a timer; and x∈{i, ii, iii} is an index showing the current phase of the transition game. The game g(P, q_0, ⟨⟨A⟩⟩Φ, Γ) begins at the initial configuration (P, q_0, T_0, |At(Φ)|, Γ, i), with T_0(χ) = open for all χ∈At(Φ). The transition game g(E, q_0, ⟨⟨a_1⟩⟩Ψ, ω) begins from the initial configuration (E, q_0, T_0, 4, ω, i), since |At(Ψ)| = 4. (Note that the timer is initially ω in transition games occurring within finitely bounded evaluation games, but the timer will always have a finite value thereafter.) The transition game then proceeds by iterating the phases i, ii and iii, which we first describe informally; detailed formal definitions are given afterwards. i. Adjusting the truth function: In this phase the players make claims about the truth of state formulae at the current state q. If a player makes some claim, then the opponent may either: 1) accept the claim, whence the truth function is updated accordingly, or 2) challenge the claim. In the latter case the transition game ends and the truth of the claim is verified in a continued evaluation game. ii. Deciding whether to continue and adjusting the timer: Here the current seeker S may either continue her seeker turn and lower the value of the timer, or end her seeker turn. If S chooses the latter option, then the opponent S̄ of the seeker may either 1) take the role of the seeker and announce a new value for the timer or 2) end the transition game, whence the formula Φ is evaluated based on the current values of the truth function. iii. Step phase: Here the verifier P chooses actions for the agents in the coalition A at the current state q. Then P̄ chooses actions for the agents in the opposing coalition Ā. After the resulting transition to a new state q' has been made, the game continues again with phase i. We now describe the phases i, ii and iii in technical detail: i. Adjusting the truth function. Suppose the current configuration is (S, q, T, n, γ, i). Then the truth function T is updated by considering, one by one, each formula χ∈At(Φ) in some fixed order[We will see that the order here is irrelevant for the existence of winning strategies in the evaluation game. This is simply because the player with a winning strategy can make all the claims that are true and oppose all the other claims—regardless of the order in which the formulae are considered.]. If T(χ) ≠ open, then the value of χ cannot be updated. Else the value of χ may be modified according to the rules A–C below. A. Updating T on temporal formulae with U: Suppose that φUψ∈At(Φ). Now first the verifier P may claim that ψ is true at the current state q. If P makes that claim, then P̄ chooses either of the following: * P̄ accepts the claim of P, whence the truth function is updated so that φUψ is assigned the value ⊤ (φUψ becomes verified), hereafter indicated by φUψ ↦ ⊤.
* P̄ challenges the claim of P, whence the transition game ends at the exit location (P, q, ψ, ∅). (We note that, here and further, when a transition game ends, the evaluation game continues from the related exit location and the evaluation game will never return to the same exited transition game again.) If P does not claim that ψ is true at q, then P̄ may make that same claim (that ψ is true at q). If P̄ makes that claim, then the same two steps above concerning accepting and challenging are followed, but with P and P̄ swapped everywhere. Suppose then that neither of the players claims that ψ is true at q. Then first P̄ can claim that φ is false at q. If P̄ makes that claim, then P chooses either of the following: * P accepts the claim, whence the truth function is updated so that φUψ ↦ ⊥ (φUψ becomes falsified). * P challenges the claim, whence the transition game ends at the exit location (P, q, φ, ∅). If P̄ does not claim that φ is false at q, then P may make that claim. If P does, then the same steps as those above are followed, but with P and P̄ swapped. B. Updating T on proposition symbols and strategic formulae: The truth function can be updated on proposition symbols p∈At(Φ) and formulae ⟨⟨A'⟩⟩Ψ∈At(Φ) only when phase i is executed for the first time (so, q = q_0). In this case, given such a formula χ, first P can claim that χ is true at q. Now, if P̄ accepts this claim, then the truth function is updated such that χ ↦ ⊤. If P̄ challenges the claim, then the transition game ends at the exit location (P, q, χ, ∅). If P does not claim that χ is true at q, then P̄ may make that claim. If P̄ does, then the same steps are followed, but with P and P̄ swapped. C. Updating T on formulae with X: The truth function can be updated on formulae of the type Xψ∈At(Φ) only when phase i is executed for the second time in the transition game (so, q is some successor of q_0). First P can claim that ψ is true at q. If P̄ accepts that claim, then the truth function is updated such that Xψ ↦ ⊤. If P̄ challenges the claim, then the transition game ends at the exit location (P, q, ψ, ∅). If P does not claim that ψ is true at q, then P̄ can make that claim. If P̄ does, the same steps are followed, but with P and P̄ swapped. Note that in points B and C, the formulae cannot be mapped to ⊥ by the truth function T. But if these formulae are left with the value open, then they will be considered false by default if the transition game ends in phase ii (and the Boolean game is played). Intuitively this is because if no player has claimed these formulae to be true, then the players have agreed that they are indeed false. If neither player makes any claim which would update the value of a formula χ∈At(Φ), then the value of χ is left open. Once the values of the truth function T have been updated (or left as they are) for all formulae in At(Φ), a new truth function T' is obtained. The transition game then moves to the new configuration (S, q, T', n, γ, ii). In the configuration (E, q_0, T_0, 4, ω, i) the players begin adjusting T_0, for which initially T_0(χ) = open for every χ∈At(Ψ). Since it is the first round of the transition game, the value of Xp_3 cannot be modified, but the value of ⟨⟨a_2⟩⟩Xp_1 can be modified. Suppose that Eloise claims that ⟨⟨a_2⟩⟩Xp_1 is true at q_0. Now Abelard could challenge the claim, whence the transition game ends and the evaluation game continues from the location (E, q_0, ⟨⟨a_2⟩⟩Xp_1, ∅) (which leads to a new transition game g(E, q_0, ⟨⟨a_2⟩⟩Xp_1, ω)). Suppose Abelard does not challenge the claim. Then ⟨⟨a_2⟩⟩Xp_1 is mapped to ⊤. Since ⊤Up_1 and (¬p_1)Up_2 occur positively in Ψ, Eloise has an interest only to verify them and Abelard has an interest only to falsify them.
Eloise could verify ⊤Up_1 by claiming that p_1 is true, or verify (¬p_1)Up_2 by claiming that p_2 is true. But if Eloise makes either of these claims, then Abelard wins the whole evaluation game by challenging, since q_0∉v(p_1)∪v(p_2). Suppose that Eloise does not make any claims. Now, Abelard could claim that ¬p_1 is not true, in order to falsify (¬p_1)Up_2. But if he does that, he loses the evaluation game if Eloise challenges, since q_0∉v(p_1). Suppose that Abelard does not make any claims either. Then the transition game proceeds to the configuration (E, q_0, T, 4, ω, ii), where T(⟨⟨a_2⟩⟩Xp_1) = ⊤ and T(χ) = open for the other χ∈At(Ψ). ii. Deciding whether to continue and adjusting the timer. Suppose a configuration (S, q, T, n, γ, ii) has been reached. Assume first that γ ≠ 0. Then the seeker S can choose whether to continue the transition game as the seeker. If yes, then S chooses some ordinal γ' < γ and the transition game continues from (S, q, T, n, γ', iii). If S does not want to continue, or if γ = 0, then one of the following applies. (a) Suppose that n ≠ 0. Then the player S̄ chooses whether she wishes to continue the transition game. If yes, then S̄ chooses an ordinal γ' < Γ (so, in fact resets the timer value) and the transition game continues from (S̄, q, T, n−1, γ', iii). Otherwise the transition game ends at the exit location (P, q, Φ, T). (b) Suppose that n = 0. Then the transition game ends at the exit location (P, q, Φ, T). In (E, q_0, T, 4, ω, ii) Eloise may decide whether to continue the transition game as the seeker. Suppose that Eloise does not continue, whence Abelard may now become the seeker and continue the transition game, or end it. If Abelard ends the transition game, then the evaluation game is continued from (E, q_0, Ψ, T). But because T(Xp_3) = open and T(⟨⟨a_2⟩⟩Xp_1) = ⊤, Eloise can then win the evaluation game by choosing the left disjunct of Ψ (recall that with these values of T Eloise is then guaranteed to win). Suppose thus that Abelard decides to become the seeker, whence he chooses some m < ω and the next configuration is (A, q_0, T, 3, m, iii). iii. Step phase[The procedure in this phase is analogous to the step game which was introduced for the GTS of ATL in <cit.>.]. Suppose that the configuration is (S, q, T, n, γ, iii). (a) First, P chooses an action α_i∈d(a_i, q) for each a_i∈A. (b) Then, P̄ chooses an action α_i∈d(a_i, q) for each a_i∈Ā. The resulting action profile produces a successor state q' := o(q, α_1, …, α_k). The transition game then moves to the configuration (S, q', T, n, γ, i). In the configuration (A, q_0, T, 3, m, iii) Eloise (who is the verifier P) first chooses an action for agent a_1, then Abelard chooses an action for agent a_2, which produces either successor state q_1 or q_2. Then the transition game continues from the configuration (A, q_j, T, 3, m, i), where j∈{1, 2}. This concludes the definition of the rules for the phases i, ii and iii in the transition game g(P, q_0, ⟨⟨A⟩⟩Φ, Γ).
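The finiteness of transition games can be seen directly from these rules: within a seeker turn the timer strictly decreases, and the counter n bounds the number of seeker turns. The following skeletal Python sketch (claim resolution and the CGM step are stubbed out; all names hypothetical) makes this loop structure explicit for the finitely bounded case, where timers are natural numbers.

from dataclasses import dataclass, replace

@dataclass
class Config:
    seeker: str     # "E" or "A"
    state: str      # current state q
    truth: dict     # truth function T on the relative atoms
    n: int          # seeker turn counter
    timer: int      # finite timer (finitely bounded case, Gamma = omega)

def transition_game(cfg, wants_to_continue, pick_timer, step):
    while True:
        # phase i (adjusting the truth function) is stubbed out here
        # phase ii: deciding whether to continue and adjusting the timer
        if cfg.timer > 0 and wants_to_continue(cfg.seeker, cfg):
            cfg = replace(cfg, timer=cfg.timer - 1)          # gamma' < gamma
        else:
            opponent = "A" if cfg.seeker == "E" else "E"
            if cfg.n > 0 and wants_to_continue(opponent, cfg):
                cfg = replace(cfg, seeker=opponent, n=cfg.n - 1,
                              timer=pick_timer(opponent, cfg))
            else:
                return cfg               # exit location: evaluate Phi under T
        # phase iii: step phase; 'step' yields the successor state
        cfg = replace(cfg, state=step(cfg))

# toy run on a one-state model: the opponent takes over the seeker role once
out = transition_game(
    Config("E", "q0", {}, n=2, timer=1),
    wants_to_continue=lambda who, c: c.n == 2,   # take over only once
    pick_timer=lambda who, c: 1,
    step=lambda c: c.state,
)
print(out)   # terminates, with n and the timer decreased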
Returning to the running example: suppose that the transition game continues from the configuration (A, q_2, T, 3, m, i). Since it is the second round of the transition game, Abelard could now try to verify Xp_3 by claiming that p_3 is true at q_2. However, then Eloise could win by challenging this claim. But if Abelard does not try to verify Xp_3 at that configuration, then the value of Xp_3 will stay open. Hence, when Abelard decides to end his seeker's turn, or when the timer m is lowered to 0, then Eloise may end the transition game and win the evaluation game from a location of the form (E, q', Ψ, T''). Suppose now that the transition game continues from the configuration (A, q_1, T, 3, m, i). Suppose that Abelard verifies Xp_3 by claiming that p_3 is true and that Eloise does not challenge that claim. If the transition game now ended at a location (E, q_1, Ψ, T''), where T''(Xp_3) = ⊤, Abelard would win. Thus, if Abelard decides to quit the transition game, then Eloise wants to continue as a seeker from a configuration (E, q_1, T'', 2, m', iii) for some m' < ω. Then Eloise can choose the action α for agent a_1 and lower the timer to 2, whence the next configuration is (E, q_3, T'', 2, 2, i). Eloise can then verify (¬p_1)Up_2 at it by claiming that p_2 is true at q_3. Furthermore, Eloise can move via q_1 to q_4 and verify ⊤Up_1 there, before the timer reaches 0. Then Eloise will win when the evaluation game is continued from a location of the form (E, q_4, Ψ, T'''). §.§.§ The unbounded evaluation game Let 𝒢(ℳ, q, φ, Γ) be a Γ-bounded evaluation game. We can define a corresponding unbounded evaluation game, 𝒢(ℳ, q, φ), by replacing transition games g(P, q, ⟨⟨A⟩⟩Φ, Γ) with unbounded transition games, g(P, q, ⟨⟨A⟩⟩Φ); these are played with the same rules as g(P, q_0, ⟨⟨A⟩⟩Φ, Γ) except that timers γ are not used in them. Instead, the players can keep the role of a seeker for arbitrarily long, and thus the game may last for an infinite number of rounds. In the case of an infinite play, the player who took the last seeker turn loses the entire evaluation game. (Recall that the number of seeker alternations is bounded by the number |At(Φ)|.) §.§ Defining the game-theoretic semantics In this section we define the game-theoretic semantics for ATL+ by equating truth of formulae with the existence of a winning strategy for Eloise in the corresponding evaluation game. We begin with the following remark, which will be relevant for the notion of positional strategies in evaluation games. The description of transition games above is based on a simplified notion of configurations. The phases i–iii consist of several "subphases" and more information should be encoded into configurations. The full notion of configuration should also include: – In phase i, a counter indicating the relative atom currently under consideration by the players; flags for each player indicating whether and what claim (s)he has made on the truth of the current relative atom; a three-valued flag indicating if it is the first, second, or some later round in the transition game.
– For phase ii, a flag indicating whether the current seeker wants to continue, and for phase iii, a record of the current choice of actions for the agents in A by P. For technical simplicity, we omit these formal details. Hereafter a position in an evaluation game will mean either a location of the form (P, q, φ, T) or a configuration in the fully extended form described in the remark above. By this definition, at every position only one of the players (Abelard or Eloise) has a move to choose. Thus, the entire evaluation game—including transition games as subgames—is a turn-based game of perfect information. By the game tree T_𝒢 of an evaluation game 𝒢, we mean the tree whose nodes correspond to all positions arising in 𝒢, and every branch of which corresponds to a possible play of 𝒢 (including transition games as subgames). Note that some of these plays may be infinite, but only because an embedded transition game does not terminate, in which case a winner in the entire evaluation game is uniquely assigned according to the rules in Section <ref>. The formal definitions of players' memory-based strategies in the evaluation games are defined as expected, based on histories of positions. As usual, a strategy for a player P is called winning if, following that strategy, P is guaranteed to win regardless of how P̄ plays. A strategy is positional if it depends only on the current position. We can also define strategies for transition games that arise within evaluation games; note that these are substrategies of the strategies in evaluation games. A strategy τ for a transition game is called winning for P if * every exit location that can be reached with τ is a winning location for P in the evaluation game that continues from the exit location, and additionally, * in the alternative scenario where the transition game continues infinitely long while τ is followed (which is possible only in unbounded games), the player P is not the player who holds the (necessarily last) seeker's turn that lasts infinitely long. Let ℳ be a CGM, q∈St, φ∈ATL+ and Γ an ordinal. Truth of φ in the Γ-bounded (⊩_Γ), resp. unbounded (⊩) GTS is defined as follows: ℳ,q ⊩_Γ φ (resp. ℳ,q ⊩ φ) iff Eloise has a positional winning strategy in 𝒢(ℳ, q, φ, Γ) (resp. 𝒢(ℳ, q, φ)). We will show later that evaluation games are determined with positional strategies. Hence, if we allowed perfect-recall strategies in the truth definition above, we would obtain equivalent semantics. Consider the CGM ℳ = (Ag, St, Π, Act, d, o, v), where Ag = {a_1, a_2}, St = {q_0, q_1, q_2}, Π = {p_1, p_2}, Act = {α, β}, d(a_1, q_0) = d(a_2, q_1) = {α, β} and d(a, q) = {α} otherwise, o(q_0, βα) = q_0, o(q_0, αα) = o(q_1, αβ) = q_1, o(q_1, αα) = o(q_2, αα) = q_2, and v(p_1) = {q_0}, v(p_2) = {q_2}. [Figure: the transition graph of ℳ; q_0 (labelled p_1) loops with βα and moves to q_1 with αα; q_1 loops with αβ and moves to q_2 (labelled p_2) with αα; q_2 loops with αα.] Let φ := ⟨⟨a_2⟩⟩(Gp_1 ∨ Fp_2) (here Gp_1 = ¬(⊤U¬p_1)). We describe a winning strategy for Eloise in the unbounded evaluation game 𝒢(ℳ, q_0, φ). Eloise immediately ends her seeker's turn and does not make claims while at q_0. If Abelard makes claims at q_0, she challenges those claims.
If Abelard ends the transition game at q_0, Eloise wins the evaluation game by choosing the disjunct ◻p_1, as the value of ◇¬p_1 has stayed undefined. Suppose that Abelard forces a transition to q_1 by choosing α for a_1. If he claims that ¬p_1 is true at q_1, Eloise does not challenge. If Abelard ends his seeker turn at q_1, Eloise becomes the seeker. At q_1 she forces a transition to q_2 by choosing α for a_2. Then she verifies ◇p_2 by claiming that p_2 is true at q_2. If the transition game ends at q_2, she wins by choosing ◇p_2, whose value is ⊤. Note that by following this strategy, Eloise cannot stay as a seeker for infinitely long.

We will see later that there is never a need for more than |At(Φ)| seeker alternations in a transition game for a formula ⟨⟨A⟩⟩Φ. In Example <ref> we saw that there are cases where exactly |At(Φ)| seeker alternations are needed in the corresponding transition game. The following example generalizes the setting of Example <ref> by showing that no fixed upper bound for the number of seeker alternations suffices for all transition games.

Let φ_k = ⟨⟨a_2⟩⟩Ψ_k, where Ψ_k := ◻r_0 ∨ ⋁_{1≤i≤k}(◇p_i ∧ ◻r_i). Consider the following CGM ℳ (cf. the model in Example <ref>).

[Figure: a chain of states q_0, q_1, q_1', q_2, q_2', …, q_{n-1}', q_n, q_n', q_fin, where q_0 satisfies r_0,…,r_n, each q_i satisfies r_i,…,r_n, each q_i' satisfies p_i, r_i,…,r_n, and q_fin satisfies only p_n; every state has a loop, and consecutive states are connected by αα-transitions.]

At q_0 Eloise wants to end her seeker turn immediately, as r_0 is “still” true. When Abelard becomes the seeker, he wants to make a transition to q_1 and falsify r_0 there. Since Abelard then has no reason to continue as a seeker, he gives the seeker turn to Eloise. Now Eloise wants to make a transition to q_1' in order to verify p_1; since r_1 is still true, Eloise then has no reason to continue as a seeker. We may suppose that the transition game continues like this, so that the seeker role is swapped after every transition and the p_i are verified while the r_i are falsified. When Abelard finally becomes the seeker at q_n', the maximum number |At(Ψ_k)| = 2k+1 of seeker alternations has been used. Then Abelard makes a transition to q_fin, falsifies r_n, and wins the “boolean game” for Ψ_k with the values of the (fully updated) truth function.

§ ANALYSING EVALUATION GAMES

In this section we will analyse the properties of the evaluation games of ATL^+. We first prove positional determinacy of both bounded and unbounded evaluation games.
Then we find so-called stable timer bounds for bounded evaluation games and show that, with them, the bounded GTS becomes equivalent to the unbounded GTS. Finally, we present the notion of a regular strategy, which will be needed for proving the equivalence of the GTS and the standard compositional semantics of ATL^+ in the next section.

§.§ Positional determinacy

Here we prove positional determinacy of both bounded and unbounded evaluation games. Recall here that positions are either locations in evaluation games or configurations in transition games—in the extended sense which was discussed in Remark <ref>.

Bounded evaluation games are determined and the winner has a positional winning strategy.

(Sketch) Since ordinals are well-founded and they must decrease during transition games, it is easy to see that the game tree is well-founded. Thus positional determinacy follows easily, essentially by backward induction.

Unbounded evaluation games are determined and the winner has a positional winning strategy.

(Sketch) This claim can be proved in a similar way as the Gale–Stewart theorem. Another way to prove the claim is to show that unbounded evaluation games are essentially Büchi games (see, e.g., <cit.> for Büchi games). The details of the proof via Büchi games are in <cit.>, but the principal idea is to set up a Büchi condition such that Eloise wins the Büchi game if the set of positions visited infinitely often is included in the union of the configurations of the transition games where Abelard is the seeker and the positions of the evaluation game where Eloise has already won.

We will show that unbounded evaluation games are essentially Büchi games (see, e.g., <cit.>). We first discuss the case where the underlying CGM ℳ is finite. We follow the technicalities for Büchi games from <cit.>, which gives an excellently detailed and to-the-point presentation of the related basic notions.

Take a triple (ℳ,q,φ), where ℳ is a finite CGM, q a state of ℳ, and φ a formula of ATL^+. We will convert this triple into a Büchi game BG such that ℳ,q ⊩ φ iff player 2 has a winning strategy in BG from a certain position of BG determined by the state q.

The required Büchi game BG corresponds almost exactly to the unbounded evaluation game 𝒢(ℳ,q,φ). The set of states of BG is the finite set of positions in 𝒢(ℳ,q,φ). The states of BG assigned to player 1 (resp., player 2) of BG are the positions where Abelard (resp., Eloise) is to move. The edges of the binary transition relation E of BG correspond to the changes of positions in 𝒢(ℳ,q,φ). Also, E is defined such that ending locations in the evaluation game connect (only) to themselves via E. This ensures that every state of BG has a successor state.

We set a co-Büchi objective such that an infinite play of BG is winning for player 2 iff the set of states visited infinitely often is a subset of the union of the following sets of states of BG:
* States of BG corresponding to configurations of the transition games where Abelard is the seeker.
* States of BG corresponding to such ending locations in the game 𝒢(ℳ,q,φ) where Eloise has already won.

Clearly, Eloise (resp., Abelard) has a positional winning strategy in the evaluation game starting at a position 𝑝𝑜𝑠 of the evaluation game iff player 2 (resp., player 1) in BG has a positional winning strategy from the state of BG corresponding to 𝑝𝑜𝑠. Finite Büchi games enjoy positional determinacy (see e.g. <cit.>), which completes the case of finite CGMs. For infinite CGMs, the argument is the same but requires positional determinacy of Büchi games on infinite game graphs.
That fact is well-known and follows easily from Theorem 4.3 of <cit.>.

Let 𝒢(ℳ,q,φ) be an unbounded evaluation game. Note that the positions of an evaluation game form a tree of finite depth. We prove the claim by induction on the positions of 𝒢(ℳ,q,φ) in that tree. The only nontrivial case is when a position leads to an unbounded transition game g(·,q_0,⟨⟨A⟩⟩Φ). We make the inductive hypothesis that every possible exit position of g(·,q_0,⟨⟨A⟩⟩Φ) is a winning position for one of the players.

We now do an inner induction on the seeker alternation counter n ≤ |𝐴𝑡(Φ)| and show that any configuration in g(·,q_0,⟨⟨A⟩⟩Φ) of the form c = (·,q,T,n,x) is a winning configuration for one of the players. Suppose that c is not a winning configuration for the current seeker. Now it is easy to see that the other player can play in such a way that the next configuration c' is not a winning position for the seeker. By induction on the length of the transition game, we can show that the other player has such a strategy τ that, when (s)he follows τ, one of the following holds:
* The transition game ends at an exit position that is not a winning position for the seeker.
* The seeker decides to stop being a seeker at some configuration that is not a winning position for him/her.
* The transition game lasts for an infinite number of rounds.
If 1 holds, then by the (outer) inductive hypothesis, the exit position of the game is a winning position for the other player. If 2 holds, then by the (inner) inductive hypothesis, the next configuration is a winning position for the other player. And if 3 holds, then the other player wins, since the opponent was the seeker. Hence the initial configuration c is a winning position for one of the players.

By the positional determinacy, we have the following consequence: If Eloise (Abelard) has a perfect-recall winning strategy in a bounded or unbounded evaluation game (or transition game), then she (he) has a positional winning strategy in that game.

§.§ Finding stable timer bounds

In this section we study which timer bounds are “stable” for a given model. Intuitively this means that a timer bound Γ is stable for a model ℳ if neither of the players can benefit from announcing timers that are higher than (or equal to) Γ. We will see that, by finding stable timer bounds, we can make the bounded GTS equivalent to the unbounded GTS. Moreover, the identification of stable timer bounds for finite models will be necessary for our model checking proofs in Section <ref>. We next consider a “semi-bounded” variant of the transition game in which one player must use timers when being the seeker and the other is allowed to play without timers. A timer bound Γ is stable for an unbounded transition game g(·,q_0,⟨⟨A⟩⟩Φ) if the player with a winning strategy in g(·,q_0,⟨⟨A⟩⟩Φ) can, in fact, win using timers below Γ. We first identify stable timer bounds for finite models.

Let ℳ be a finite CGM, q_0 a state of ℳ, and Φ a path formula. Then k := |St|·|𝐴𝑡(Φ)| is a stable timer bound for g(·,q_0,⟨⟨A⟩⟩Φ).

We give a detailed proof sketch. Let c = (E,q,T,n,x) be a configuration (for an unbounded game, so no timer is listed). Suppose that the exit location (·,q,Φ,T) is not a winning location for Eloise. Then she wants to stay as the seeker until the truth function is modified to a T' that makes Φ true. Since T is updated state-wise, it is not beneficial for Eloise to go in loops in which T is not updated. Hence, if Eloise has a winning strategy from c, then she has a winning strategy in which T is updated at least once every |St| rounds. Since T can be updated at most |𝐴𝑡(Φ)| times, we see that a timer greater than k = |St|·|𝐴𝑡(Φ)| is not needed.
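(For instance, in a three-state model and for a path formula with two relative atoms, as in Example <ref>, this bound evaluates to k = 3·2 = 6.)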
If ℳ is a finite CGM, the unbounded GTS is equivalent on ℳ to the (|St|·|φ|)-bounded GTS.

In order to find stable timer bounds for infinite models, we give the following definition (cf. Def. 4.12 in <cit.>). Let ℳ be a CGM and let q be a state of ℳ. The branching degree of q, deg(q), is the cardinality of the set of outcome states from q: deg(q) := card({o(q,α⃗) | α⃗ is an action profile available at q}). The regular branching bound of ℳ, denoted rbb(ℳ), is the smallest infinite regular cardinal κ such that κ > deg(q) for every state q of ℳ.

Note that rbb(ℳ) = ω if and only if ℳ is image-finite. If c = (·,q,T,n,x) is a configuration in an unbounded transition game and γ is an ordinal, we use the notation c[γ] := (·,q,T,n,γ,x).

Let ℳ be a CGM, q_0 a state of ℳ, and Φ a path formula. Then rbb(ℳ) is a stable timer bound for g(·,q_0,⟨⟨A⟩⟩Φ).

Suppose first that Eloise has a winning strategy τ in g(ℳ,q_0,⟨⟨A⟩⟩Φ). Let c be any configuration of the form c = (·,A,q,T,n,ii) such that
* c can be reached with τ;
* if Abelard decides to quit seeking at c, then τ instructs Eloise to become the seeker.
We need to find an ordinal γ_0 < rbb(ℳ) for Eloise to announce if she needs to become the seeker at c, and to supplement τ with instructions on lowering the ordinal after every transition while she is a seeker. We will use the instructions given by τ for the verifications and the choices of actions.

Suppose that Abelard quits seeking at c. Let T_g,c be the tree that is formed by all of those paths of configurations, starting from c, in which Eloise stays as the seeker and plays according to τ. Since τ is a winning strategy, every path in T_g,c must be finite, and thus T_g,c is well-founded. We prove the following claim by well-founded induction on T_g,c:

For every c' ∈ T_g,c, there is an ordinal γ < rbb(ℳ) s.t. c'[γ] is a winning position for Eloise.

We choose γ = 0 for every leaf of T_g,c. Suppose then that c' is not a leaf. By the inductive hypothesis, the claim holds for every configuration that can be reached with a transition from c'. We now define γ to be the successor of the supremum of these ordinals. Since rbb(ℳ) is regular, we have γ < rbb(ℳ). Then, there is γ_0 < rbb(ℳ) such that c[γ_0] is a winning configuration for Eloise.

By using Proposition <ref>, it is now easy to show that when the regular branching bound of the given model is used as the timer bound Γ, the Γ-bounded GTS becomes equivalent to the unbounded GTS.

Suppose that Γ ≥ rbb(ℳ). Then the unbounded GTS is equivalent on ℳ to the Γ-bounded GTS.

Suppose first that ℳ,q ⊩ φ. By Proposition <ref> Eloise can win the evaluation game using timers smaller than Γ when being the seeker. Hence clearly ℳ,q ⊩_Γ φ. Suppose then ℳ,q ⊮ φ. By Proposition <ref>, Abelard has a winning strategy in 𝒢(ℳ,q,φ). Thus, by Proposition <ref>, Abelard can win 𝒢(ℳ,q,φ) using timers smaller than Γ when being the seeker. Hence, Abelard clearly has a winning strategy in 𝒢(ℳ,q,φ,Γ) and thus ℳ,q ⊮_Γ φ.

Consequently, finite timers suffice in image-finite models. However, the finitely bounded GTS (with Γ = ω) is not generally equivalent to the unbounded GTS. See the following example.

[Cf. Example 3.7 in <cit.>] Consider the image-infinite concurrent game model ℳ which is displayed in the figure below.
[Figure: the image-infinite CGM ℳ: a state s_0 and states t_0, t_1, t_2, t_3, t_4, …; from s_0 the joint action ⟨0,n+1⟩ leads to t_n, from each t_{n+1} the joint action ⟨0,0⟩ leads to t_n, and t_0 has a ⟨0,0⟩ loop; p holds at t_0.]

Here we clearly have ℳ,s_0 ⊩ ⟨⟨1⟩⟩◇p, since every path from s_0 will eventually reach the state t_0 where p is true. However, ℳ,s_0 ⊮_ω ⟨⟨1⟩⟩◇p since for any value n < ω for the timer, chosen by Eloise, Abelard can choose n for the first action of agent 2, and then it will take n+1 rounds to reach t_0.

Because rbb(ℳ) = ℵ_1 (equal to 2^ℵ_0 if we assume the continuum hypothesis), by Corollary <ref> we have ℳ,s_0 ⊩_ℵ_1 ⟨⟨1⟩⟩◇p. However, in this particular model, we also have ℳ,s_0 ⊩_ω+1 ⟨⟨1⟩⟩◇p since Eloise can win the game by first choosing ω for the value of the timer and then lowering its value to the n < ω which corresponds to the action which Abelard first chooses for agent 2.

§.§ Regular strategies

Here we define the notion of a regular strategy, which will be important for the proofs later in this paper. We define this concept only for Eloise and only for the transition games in which Eloise is the verifier. This suffices for our needs, but the definition—and the related Lemma <ref>—could easily be generalized for both players and all kinds of transition games.

A strategy τ for Eloise in a transition game g(E,q,⟨⟨A⟩⟩Φ) is regular if the following properties hold:
(i) τ instructs Eloise to make all the claims which are valid (by the respective semantics). Moreover, τ instructs Eloise to challenge all the claims which Abelard makes. (Note that this latter condition is safe for Eloise since she is given the chance to make every claim first and thus, by the first condition, Abelard can only make claims which are false.)
(ii) τ instructs Eloise to try to end the game (by ending her seeker turn or by not taking a new seeker turn) always when the truth function T has winning values for Eloise—that is, she would have a winning strategy from the exit location if Abelard did not want to continue as a seeker.
(iii) The actions chosen by τ (for the agents in A) are independent of the current seeker and of the seeker turn counter n ∈ ℕ in configurations.

Note that the conditions (i)–(iii) together imply that all the actions chosen by a regular strategy are independent of the current seeker and the seeker turn counter n ∈ ℕ in configurations. Hence, the actions chosen by a regular strategy depend only[The parameter x and all the other information that should be encoded in the configurations (see Remark <ref>) are only used for describing the current sub-phase of the game.
Hence, it is easy to see that the players' strategies cannot depend on these parameters.] on the pairs (q,T), where q is the current state and T is the current truth function. Also note that since, by (i), Eloise makes all the valid verifications and falsifications, the truth function T is always determined by the path that has been formed in the transition game.

The following lemma shows that from now on we may assume all winning strategies to be regular. Since regular strategies depend only on the states and the truth function, the additional parameters (the current seeker and the counter n) cannot be used for “signalling” any information for τ.

If Eloise has a winning strategy in a transition game g(E,q,⟨⟨A⟩⟩Φ), then she has a regular winning strategy in that game.

Suppose that Eloise has a winning strategy τ in g(E,q,⟨⟨A⟩⟩Φ). We first note that, for checking the regularity conditions (i)–(iii), it suffices that we only consider the configurations that can be reached with the strategy of Eloise. This is because we can choose arbitrary actions for all the other configurations in order to satisfy the regularity conditions. We make the strategy τ regular by doing the following modifications (in the given order).

* If τ does not satisfy the regularity property (i), then we simply first modify it so that Eloise makes all the claims which are true; it is clear that we end up in a winning exit location for Eloise if Abelard challenges these new claims. Moreover, we then redefine τ to challenge all the claims made by Abelard; since all of these claims must now be false, it follows from the determinacy of evaluation games that every challenge by Eloise leads into an exit location which is winning for her. After these modifications, τ is still a winning strategy and it now satisfies the regularity property (i). Let c = (·,q,T,i) be a configuration that can be reached with τ and in which τ does not instruct Eloise to challenge some claim φ that Abelard can make (if Abelard claims that some formula ψ is false, then here φ = ψ). If (A,q,φ) is a winning location for Eloise, then we can redefine τ such that it instructs Eloise to challenge the claim φ. Suppose then that (A,q,φ) is not a winning location for Eloise, whence by determinacy, (E,q,φ) is a winning location for Eloise. We then redefine τ in such a way that Eloise makes the claim φ by herself. If Abelard then challenges this claim, the exit position will be a winning location for Eloise. We do this modification for all configurations for which τ violates the regularity property (i).

* Let c = (·,q,T,n,ii) be a configuration that can be reached with τ so that (E,q,Φ,T) is a winning location for Eloise, but τ does not instruct Eloise to try to end the transition game at c. We then redefine τ to instruct Eloise to try to end the game at c. If Abelard also wants to end the game, then we reach a winning exit location for Eloise. If Abelard does not want to end the game, then the game continues from a configuration c' that must be winning for Eloise. We can then modify τ in such a way that it is a winning strategy from c'. Moreover, we can do this while maintaining the regularity conditions (i) and (ii)—we simply do the same modifications as above for all new configurations that violate these regularity conditions. After doing the procedure above for all configurations for which τ violates the regularity property (ii), τ satisfies the properties (i) and (ii).
* In order to satisfy the regularity condition (iii), we will first modify τ in various ways and then show that the modified strategy satisfies the condition (iii). Since τ already satisfies the conditions (i) and (ii), it will then be regular. Suppose first that c = (A,q,T,n,iii) is a winning configuration for Eloise, but T is not winning for Eloise (in the boolean game that potentially follows). Let c' = (E,q,T,n-1,iii). Since Abelard could have ended his seeker turn at (A,q,T,n,ii), it now follows that c' must be a winning configuration for Eloise. We then modify τ in such a way that it makes the same choice at c' and c (we can do that while maintaining the regularity conditions (i) and (ii) by doing the modifications above—if necessary). We do these modifications for all configurations c of this type.

We then do the following procedure for every integer n ≤ |𝐴𝑡(Φ)|, beginning from n = |𝐴𝑡(Φ)|. Let c_n = (·,q,T,n,iii) be a configuration that can be reached with τ. Let n' ≤ |𝐴𝑡(Φ)| be the largest integer such that c_n' = (·,q,T,n',iii) can be reached with τ. We redefine τ at c_n in such a way that it selects the same actions as at c_n'. We continue this modification in such a way that, when playing from c_n, we can only reach configurations of the same form as those that can be reached from c_n', the only difference being the value of the seeker alternation counter. Now all the exit locations that can be reached by using τ from c_n must be winning for Eloise. Since the truth function can be updated at most |𝐴𝑡(Φ)| many times and, by condition (ii), T gets updated after every seeker alternation, it is impossible that Eloise would now lose the game because the seeker turn counter becomes zero. Hence τ is still a winning strategy after these modifications.

We observe that by doing the procedure above for every n ≤ |𝐴𝑡(Φ)| (starting from the highest values) and for every configuration c_n, we finally obtain a winning strategy that is completely independent of the seeker turn counter. Also note that, by applying this procedure, we also maintain the regularity conditions (i) and (ii) for τ.

To prove that the actions chosen by τ for A are now independent of both the seeker and the seeker turn counter n, suppose for the sake of contradiction that τ assigns different actions for A in configurations c = (P,q,T,n,iii) and c' = (P',q,T,n',iii) such that c ≠ c' and both c and c' can be reached with τ. Since τ is independent of the seeker turn counter, we must have P ≠ P'. By symmetry we may assume that P = E and P' = A.

Suppose first that (E,q,Φ,T) is a winning position for Eloise. Now, by the condition (ii), τ instructs Eloise to end her seeker turn at (E,q,T,n,ii), and thus the configuration c cannot be reached with τ. Suppose then that (E,q,Φ,T) is not a winning position for Eloise. Recall that we have defined τ to make the same choice at c' as at the configuration c'' = (E,q,T,n'-1,iii). But this is impossible, since τ is independent of the seeker turn counter and that is the only parameter that separates the configurations c and c''.

By doing all the modifications above, τ becomes a regular strategy. Since it remains a winning strategy for Eloise even after all these modifications, Eloise thus has a regular winning strategy in g(E,q,⟨⟨A⟩⟩Φ).

Regular strategies will play an important role in the next section, where we prove the equivalence of the GTS and the standard compositional semantics for ATL^+.
This is because a regular strategy of Eloise in a transition game for ⟨⟨A⟩⟩Φ can be used in a straightforward way for formulating a collective strategy S_A for the coalition A (and vice versa).

§ GTS VS COMPOSITIONAL SEMANTICS FOR ATL^+

In this section we show that our game-theoretic semantics is equivalent to the standard (perfect-recall) compositional semantics of ATL^+. From the results of the previous section it follows that this equivalence holds both for the unbounded GTS and for the bounded GTS with a stable timer bound.

We begin with some preliminary definitions. We first define a so-called finite path semantics, to be used later. See <cit.> for a similar definition. We define the length len(λ) of a finite path λ as the number of transitions in λ (whence the last state of λ is λ[len(λ)]). If λ is a prefix sequence of λ', we write λ ≼ λ'.

Let ℳ be a CGM and λ a finite path in ℳ. Truth of a path formula Φ of ATL^+ on λ is defined as follows:
* ℳ,λ ⊨ φ iff ℳ,λ[0] ⊨ φ (where φ is a state formula).
* ℳ,λ ⊨ ◯φ iff len(λ) ≥ 1 and ℳ,λ[1] ⊨ φ.
* ℳ,λ ⊨ ¬Φ iff ℳ,λ ⊭ Φ.
* ℳ,λ ⊨ Φ∨Ψ iff ℳ,λ ⊨ Φ or ℳ,λ ⊨ Ψ.
* ℳ,λ ⊨ φ𝒰ψ iff there exists some i ≤ len(λ) such that ℳ,λ[i] ⊨ ψ and ℳ,λ[j] ⊨ φ for all j < i.

Let ℳ be a CGM, Λ an infinite path in ℳ, and Φ a path formula of ATL^+. An index i ≥ 1 is a truth-swap point of Φ on Λ if either of the following holds:
* ℳ,Λ[i-1,∞) ⊨ Φ and ℳ,Λ[i,∞) ⊭ Φ, or
* ℳ,Λ[i-1,∞) ⊭ Φ and ℳ,Λ[i,∞) ⊨ Φ.
(Above the notation Λ[i,∞) denotes the infinite path (Λ[i],Λ[i+1],…).) We define the truth-swap number of Φ on Λ to be

𝑇𝑆𝑁(Φ,Λ) := card({i | i is a truth-swap point of Φ on Λ}).

The claims of the following lemma are easy to prove. Similar observations have been made in <cit.>.

Let ℳ be a CGM, Λ an infinite path in ℳ, and Φ a path formula of ATL^+. Now, the following claims hold:
* 𝑇𝑆𝑁(Φ,Λ) ≤ |{Ψ ∈ 𝐴𝑡(Φ) | Ψ is a temporal subformula}|.
* ℳ,Λ ⊨ Φ iff there is some k ∈ ℕ s.t. ℳ,λ ⊨ Φ for every finite λ ≼ Λ for which len(λ) ≥ k.

The unbounded GTS is equivalent to the standard (perfect-recall) compositional semantics of ATL^+.

We prove by induction on state formulae φ that for any CGM ℳ and a state q in ℳ: ℳ,q ⊨ φ iff Eloise has a winning strategy in 𝒢(ℳ,q,φ). If φ is a proposition symbol, then the claim holds trivially.

Let φ = ¬ψ and suppose first that ℳ,q ⊨ ¬ψ, i.e. ℳ,q ⊭ ψ. By the inductive hypothesis Eloise does not have a winning strategy in 𝒢(ℳ,q,ψ). Since evaluation games are determined, Abelard has a winning strategy in 𝒢(ℳ,q,ψ). Thus, Eloise has a winning strategy in 𝒢(ℳ,q,¬ψ). Suppose then that Eloise has a winning strategy in the evaluation game 𝒢(ℳ,q,¬ψ). Then Eloise cannot have a winning strategy in 𝒢(ℳ,q,ψ). Hence, by the inductive hypothesis, ℳ,q ⊭ ψ, i.e. ℳ,q ⊨ ¬ψ.

Let φ = ψ∨θ and suppose that ℳ,q ⊨ ψ∨θ, i.e. ℳ,q ⊨ ψ or ℳ,q ⊨ θ. Suppose first that ℳ,q ⊨ ψ, whence by the inductive hypothesis Eloise has a winning strategy in 𝒢(ℳ,q,ψ). Now Eloise can win 𝒢(ℳ,q,ψ∨θ) by choosing ψ on the first move. The case when ℳ,q ⊨ θ is analogous. Suppose now that Eloise has a winning strategy in the evaluation game 𝒢(ℳ,q,ψ∨θ). Let χ ∈ {ψ,θ} be the disjunct that Eloise chooses when following her winning strategy. Now Eloise must have a winning strategy in 𝒢(ℳ,q,χ) and thus by the inductive hypothesis ℳ,q ⊨ χ. Therefore ℳ,q ⊨ ψ∨θ.

Finally, let φ = ⟨⟨A⟩⟩Φ. It suffices to show that Eloise has a winning strategy in the (unbounded) transition game g(E,q,⟨⟨A⟩⟩Φ) if and only if the coalition A has a (perfect-recall) strategy S_A such that ℳ,Λ ⊨ Φ for every Λ ∈ out(q,S_A).
The cases (a) and (b) which follow correspond to the two directions of this equivalence.

(a) Suppose first that Eloise has a winning strategy τ in the transition game g(E,q,⟨⟨A⟩⟩Φ). By Lemma <ref> we may assume that τ is regular. Let T_g be the game tree that is formed by all of those configurations that can be encountered with τ. We define S_A by using the actions given by τ for every finite path of states that occurs in consecutive configurations in T_g. The actions for all other finite paths are irrelevant.

In order to show that S_A is well-defined this way, let λ, λ' be finite branches of configurations in T_g such that the same states occur, in the same order, in the configurations of λ and λ'. Let c = (·,q,T,n,iii) and c' = (·,q,T',n',iii) be the last configurations of λ and λ', respectively. It suffices to show that τ assigns the same actions for A in both c and c'. Since λ and λ' have visited the same states, by the regularity condition (i), we must have T = T'. Therefore, by the regularity condition (iii), τ assigns the same actions for c and c'.

Let Λ ∈ out(q,S_A), whence the states of Λ occur in some infinite tuple of configurations in T_g. In the (infinite) play of g(E,q,⟨⟨A⟩⟩Φ) that corresponds to Λ, Eloise does only finitely many verifications and cannot stay as a seeker for infinitely many rounds (since τ is a winning strategy). Let k ∈ ℕ be such that Eloise neither does any further verifications nor becomes a seeker after the state Λ[k]. Let λ_0 ≼ Λ be a finite path such that |λ_0| ≥ k. We can show, by induction on those subformulae of Φ that contain some relative atom, that if a position of the form (P,λ_0[l],Ψ,T) with such a subformula Ψ can be reached by using τ, then the following holds:

ℳ,λ_0 ⊨ Ψ iff P = E.

* The cases Ψ = φ (a state formula) and Ψ = ◯φ are easy to prove.
* Let Ψ = ψ𝒰θ and suppose first that P = E. Since τ is a regular winning strategy, there must be i ≤ k s.t. Eloise verifies ψ𝒰θ at λ_0[i]. If Abelard challenged Eloise's claim, the evaluation game would have continued from the position (E,λ_0[i],θ,T). By the (outer) inductive hypothesis we have ℳ,Λ[i] ⊨ θ. Let then j < i. Now Abelard could have attempted to falsify ψ at Λ[j], whence Eloise must have challenged, since τ is a regular winning strategy. Then the evaluation game would have continued from the position (E,Λ[j],ψ,T) and thus by the (outer) inductive hypothesis ℳ,Λ[j] ⊨ ψ. Thus we have shown that ℳ,λ_0 ⊨ ψ𝒰θ.

Suppose now that P = A. We also suppose, for the sake of contradiction, that ℳ,λ_0 ⊨ ψ𝒰θ. Now there is i ≤ k such that ℳ,λ_0[i] ⊨ θ. Had Abelard verified θ at λ_0[i], Eloise would have lost, by the (outer) inductive hypothesis. Hence Eloise should have falsified ψ𝒰θ at some state λ_0[j], where j < i. But then by the (outer) inductive hypothesis we must have ℳ,λ_0[j] ⊭ ψ, which is a contradiction.
* Suppose that Ψ = ¬Θ. The next position of the evaluation game is (P',λ_0[l],Θ,T), where P' is the opponent of P, and thus by the (inner) inductive hypothesis, ℳ,λ_0 ⊨ Θ iff P = A. Hence, we have ℳ,λ_0 ⊨ ¬Θ iff P = E.
* The case Ψ = Θ_1∨Θ_2 is proven similarly to the previous case.

Abelard is the seeker at the last state λ_0[m] of λ_0 and may attempt to end the transition game at λ_0[m]. By our assumption Eloise does not become a seeker, and thus the evaluation game is continued from (E,λ_0[m],Φ,T) for some T. By the induction proof above, we must have ℳ,λ_0 ⊨ Φ. Hence, by Lemma <ref> we have ℳ,Λ ⊨ Φ.

(b) Suppose then that there is a joint (perfect-recall) strategy S_A such that ℳ,Λ ⊨ Φ for every Λ ∈ out(q,S_A). We define a perfect-recall strategy τ for Eloise as follows. Suppose that the game is at some configuration c that is reached with a finite path λ_0 such that q_0 is the last state of λ_0.
* If ℳ,q_0 ⊨ θ for some ψ𝒰θ ∈ At(Φ), then Eloise claims that θ is true.
* If ℳ,q_0 ⊭ ψ for some ψ𝒰θ ∈ At(Φ), then Eloise claims that ψ is false.
* Suppose that q_0 = λ_0[0] and ψ ∈ 𝐴𝑡(Φ) is a state formula. If ℳ,q_0 ⊨ ψ, then Eloise claims that ψ is true.
* Suppose that q_0 = λ_0[1] and ◯ψ ∈ 𝐴𝑡(Φ). If ℳ,q_0 ⊨ ψ, then Eloise claims that ◯ψ is true.
* If Abelard makes any claim on the truth of formulae, Eloise always challenges those claims. (Note here that Abelard's claim must be false—according to the compositional truth condition—since otherwise Eloise would already have made the same claim by herself.)
* If Eloise is the seeker at c and ℳ,λ_0 ⊨ Φ, then Eloise decides to end her seeker turn.
* If Abelard ends the seeking at c and ℳ,λ_0 ⊭ Φ, then Eloise decides to become the seeker. Otherwise, Eloise ends the transition game at c.
* If Eloise needs to choose actions for the agents in the coalition A at c, she chooses them according to S_A(λ_0).

We show by (co)induction on the configurations of the transition game g(E,q,⟨⟨A⟩⟩Φ) that when Eloise uses τ she cannot end up in a losing ending position.

* Let c = (E,q',T,n,i). Since the verifications and challenges are made according to the compositional semantics at the current state, Eloise has a winning strategy from any possible exit position by the (outer) inductive hypothesis.
* Let c = (E,q',T,n,ii). By Lemma <ref> and the definition of τ, the transition game can only end when ℳ,λ_0 ⊨ Φ. Hence from the exit position (E,q',Φ,T), Eloise can play in such a way that for any position (P,q',Ψ,T) that is reached, the following condition holds: ℳ,λ_0 ⊨ Ψ iff P = E, where Ψ is a subformula of Φ such that there is φ ∈ 𝐴𝑡(Φ) which is a subformula of Ψ. Eventually, a location of the form (P,q',φ,T) is reached, where φ ∈ 𝐴𝑡(Φ). Since the verifications by τ are made according to the compositional truth of the relative atoms of Φ, it is easy to see that (P,q',φ,T) is a winning position for Eloise.
* Let c = (E,q',T,n,iii). This configuration does not lead to any exit locations.

Since Eloise chooses actions for the agents in A according to S_A, every path of states that is formed with τ is a prefix sequence of some path Λ ∈ out(q,S_A). Since ℳ,Λ ⊨ Φ for every Λ ∈ out(q,S_A), by Lemma <ref> and the definition of τ, Eloise cannot stay as a seeker forever when playing with τ. If Abelard stays as a seeker forever, then Eloise wins. Hence, τ is a (perfect-recall) winning strategy for Eloise. Since unbounded transition games are positionally determined, there is also a positional winning strategy τ' for Eloise.

By combining Theorem <ref> and Corollary <ref>, we immediately obtain the following corollary: If Γ ≥ rbb(ℳ), then the Γ-bounded GTS is equivalent on ℳ with the standard (perfect-recall) compositional semantics of ATL^+.

§ MODEL CHECKING ATL^+ USING GTS

Here we apply the GTS to model checking problems for ATL^+ and its fragments.

§.§ Revisiting the PSPACE upper bound proof

As mentioned earlier, the PSPACE upper bound proof for the model checking of ATL^+ in <cit.> contains a flaw. Indeed, the claim of Theorem 4 in <cit.> is incorrect, and a counterexample to it can be extracted from our Example <ref>, where ℳ,q_0 ⊨ φ for φ = ⟨⟨a_2⟩⟩(◻p_1 ∨ ◇p_2). In the notation of <cit.>, since |St_ℳ| = 3 and 𝒜𝒫ℱ(φ) = 2, by the claim there must be a 6-witness strategy for the agent a_2 for (ℳ,q_0,◻p_1 ∨ ◇p_2). However, this is not the case, since the agent a_1 can choose to play β four times at q_0, and then α. Then ℳ,Λ^6 ⊭ ◻p_1 ∨ ◇p_2 for any resulting path Λ.
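To make the failure of the 6-witness claim concrete, the following small Python sketch (our own illustration; the state and action names, and the reading of the temporal objective as ◻p_1 ∨ ◇p_2, are our assumptions) replays the transition table of Example <ref> against the adversarial choice of a_1 described above.

    # Transition table of the model ℳ from Example <ref>; joint actions
    # are written as (action of agent a_1, action of agent a_2).
    outcome = {
        ('q0', ('b', 'a')): 'q0',   # o(q_0, βα) = q_0
        ('q0', ('a', 'a')): 'q1',   # o(q_0, αα) = q_1
        ('q1', ('a', 'b')): 'q1',   # o(q_1, αβ) = q_1
        ('q1', ('a', 'a')): 'q2',   # o(q_1, αα) = q_2
        ('q2', ('a', 'a')): 'q2',   # o(q_2, αα) = q_2
    }
    valuation = {'p1': {'q0'}, 'p2': {'q2'}}

    # a_1 plays β four times at q_0 and then α; a_2's action at q_0 is forced.
    moves_of_a1 = ['b', 'b', 'b', 'b', 'a']
    path = ['q0']
    for m in moves_of_a1:
        path.append(outcome[(path[-1], (m, 'a'))])

    print(path)  # ['q0', 'q0', 'q0', 'q0', 'q0', 'q1']: the 6-state prefix
    assert all(q not in valuation['p2'] for q in path)  # ◇p_2 not witnessed
    assert path[-1] not in valuation['p1']              # ◻p_1 already falsified

Within the six-state prefix neither disjunct is witnessed, so no 6-witness strategy can exist even though φ holds at q_0.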
The reason for the problem indicated above is that the compositional semantics easily ignores the role and power of the falsifier (Abelard) in the formula evaluation process. Still, using the GTS introduced above, we will demonstrate in a simple way that the upper bound result is indeed correct. The input to the model checking problem of ATL^+ is an ATL^+ formula φ, a finite CGM ℳ and a state q in ℳ. We assume that ℳ is encoded in the standard way (cf. <cit.>) that provides a full explicit description of the transition function o. Unlike <cit.>, we do not assume any bounds on the number of proposition symbols or agents in the input. We only consider here the semantics of ATL^+ based on perfect information and perfect-recall strategies.

The ATL^+ model checking problem is PSPACE-complete.

We get the lower bound directly from <cit.>, so we only prove the upper bound here. By Theorem <ref> and Proposition <ref>, if ℳ is a finite CGM, we have ℳ,q ⊨ φ iff Eloise has a positional winning strategy in 𝒢(ℳ,q,φ,N) with N = |St|·|φ|. It is routine to construct an alternating Turing machine TM that simulates 𝒢(ℳ,q,φ,N) such that the positions for Eloise correspond to existential states of TM and Abelard's positions to universal states. Due to the timer bound N, the machine runs in polynomial time. It is clear that if Eloise has a (positional or not) winning strategy in the evaluation game, then TM accepts. Conversely, if TM accepts, we can read from the computation tree (with only one successful move for existential states recorded everywhere) a non-positional winning strategy s for Eloise which demonstrates that TM accepts. By Proposition <ref>, Eloise thus also has a positional winning strategy in the evaluation game.

We use s to generate a (finite) pruned game tree T_s such that each position where Eloise makes a choice has a single successor position provided by s, and each position where Abelard makes a choice has as successors all possible successor states. If some position l of T_s occurs more than once on a branch of T_s, then the subtree whose root is the instance of l closest to the root on that branch is replaced by the subtree whose root is the last instance of l on that branch. This process is repeated until the resulting tree encodes a deterministic positional winning strategy for Eloise. Since APTIME = PSPACE, the claim follows.

§.§ A hierarchy of tractable fragments of ATL^+

We now identify a natural hierarchy of tractable fragments of ATL^+. Let k be a positive integer. Define ATL^k to be the fragment of ATL^+ where all formulae ⟨⟨A⟩⟩Φ have the property that |𝐴𝑡(Φ)| ≤ k. Note that ATL^1 is essentially the same as ATL (with Release). Note also that the number of non-equivalent formulae of ATL^k is not bounded for any k, even in the special case where the number of propositions and actions is constant, because the nesting of strategic operators ⟨⟨A⟩⟩ is not limited. Still, we will show that the model checking problem for ATL^k is PTIME-complete for any fixed k. Again, CGMs are encoded explicitly and no restrictions on the number of propositions or actions are assumed. (In fact, a certain implicit encoding of CGMs leads to Δ^P_3-completeness <cit.>.) With the GTS fully developed and in place, the following theorem is now actually straightforward to prove. This demonstrates the potential advantages of the GTS.
We first observe that when φ ∈ ATL^k and ℳ is finite, the number of all positions in an unbounded evaluation game 𝒢(ℳ,q,φ) is polynomial with respect to the sizes of ℳ and φ (see Remark <ref> for all the information that should be encoded in a position). The same holds for bounded evaluation games. Note that, in general, for φ' ∈ ATL^+ the number of positions in an evaluation game could be exponential with respect to φ', since the total number of truth functions T grows exponentially with the number of relative atoms.

For any fixed k ∈ ℕ, the model checking problem for ATL^k is PTIME-complete.

The claim is well-known for ATL (see <cit.>), so we have the lower bound for free, for any k. One possible proof strategy for the upper bound would involve using alternating LOGSPACE machines, but here we argue via Büchi games instead. See the details of the reduction of unbounded evaluation games to Büchi games in the technical report <cit.> (the proof of Proposition <ref>). Consider a triple (ℳ,q,φ), where φ ∈ ATL^k. By the proof of Proposition <ref>, there exists a Büchi game BG such that Eloise wins the unbounded evaluation game 𝒢(ℳ,q,φ) iff she wins BG from the state of BG that corresponds to the beginning position of the evaluation game. We then observe that since we are considering ATL^k for a fixed k, the domain size of each truth function T used in the evaluation game is at most k, and thus the number of positions in 𝒢(ℳ,q,φ) is polynomial in the size of the input (ℳ,q,φ). (Cf. Remark <ref> for all the information that should be encoded in a position in bounded evaluation games; here we only use the simpler unbounded games.) Thus also the size of BG is polynomial in the input size. We note that, in order to avoid blow-ups, it is essential that the maximum domain size k of the truth functions T is fixed. We also note—as mentioned already in <cit.>—that the number of transitions in ℳ is not bounded by the square of the number of states of ℳ. In fact, because we impose no limit (other than finiteness) on the number of actions in ℳ, the number of transitions in relation to states is arbitrary. However, this is no problem for us since an explicit encoding of ℳ—which lists all transitions explicitly—is part of the input to the model checking problem. Since Büchi games can be solved in PTIME, the claim follows.

Note that the above proof provides more than just a reduction of unbounded evaluation games on finite models to Büchi games. It shows that such evaluation games essentially are Büchi games.

§ BOUNDED MEMORY SEMANTICS FOR ATL^k

Strategies with bounded memory in concurrent game models can be naturally defined using deterministic finite state transducers (or Mealy machines). For a transducer-based definition of bounded memory strategies, see e.g. <cit.>, and see <cit.> for more on this topic. Using such strategies, an agent's moves are determined both by the current state in the model and by the current state (memory cell) of the agent's transducer. Then transitions take place both in the model and in the state space of the transducer, thus updating the agent's memory. So, such strategies are positional with respect to the product of the two state spaces. In the compositional m-bounded memory semantics (⊨^m) for ATL^+, agents are allowed to use at most m memory cells, i.e., strategies defined by transducers with at most m states.
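To make the transducer view concrete, here is a minimal Python sketch of an m-bounded memory strategy for a single agent (our own illustration, not taken from the cited definitions; the choice to update the memory cell after emitting the action is one of several equivalent conventions):

    class MealyStrategy:
        """A bounded memory strategy: the chosen action depends on the pair
        (memory cell, current model state), and the memory cell is then
        updated from the observed model state. The number of cells is the
        memory bound m."""

        def __init__(self, initial_cell, action_of, update):
            self.cell = initial_cell
            self.action_of = action_of  # dict: (cell, model state) -> action
            self.update = update        # dict: (cell, model state) -> next cell

        def act(self, model_state):
            action = self.action_of[(self.cell, model_state)]
            # Memory update; missing entries leave the cell unchanged.
            self.cell = self.update.get((self.cell, model_state), self.cell)
            return action

    # A 1-cell instance is just a positional strategy:
    positional = MealyStrategy('c0', {('c0', 'q0'): 'a', ('c0', 'q1'): 'b'}, {})

The strategy is thus positional with respect to the product of the model's state space and the cell space, as described above.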
§.§ An upper bound for the number of memory cells

Since the use of the truth function T in our GTS is analogous to the use of memory cells in the m-bounded memory semantics, we obtain the following result.

For ATL^k, the unbounded GTS is equivalent to the m-bounded memory semantics for m = 3^k - 2^k.

Let m := 3^k - 2^k and φ ∈ ATL^k. We show that ℳ,q ⊩ φ iff ℳ,q ⊨^m φ. The implication from right to left is immediate by Theorem <ref>. We prove the other direction by induction on φ. The only interesting case is when φ = ⟨⟨A⟩⟩Φ. Suppose that Eloise has a winning strategy τ in g(E,q,⟨⟨A⟩⟩Φ). By Lemma <ref> we may assume that τ is regular.

We define a memory transducer 𝒯 that Eloise can use to define strategies for all agents in A. We fix the set of states C of 𝒯 to be the set of all truth functions T for 𝐴𝑡(Φ) such that T(χ) = undefined for at least one χ ∈ 𝐴𝑡(Φ). Since T(χ) ∈ {⊤, ⊥, undefined}, we have |C| ≤ 3^k - 2^k = m. The initial state of 𝒯 is T_0, where T_0(χ) = undefined for every χ ∈ 𝐴𝑡(Φ). The transitions in 𝒯 are defined according to how Eloise updates the truth function T during the transition game. However, when T becomes fully updated (i.e. T(χ) ≠ undefined for every χ ∈ 𝐴𝑡(Φ)), then no further transitions are made, because in this case all relative atoms have been verified/falsified and the truth of Φ on the path is fixed.

Now, the strategy for each agent a ∈ A is defined positionally on C × St as follows: at a state T of 𝒯 and a state q ∈ St of ℳ, the agent a follows the action prescribed by Eloise's winning strategy for the corresponding step phase in the transition game. The strategy for A is now well-defined since τ is regular and thus depends only on the current state and the current truth function. It is now easy to show that ℳ,Λ ⊨ Φ for any path Λ that is consistent with the resulting collective strategy for the coalition A.

By Theorem <ref>, we obtain the following corollary.

For ATL^k, the perfect-recall compositional semantics is equivalent to the (3^k - 2^k)-bounded memory semantics.

This extends the known fact that positional strategies (using 1 memory cell) suffice for the semantics of ATL (which is essentially the same as ATL^1). Moreover, given a formula, there is no need for the full perfect-recall semantics, as we may equivalently apply the bounded memory semantics with a bound that is based on the structure of the formula (“the maximum temporal width”).

By ATL_◇^k we denote the fragment of ATL^k where all the relative atoms are of the form ◇φ, that is, the “temporal objectives” Φ are boolean combinations of reachability objectives.

For ATL_◇^k, the unbounded GTS is equivalent to the m-bounded memory semantics for m = 2^k - 1.

In ATL_◇^k we may modify the rules of the transition games in such a way that relative atoms cannot be falsified by the players (but naturally they can be verified). This is because ◇ψ is interpreted as ⊤𝒰ψ and ⊤ is never false: if a player tried to falsify ⊤𝒰ψ, that player would immediately lose once the other player challenges the claim. With this modification of the rules, there are at most 2^k different truth functions that may appear in the transition games for ATL_◇^k. Moreover, there is only a single truth function that is fully updated. Hence we may define a memory transducer 𝒯 with 2^k - 1 states as in the proof of Theorem <ref> and prove the rest of the claim analogously.

In the next subsection we will show that the result of Theorem <ref> is optimal in the sense that no smaller number of memory cells guarantees an equivalent semantics.
Hence, even for ATL_◇^k, the agents may need exponentially many memory cells with respect to the number of relative atoms.

§.§ A lower bound for the number of memory cells

In this section we will investigate the following simple ATL_◇^k formula:

ξ_k := ⟨⟨a_1⟩⟩Φ_k, where Φ_k := ◇p_1 ∧ ⋯ ∧ ◇p_k.

Note that Φ_k is just a conjunction of reachability goals that agent a_1 needs to fulfill (in any order). Since positional strategies suffice for single reachability objectives, it would be intuitive to think that a_1 needs at most k-1 memory cells in order to achieve Φ_k. This is because a_1 needs to change its positional strategy only when completing some of the reachability objectives.[This can be seen by analyzing our GTS for ATL: note that (1) the strategies in transition games may be assumed to be positional with respect to the truth function; and (2) the truth function for Φ_k can be updated at most k times during the transition game for Φ_k.] However, we will see that the bounded memory strategy of a_1 must potentially use a transducer that has exponentially many states with respect to k. The model that we will use for proving this claim is constructed in the following example.

Let [k] := {1,…,k} and ℳ_k := (Agt, Prop, Act, St, d, o, v) be a CGM, where
* Agt = {a_1,a_2} and Prop = {p_1,…,p_k};
* Act = [k] ∪ (𝒫([k])∖{∅}) ∪ {∗};
* St = {q_0} ∪ {q_i | i ∈ [k]} ∪ {q_B | B ∈ 𝒫([k])∖{∅,[k]}};
* v(p_i) = {q_i} ∪ {q_B ∈ St | i ∈ B} for all p_i ∈ Prop;
* d(q_0,a_1) = 𝒫([k])∖{∅}, d(q_0,a_2) = [k], and d(q,a_i) = {∗} when q ∈ St∖{q_0} and i ∈ {1,2};
* o(q_0,(B,i)) = q_i if i ∈ B, and q_B otherwise; o(q_i,(∗,∗)) = q_0 when i ∈ [k], and o(q_B,(∗,∗)) = q_B when B ∈ 𝒫([k])∖{∅,[k]}.

See the following figure for the model ℳ_k in the special case when k = 3.

[Figure: the model ℳ_3: the central state q_0; the states q_1, q_2, q_3 (where p_1, p_2, p_3 hold, respectively), each returning to q_0; and the looping states q_{1}, q_{2}, q_{3}, q_{1,2}, q_{1,3}, q_{2,3}, where q_B satisfies exactly the p_i with i ∈ B.]

The model ℳ_k can be described as follows: at q_0 the agent
a_1 gets to “announce” any nonempty set B of (indices of) proposition symbols in Prop. Then, depending on the action chosen by the agent a_2, one of the following happens:
* Some proposition symbol p_i with i ∈ B is reached, and then the game returns to q_0. This happens when a_2 chooses i ∈ B, whence a transition is made to q_i and then back to q_0.
* All proposition symbols p_i with i ∈ B are reached, but thereafter no new proposition symbols can be reached. This happens when a_2 chooses some i ∉ B, whence a transition is made to q_B, where the game will loop forever.

We will show that agent a_1 has a (2^k - 1)-bounded memory strategy σ_a_1 which guarantees the truth of Φ_k on every path in out(q_0,σ_a_1). We first define a finite state transducer 𝒯_k as follows:
* The set of states C of 𝒯_k is {c_B | B ∈ 𝒫([k])∖{∅}}. Now |C| = 2^k - 1.
* The initial state of 𝒯_k is c_[k].
* The transitions of 𝒯_k are defined as follows: suppose that the current state of 𝒯_k is c_B for some B ∈ 𝒫([k])∖{∅} and a state q_j is reached for some j ∈ [k]. Now if j ∈ B and B ≠ {j}, then 𝒯_k changes its state to c_B∖{j}. Else, no transition is made.

See the following picture for the transducer 𝒯_k in the special case when k = 3.

[Figure: the transducer 𝒯_3, with states c_{1,2,3} (initial), c_{1,2}, c_{1,3}, c_{2,3}, c_{1}, c_{2}, c_{3}; reading q_j moves from c_B to c_{B∖{j}}, e.g. from c_{1,2,3} via q_3 to c_{1,2}.]

Intuitively, the set B, when it is the index of c_B, denotes the set of indices of those proposition symbols p_i that have not yet been reached. We then define the strategy σ_a_1 simply to select the action B at q_0 when the current state of 𝒯_k is c_B. (The action ∗ is selected elsewhere.) It is easy to see that σ_a_1 is a strategy that satisfies Φ_k on every path.

Note that by using 𝒯_k, the agent a_1 essentially remembers which subset of {p_1,…,p_k} of proposition symbols has already been reached. But a_1 does not have to remember in which order these states have been visited; if the order was remembered as well, then the number of states in 𝒯_k would be the number of k-permutations plus the initial state, resulting in k! + 1 states.

We prove the following lemma for the model ℳ_k constructed in Example <ref>.

ℳ_k,q_0 ⊭^m ξ_k when m < 2^k - 1.

Let σ_a_1 be a strategy for a_1 using a transducer 𝒯 with less than 2^k - 1 states. We will show that there is a path in out(q_0,σ_a_1) on which p_i is not reached for some i ∈ [k]. We first make the following two observations (i) and (ii):
(i) Suppose a_1 chooses some B ∈ 𝒫([k])∖{∅} at q_0 for which i ∉ B for some p_i that has not yet been reached. Now the next state may be q_B, where the play will loop forever. Since q_B ∉ v(p_i), the proposition p_i will never be reached.
(ii) Suppose now that a_1 chooses some B at q_0 for which i ∈ B for some p_i that has already been reached.
Now the next state may be q_i, and thereafter the game returns to q_0. Since p_i is the only proposition symbol that is true at q_i, these transitions did not reach any new proposition symbols.

By the points above, we see that in order to reach all p_i, the agent a_1 has to choose at q_0 a set B which contains the indices of exactly those proposition symbols which have not yet been reached. We denote this behavior of a_1 by (⋆).

Since 𝒯 has less than 2^k - 1 states, and |𝒫([k])∖{∅}| = 2^k - 1, there must be B' ∈ 𝒫([k])∖{∅} which a_1 never chooses at q_0 when following σ_a_1. Supposing that a_1 plays according to (⋆), it may happen that exactly those p_i for which i ∈ [k]∖B' are reached (by visiting the corresponding states q_i (i ∈ [k]∖B') and returning to q_0 after every visit). But in this situation it is no longer possible for a_1 to follow (⋆), and thus it is impossible to reach all p_i for which i ∈ B'.

By Example <ref> and Lemma <ref> we immediately obtain the following corollary.

The perfect-recall semantics for ATL_◇^k is not equivalent to the m-bounded memory semantics for any m < 2^k - 1.

By this result, agents may need an exponential number of memory cells with respect to the number of relative atoms (in the boolean combination). Again, this result holds even in the simple case where Φ is just a conjunction of reachability objectives ◇p. Corollary <ref> also implies that the result of Theorem <ref> is optimal. We leave it open whether the result of Theorem <ref> could be improved.

§.§ Some remarks on the amount of memory needed for a strategy

There are several ways in which memory resources play a role in strategies. Besides the read-only memory needed to encode a strategy, for the execution of that strategy one can distinguish the amounts of memory needed:
(i) to store any possible input of the strategy,
(ii) to compute the value of the strategy function on any given input,
(iii) to execute the strategy in any single play.
Generally, these can be very different. Usually, the first one is taken as the measure of the memory consumption of a strategy in terms of the required input size (i.e., memoryless, bounded memory, unbounded/perfect recall), while the second is usually disregarded and strategies are assumed to be computed by – or even hardwired in – some external devices (“black boxes”). As for the third measure, which involves both the previous two, we are not aware of any explicit consideration of it in the literature. We will make some brief comparative remarks for the case of the bounded memory strategies considered here.

From Corollary <ref> we see that agents may need a strategy transducer with 2^k - 1 memory cells when there are k reachability objectives. This is because a strategy is a global plan of action – or a look-up table – that must take into account all possible plays. However, by observing the use of the truth function in transition games, we see that in every single play of the game only k-1 memory cells need to be used. That is, the finite state transducer needs to visit only k-1 states on every path (cf. Example <ref> and the transducer 𝒯_k); a sketch of this contrast is given below. Thus, the state space of the transducer has to be exponential with respect to the number of reachability objectives, but only a linearly large section of the transducer is actually used in every single play. In fact, the latter is to be expected, in the light of the PTIME complexity of model checking of ATL^k, by Theorem <ref>.
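The following Python sketch (our own illustration, in the notation of Example <ref>) builds the transducer 𝒯_k and shows the contrast just described: the full look-up table has 2^k - 1 cells, yet any single play walks through only a linear number of them.

    from itertools import combinations

    def transducer_T_k(k):
        """States of 𝒯_k: the nonempty subsets B of [k] = {1,...,k}, read
        as the indices of the propositions not yet reached. Reading q_j
        with j in B (and B != {j}) moves from c_B to c_{B minus {j}}."""
        idx = range(1, k + 1)
        cells = [frozenset(c) for r in idx for c in combinations(idx, r)]
        initial = frozenset(idx)

        def update(B, j):
            return B - {j} if j in B and B != frozenset({j}) else B

        return cells, initial, update

    cells, B, update = transducer_T_k(10)
    print(len(cells))        # 1023 = 2^10 - 1 memory cells in total
    visited = {B}
    for j in range(1, 10):   # one play reaching p_1, ..., p_9 in order
        B = update(B, j)
        visited.add(B)
    print(len(visited))      # only 10 of the 1023 cells are ever visited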
This observation suggests that the amount of RAM-type memory used during the play may be a reasonable measure, alternative to the number of states in the transducer encoding the agent's strategy for enforcing or refuting a formula of ATL^+ (and of other related logics). Thus, one could argue that agents actually only need to use a linear amount of memory in ATL^k, supposing they can manage their memory in a more dynamical (“on-the-fly”) way.[This is also justified from the `human perspective', as people can manage to do, say, 10 tasks by remembering what is already done (by remembering at most 9 pieces of information) without any need for exponential memory capacity (which would be 1023 memory cells by Theorem <ref>).] For a better bound on the required memory it would be sufficient to modify the transition games slightly and consider a bound k not on the number of all relative atoms in strategic subformulae, but only on the number of the temporal objectives occurring in them.

§ CONCLUSION

In conclusion, we note that the game-theoretic semantics for ATL^+ developed here has both conceptual and technical importance, as it explains better how the memory-based strategies in the compositional semantics can be generated, and thus it also provides better insight into the algorithmic aspects of that semantics.

We note that a GTS for ATL^+, alternative to the one introduced here, could be obtained via the GTS for coalgebraic fixed point logic <cit.>. However, such a semantics (being designed for more powerful logics) would not directly lead to our GTS that is custom-made for ATL^+, and would thus not directly enable the complexity analysis that we require. Also, that alternative approach would not give a semantics where the construction of finite paths only suffices.

A natural extension of the present work would be to develop a GTS for the full ATL^*. Here the correspondence with Büchi games could be exploited in full.

§.§ Acknowledgements

The work of Valentin Goranko was supported by a research grant 2015-04388 of the Swedish Research Council. The work of Antti Kuusisto was supported by the ERC grant 647289 “CODA”.
http://arxiv.org/abs/1702.08405v2
{ "authors": [ "Valentin Goranko", "Antti Kuusisto", "Raine Rönnholm" ], "categories": [ "math.LO", "cs.GT", "cs.LO", "F.4.1; I.2.11" ], "primary_category": "math.LO", "published": "20170227180712", "title": "Game-Theoretic Semantics for ATL+ with Applications to Model Checking" }
Lomonosov Moscow State University Skobeltsyn Institute of Nuclear Physics (MSU SINP), 1(2) Leninskie gory, GSP-1, 119991, Moscow, Russia; Faculty of Science, Damanhour University, El-Gomhouria St., 22516, Damanhour, El Beheria, Egypt

In this paper we propose a 'knee-like' approximation of the lateral distribution of the Cherenkov light from extensive air showers in the energy range 30-3000 TeV and study the possibility of its practical application in high energy ground-based gamma-ray astronomy experiments (in particular, in TAIGA-HiSCORE). The approximation has a very good accuracy for individual showers and can easily be simplified for practical application in the HiSCORE wide angle timing array under the condition of a limited number of triggered stations.

Parametric Analysis of Cherenkov Light LDF from EAS for High Energy Gamma Rays and Nuclei: Ways of Practical Application

A.Sh.M. Elshoukrofy^1,2 (abeershehatamahmoud@yahoo.com), E.B. Postnikov^1 (evgeny.post@gmail.com), E.E. Korosteleva^1, L.G. Sveshnikova^1 (tfl10@mail.ru), H.A. Motaweh^2

Received date / Accepted date
=====================================================================

§ INTRODUCTION

Ground-based gamma-ray astronomy has developed very fast since the discovery of the first TeV gamma-ray source, the Crab Nebula, in 1989 <cit.> by the Whipple collaboration. This experiment was based on the imaging air Cherenkov technique, IACT <cit.>. The main idea of this technique is the use of a telescope for collecting the Cherenkov light produced in the atmosphere by very energetic charged particles from extensive air showers (EAS). The competing method of Cherenkov light registration for gamma-ray astronomy is a wide angle timing array, which was first realized in THEMISTOCLE <cit.> and AIROBICC <cit.>. At the present time the gamma-ray timing array concept HiSCORE <cit.> (High Sensitivity Cosmic Origin Explorer) is realized in the Tunka valley in Siberia as part of the TAIGA observatory <cit.>. This is an array of wide-angle non-imaging Cherenkov light detectors spaced about a hundred meters apart from each other. The wide angle technique assumes that the air shower front timing is measured by counting the number of Cherenkov photons, Q, emitted by secondary air shower particles and subsequently registered in every i-th station of the array at time t, at a distance R from the shower core: Q_i(R, t). The analysis of Q_i(R, t) gives the possibility to reconstruct all the parameters of the primary particle needed for gamma-ray astronomy: shower arrival direction, core position, energy and type of particle <cit.>. The critical point of the reconstruction procedure is the right choice of the Cherenkov light lateral distribution function (LDF) suitable for every kind of approximation. Up to now various approximations were proposed only for either energies greater than 1000 TeV <cit.> (non-imaging technique) or up to 10 TeV for the imaging telescopes. In the HiSCORE experiment the method of event reconstruction was primarily developed for the Tunka-133 wide angle Cherenkov array <cit.> at energies above the PeV range. A special piecewise fitting function <cit.> (referred to below as the ‘Tunka fit’), composed of 4 pieces, was designed to fit both smooth and sharp LDF. The total number of parameters of all constituent functions was reduced to two, a and b: the first one is a normalization factor, the second one characterizes the steepness of the whole LDF.
The Tunka-133 approximation describes all the LDF for primary cosmic rays in the energy interval 10^15-10^18 eV, for which it was elaborated. Nonetheless, our analysis reveals that the LDF of gamma rays, especially of those incident at large zenith angles, very often cannot be reproduced. In <cit.> we proposed to use a simple 'knee-like' approximation of the Cherenkov light distribution, and tested the quality of these approximations for gamma rays and hadrons in the energy region of interest, 30-3000 TeV. In this paper we mainly study the possibility of applying these functions to the real experimental conditions of the HiSCORE experiment.

§ EVENT PARAMETERS RECONSTRUCTION IN THE HISCORE EXPERIMENT

The HiSCORE array now includes 28 stations spaced with a step of 106 m and covering an area of 0.25 km^2. At present the main steps of event reconstruction are the following <cit.>:

* Selection of events with more than 4–5 triggered stations (the stations where the Cherenkov light pulse exceeds the night sky light background).
* Rough estimation of zenith and azimuth angles of arriving particles (θ, ϕ) using the measured time of delay in every station with a plane approximation of the time front.
* Reconstruction of the shower core position (X_0, Y_0) by minimizing the difference between the experimental Q_i(R_i) and the ‘Tunka fit’, where R_i is the distance between the i-th station and the core position (X_0, Y_0).
* Re-estimation of θ and ϕ with the known core position (X_0, Y_0) using a more appropriate cone-like fitting function for the time front. It gives a significant improvement of the directional resolution from dθ∼1^∘–2^∘ to dθ∼0.1^∘–0.4^∘, because the accuracy of the arrival direction reconstruction depends linearly on that of the shower core position. This point is very important for improving the signal-to-background ratio, because the background flux depends on the observational solid angle as ∼dθ^2.
* Approximation of Q_i(R_i) by the ‘Tunka fit’ function and estimation of the density of photons at a distance of 200 m from the shower core, Q_200, with the corresponding energy estimation by the formula: E = aQ_200^0.94 + b (a and b are constants obtained from simulations).
* Parametric analysis of fitting functions for the estimation of the nature of the primary particle.

As is seen, the LDF fitting functions are used at 4 different steps: shower core reconstruction (3), arrival direction estimation (4), energy estimation (5), and primary particle type identification (6).

§ KNEE-LIKE APPROXIMATION OF THE LATERAL DISTRIBUTION FUNCTION

For parameterization of the lateral distribution function, Q(R), of simulated Cherenkov light we used the function that we call a ‘knee-like approximation’, which was used earlier by J. Hoerandel <cit.> to describe the knee in the cosmic ray spectrum. It was a function of energy in that context, but we make it a function of the distance, R, from the shower axis. It depends on five parameters C, γ_1, γ_2, R_0, and α:

F_appr. = CR^γ_1 (1+(R/R_0)^α)^γ_2/α

In (<ref>) R is the distance from the shower axis, R_0 is the knee position, γ_1 is the slope of the LDF before the knee, (γ_2+γ_1) is the slope of the LDF after the knee, and the parameter α characterizes the sharpness of the knee.
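As a quick illustration, the following Python sketch evaluates the knee-like function (<ref>) and the step-5 energy estimator. This is our own minimal sketch: the numerical values of C, γ_1, γ_2, R_0, α and of the energy constants a, b below are placeholders, not the simulation-derived constants of the actual analysis.

import numpy as np

def knee_ldf(R, C, g1, g2, R0, alpha):
    # F(R) = C * R^g1 * (1 + (R/R0)^alpha)^(g2/alpha), Eq. (1)
    R = np.asarray(R, dtype=float)
    return C * R**g1 * (1.0 + (R / R0)**alpha)**(g2 / alpha)

def energy_estimate(Q200, a_E, b_E):
    # E = a * Q200^0.94 + b, step 5 (constants obtained from simulations)
    return a_E * Q200**0.94 + b_E

# evaluate a smooth LDF between 20 and 500 m and read off Q(200 m)
R = np.linspace(20.0, 500.0, 97)
q = knee_ldf(R, C=1.0e6, g1=-0.3, g2=-2.5, R0=120.0, alpha=4.0)
Q200 = knee_ldf(200.0, 1.0e6, -0.3, -2.5, 120.0, 4.0)
print("Q(200 m) =", round(float(Q200), 1), "(arbitrary normalization)")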
We applied this fitting function to simulated data to check the quality of the new approximation, using the HiSCORE bank of simulated events (from the CORSIKA <cit.> code), where LDF are simulated with a space step of 5 m for different types of primary particles (protons, gammas, He, C, Fe) in the wide energy interval 30-3000 TeV and zenith angle range 0^∘-50^∘. Below we call them ‘true’ LDF. We showed <cit.> that the new approximation gives the possibility to fit the whole diversity of individual LDF for different nuclei and gamma rays at shower core distances of 20–500 m in the energy interval 30–3000 TeV with a very good accuracy. The parameters of the approximation depend on the energy and the type of primary particle and allow us to separate proton and gamma ray induced showers (details will be published in <cit.>). However, the 5-parameter fitting function may not be suitable for the real conditions of registration, when we select events with a small number of triggered stations (4–5) to decrease the energy threshold, especially for the task with two additional unknown parameters (X-core, Y-core). Our study shows that the two-parameter fitting has a distinctive advantage when we work with a small number of triggered detectors; however, it reduces the diversity of LDF to a fixed set of curves. Therefore, different steps of reconstruction require fitting with different numbers of parameters. To decrease the number of fitting parameters, correlations between the parameters were investigated. One example of a strong correlation, between γ_1 and γ_2, is presented in Fig. <ref>. A number of other correlations, including very important correlations of γ_1, R_0, and α with the depth of shower maximum, were also established. Using the correlations between different parameters we developed four- and three-parameter versions of the knee-like fit. The 4-parameter version includes the least squares estimation of only 4 fit parameters: C, γ_1, R_0, and α, while the fifth parameter, γ_2, is obtained from γ_1 by the equation:

γ_2 = -0.77γ_1 - 2

The 3-parameter version additionally sets α to the mean value for the given class of events; a sketch of such a reduced-parameter fit is given below.

§ SHOWER CORE RECONSTRUCTION

We applied our 3- and 4-parameter fitting functions to step 3 of the HiSCORE method of event parameters reconstruction. In Fig. <ref> we present, for illustration, an individual event from a 100 TeV gamma ray detected by 6 stations. In the top panel the station locations are shown and three versions of the core position are depicted: the true one, the one obtained by the center of mass technique, and the one found by our method. In the bottom panel the ‘true’ LDF for this event is plotted together with the 3-parameter fitting function. For this event a simultaneous reconstruction of the core position and the shape of the true LDF looks quite satisfactory. Below we present the statistical analysis of the core position resolution. For that purpose, instead of the lateral distribution function Q(R), we applied the fit to the so-called 'amplitude-distance function' A(R) <cit.>, which is the maximal value of the Cherenkov pulse rather than the total number of photons in the pulse. This variable is more robust and less disturbed by the noise, so it leads to better results in the core position estimation.
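The following Python sketch illustrates the 4-parameter version of the fit described above: only C, γ_1, R_0 and α are free, while γ_2 is tied to γ_1 through the correlation (<ref>). The synthetic 'measured' amplitudes, the station distances and all bounds below are hypothetical stand-ins; in the real analysis the limits on R_0 and the fixed α of the 3-parameter version come from simulations.

import numpy as np
from scipy.optimize import curve_fit

def knee_ldf_4par(R, C, g1, R0, alpha):
    g2 = -0.77 * g1 - 2.0                 # correlation derived from simulation
    return C * R**g1 * (1.0 + (R / R0)**alpha)**(g2 / alpha)

rng = np.random.default_rng(1)
R_st = np.array([45.0, 95.0, 140.0, 210.0, 300.0, 420.0])  # hypothetical station distances, m
truth = knee_ldf_4par(R_st, 5.0e5, -0.4, 130.0, 4.0)
data = truth * rng.normal(1.0, 0.1, size=R_st.size)        # 10% measurement scatter

popt, pcov = curve_fit(
    knee_ldf_4par, R_st, data,
    p0=[1.0e5, -0.5, 100.0, 4.0],
    bounds=([1.0, -3.0, 50.0, 1.0], [1.0e9, 0.0, 400.0, 10.0]),  # constrained minimization
)
print("fitted C, gamma_1, R_0, alpha:", popt)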
Besides, the use of A instead of Q allows us to compare both techniques, the 'Tunka fit' and the knee-like approximation, on the same kind of data, because the former is applied to A(R) <cit.>. As we noted in Section <ref>, the 2-parameter method was found to give the best accuracy for a small number of hit detectors. Therefore, we calculated the averaged distance between the fitted core and the true core position for every number of hit detectors using this technique. The two parameters to be fitted to A(R) by the least squares method are C and R_0. The other two parameters, γ_1 and γ_2, are obtained as functions of R_0 using the correlations between them, which were derived from the simulation (Fig. <ref>). The values of γ_1 and γ_2, as well as the correlation between them, differ from those for Q(R). For the best results, the fit parameter R_0 is searched for only within limits (constrained minimization of the mean squared error of the fit), and the limits are made dependent on the shower angle, since a rough estimation of the angle is already performed at the previous stage (step 2 of Section <ref>). These limits are derived from the simulations and shown in Fig. <ref>, as are the values of the knee sharpness parameter, α, which were also found to be slightly dependent on the shower angle. Finally, in Fig. <ref> we plot the mean accuracy ⟨dR⟩ of the core position determined by the 2-parameter version of the knee-like fit, which gives better accuracy for a small number of hit detectors. The figure also contains the results of the 'Tunka fit'. The averaging was performed for simulated gamma rays and not for cosmic rays (protons or nuclei), whereas the 'Tunka' fit was originally intended for cosmic ray analysis; that is the reason for the difference in the accuracy of the two techniques.

§ CONCLUSION

We proposed and studied a 'knee-like' 5-parameter approximation of the lateral distributions of Cherenkov light emitted by extensive air showers. The knee-like approximation is capable of describing the whole diversity of individual lateral distribution functions for different nuclei and gamma rays at shower core distances of 20-500 m in the energy interval 30-3000 TeV with very good accuracy. Using correlations between different parameters of the knee-like fit we can decrease the number of parameters to 4, 3, or 2. The good quality of these versions of the knee-like fit was tested on simulated data under the conditions of the real experimental setup of the TAIGA-HiSCORE project. An algorithm of shower core reconstruction based on this fit was developed and tested on the same data. This research confirms the ability of the knee-like fit technique to work with real data and its potential for improving the accuracy of event reconstruction in wide angle non-imaging experiments.

The study was supported by the Russian Foundation for Basic Research, project no. 16-29-13035.

[1] T.C. Weekes, M.F. Cawley, D.J. Fegan et al., ApJ, 342, 379-395 (1989)
[2] F. Aharonian, Very High Energy Cosmic Gamma Radiation. A Crucial Window on the Extreme Universe (World Scientific, 2004)
[3] P. Baillon, L. Behr, S. Danagoulian et al., Astropart. Phys., 1, 341 (1993)
[4] A. Karle, M. Merck, R. Plaga et al., Astropart. Phys., 3, 321 (1995)
[5] M. Tluczykont, D. Hampf, D. Horns, L.A. Kuzmichev et al., Astropart. Phys., 56, 42 (2014)
[6] S.F. Berezhnev, N.M. Budnev, M. Buker et al., Bull. Russ. Acad. Sci. Phys., 79(3), 348 (2015)
[7] N.M. Budnev, I.I. Astapov, A.G. Bogdanov et al. (TAIGA collaboration), JINST 9, 09021 (2014)
[8] D. Hampf, M. Tluczykont, D. Horns, Nucl. Instr. Meth. Phys. Res. A 712, 137 (2013)
[9] V.V. Prosin, S.F. Berezhnev, N.M. Budnev et al. (TAIGA collaboration), Nucl. Instr. Meth. Phys. Res. A, 756, 94 (2014)
[10] A.A. Al-Rubaiee, Y. Al-Douri, U. Hashim, Journal of Astrophysics, 2014, 492814 (2014)
[11] A. Mishev, ISRN High Energy Phys., 2012, 906358 (2012)
[12] E. Korosteleva, L. Kuzmichev, V. Prosin et al., Proc. 28th ICRC, 89 (Universal Academy Press, 2003)
[13] V.V. Prosin, S.F. Berezhnev, N.M. Budnev et al. (TAIGA collaboration), EPJ Web Conf., 99, 04002 (2015)
[14] A.Sh.M. Elshoukrofy, E.B. Postnikov, E.E. Korosteleva, L.G. Sveshnikova, H.A. Motaweh, Bull. Russ. Acad. Sci. Phys., 4 (to be published, 2017)
[15] J. Hoerandel, Astropart. Phys. 19, 193 (2003)
[16] D. Heck, J. Knapp, J.N. Capdevielle et al., Report FZKA 6019 (Forschungszentrum Karlsruhe, 1998)
[17] A.Sh.M. Elshoukrofy, E.B. Postnikov, L.G. Sveshnikova, J. Phys.: Conf. Ser. (The 2nd International Conference on Particle Physics and Astrophysics (ICPPA-2016), to be published, 2017)
http://arxiv.org/abs/1702.08390v1
{ "authors": [ "A. Sh. M. Elshoukrofy", "E. B. Postnikov", "E. E. Korosteleva", "L. G. Sveshnikova", "H. A. Motaweh" ], "categories": [ "astro-ph.IM", "astro-ph.HE", "physics.data-an", "85-06" ], "primary_category": "astro-ph.IM", "published": "20170227173511", "title": "Parametric Analysis of Cherenkov Light LDF from EAS for High Energy Gamma Rays and Nuclei: Ways of Practical Application" }
The time-averaged Lyapunov exponents, { λ_i }, support a mechanistic description of the chaos generated in and by nonlinear dynamical systems. The exponents are ordered from largest to smallest, with the largest one describing the exponential growth rate of the (small) distance between two neighboring phase-space trajectories. Two exponents, λ_1 + λ_2, describe the rate for areas defined by three nearby trajectories. λ_1 + λ_2 + λ_3 is the rate for volumes defined by four nearby trajectories, and so on. Lyapunov exponents for Hamiltonian systems are symmetric. The time-reversibility of the motion equations links the growth and decay rates together in pairs. This pairing provides a more detailed explanation than Liouville's for the conservation of phase volume in Hamiltonian mechanics. Although correct for long-time averages, the dependence of trajectories on their past is responsible for the observed lack of detailed pairing for the instantaneous “local” exponents, { λ_i(t) }. The 2017 Ian Snook Prizes will be awarded to the author(s) of an accessible and pedagogical discussion of local Lyapunov instability in small systems. We desire that this discussion build on the two nonlinear models described here, a double pendulum with Hooke's-Law links and a periodic chain of Hooke's-Law particles tethered to their lattice sites. The latter system is the ϕ^4 model popularized by Aoki and Kusnezov. A four-particle version is small enough for comprehensive numerical work and large enough to illustrate ideas of general validity.

Instantaneous Pairing of Lyapunov Exponents in Chaotic Hamiltonian Dynamics and the 2017 Ian Snook Prizes

William Graham Hoover and Carol Griswold Hoover
Ruby Valley Research Institute
HC 60 Box 601
Ruby Valley, NV 89833
December 30, 2023
=====================================================================

§ INTRODUCTION

The elucidation of Hamiltonian chaos and Lyapunov instability by Poincaré and Lorenz is familiar textbook material. Models which capture aspects of complexity, the Logistic and Baker Maps, the Lorenz attractor and the Mandelbrot Set, combine visual appeal with mechanistic understanding in the bare minimum of spatial dimensions, two for maps and three for flows. Mechanical models with only three- or four-dimensional phase spaces are simple enough that the entire phase space can be explored exhaustively. “Small Systems” can augment our understanding of nature in terms of numerical models by introducing more complexity. Just a few more degrees of freedom make an ergodic exhaustive sampling impossible. For the small systems we treat here we take on the more difficult task of defining and analyzing the time-dependent convergence of “typical” trajectories. Chaos involves the exponential growth of perturbations. Joseph Ford emphasized the consequence that the number of digits required in the initial conditions is proportional to the time for which an accurate solution is desired. Accordingly a “typical” nonexhaustive trajectory or history is the best that we can do. To go beyond the simplest models to those which elucidate macroscopic phenomena, like phase transitions and the irreversibility described by the Second Law of Thermodynamics, we like Terrell Hill's idea of small-system studies (in the 1960s he wrote a prescient book, Thermodynamics of Small Systems).
In what follows we describe two small-system models which are the foci of the Ian Snook Prize Problem for 2017. These models are Hamiltonian, both with four degrees of freedom, so that their motions are described in eight-dimensional phase spaces.

§.§ The Springy Pendulum and the Springy Double Pendulum

The double pendulum with rigid links is an excellent model for the table-top demonstration of chaos. Bill saw one in action at an all-day Stanford lecture given by James Yorke. An even simpler mathematical model for chaos can be obtained with a single pendulum. For chaos the single pendulum needs a spring rather than a rigid link. The single springy pendulum moves in a four-dimensional phase space, just as does the double pendulum with rigid links. Along with Harald Posch <cit.> we investigated mathematical models for chaos based on chains of pendula, both rigid and springy. We studied many-body instabilities by characterizing the form of the detailed description of many-dimensional chaos, the Lyapunov spectrum. We considered two kinds of model Hamiltonians describing chains in a gravitational field: [1] chains composed of particles with equal masses, as in a physical length of chain; [2] chains in which only the bottom mass was affected by gravity, as in a light chain supporting a heavy weight. Figure 1 shows five snapshots, equally spaced in time, from a chaotic double-pendulum trajectory. Initially the motionless chain was placed in the horizontal configuration appearing at the top right of Figure 1. If gravity affects only the lower of the two masses (as in the type-2 models supporting a heavy weight) the corresponding Hamiltonian is

H = [p_1^2 + p_2^2]/2 + (κ/2)[(r_1-1)^2 + (r_12-1)^2] + y_2,

where r_1 and r_12 are the lengths of the upper and lower springs. To enhance the coupling between the springs and gravity we choose the force constant κ = 4 here.

§.§ The Spectrum of Time-Averaged Lyapunov Exponents, { λ }

The Lyapunov exponents making up the spectrum are conventionally numbered in the descending order of their long-time-averaged values. We begin with the largest, λ_1. λ_1 describes the long-time-averaged rate at which the distance between the trajectories of two nearby phase-space points increases. That rate, λ_1 ≡ ⟨ λ_1(t) ⟩ ≡ ⟨ (d ln δ/dt) ⟩, is necessarily positive in a chaotic system. A more detailed description of the rates of change of lengths, areas, volumes, and hypervolumes of dimensionality up to that of the phase space itself leads to definitions of additional Lyapunov exponents. The next exponent, λ_2, is needed to describe the rate at which a typical phase-space area, defined by three nearby points, increases (or decreases) with increasing time, λ_1 + λ_2 ≡ ⟨ (d ln A/dt) ⟩ = ⟨ λ_1(t) + λ_2(t) ⟩. Again an average over a sufficiently long time for convergence is required. Likewise the time-averaged rate of change of a three-dimensional phase volume defined by four neighboring trajectories is λ_1 + λ_2 + λ_3. This sequence of rates and exponents continues for the rest of the spectrum. There are D exponents for a D-dimensional phase-space description.
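As a concrete illustration of this model, the following minimal Python sketch (ours, not the authors' FORTRAN) integrates Hamilton's equations for the type-2 Hamiltonian above — unit masses and natural lengths, κ = 4, gravity acting on the lower mass only — starting from the Figure 1 initial condition; the conserved total energy, initially zero, provides the programming check mentioned below.

import numpy as np

KAPPA = 4.0

def deriv(s):
    # s = (x1, y1, x2, y2, px1, py1, px2, py2); unit masses, so dx/dt = p
    x1, y1, x2, y2, px1, py1, px2, py2 = s
    r1 = np.hypot(x1, y1)                  # length of the upper spring
    dx, dy = x2 - x1, y2 - y1
    r12 = np.hypot(dx, dy)                 # length of the lower spring
    f1 = -KAPPA * (r1 - 1.0) / r1          # Hooke's-Law spring forces
    f2 = -KAPPA * (r12 - 1.0) / r12
    return np.array([px1, py1, px2, py2,
                     f1 * x1 - f2 * dx, f1 * y1 - f2 * dy,
                     f2 * dx, f2 * dy - 1.0])   # gravity on the lower mass only

def rk4_step(s, dt):
    k1 = deriv(s); k2 = deriv(s + 0.5 * dt * k1)
    k3 = deriv(s + 0.5 * dt * k2); k4 = deriv(s + dt * k3)
    return s + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def energy(s):
    x1, y1, x2, y2, px1, py1, px2, py2 = s
    r1 = np.hypot(x1, y1); r12 = np.hypot(x2 - x1, y2 - y1)
    return (0.5 * (px1**2 + py1**2 + px2**2 + py2**2)
            + 0.5 * KAPPA * ((r1 - 1.0)**2 + (r12 - 1.0)**2) + y2)

s = np.array([1.0, 0.0, 2.0, 0.0, 0.0, 0.0, 0.0, 0.0])  # Figure 1 initial condition
for _ in range(100_000):                                 # 100 time units, dt = 0.001
    s = rk4_step(s, 0.001)
print("total energy after 100 time units:", energy(s))   # should remain near zero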
§.§ Local and Global Lyapunov-Exponent “Pairing” for Hamiltonian Systems

The time-reversibility of Hamiltonian mechanics implies that all the rates of change change sign if the direction of time is reversed. This suggests, for instance, that all the exponents, { λ } and { λ(t) }, are “paired”, with the rates forward in time opposite to those backward in time. This turns out to be “true” for the long-time-averaged exponents but could be “false” for the local exponents. Local exponents depend upon the recent past history of neighboring trajectories. The global exponents, which describe the growth and decay of the principal axes of comoving hyperellipsoids in phase space, are paired, though the time required to show this through numerical simulation can be long. This exponent pairing is the focus of the 2017 Snook Prize, as we detail in what follows. There is a vast literature describing and documenting the numerical evaluation and properties of Lyapunov spectra. The theoretical treatments are sometimes abstruse and lacking in numerical verification. This year's Prize Problem seeks to help remedy this situation. The numerical foundation for the study of Lyapunov exponents is an algorithm developed by Shimada and Nagashima in Sapporo <cit.> and by Benettin in Italy, along with his colleagues Galgani, Giorgilli, and Strelcyn <cit.>, beginning in the late 1970s. Google indicates hundreds of thousands of internet hits for “Lyapunov Spectrum”. We mention only a few other references <cit.> here. The internet makes these and most of the rest readily available.

§.§ The ϕ^4 Model for Chaos and Heat Conduction in Solids

Aoki and Kusnezov popularized the ϕ^4 model as a prototypical atomistic lattice-based model leading to Fourier heat conduction <cit.>. In addition to a nearest-neighbor Hooke's-Law potential the model incorporates quartic tethers binding each particle to its own lattice site. Here we denote the displacements of the particles from their sites as { q_i }. In our one-dimensional case the spacing between the lattice sites does not appear in the Hamiltonian or in the equations of motion. In numerical work it is convenient to choose the spacing equal to zero while setting the particle masses, the force constants for the pairs, and those for the tethers all equal to unity. For a four-particle problem in an eight-dimensional phase space the three-part Hamiltonian is:

H = ∑_i=1^4 [(p_i^2/2) + (q_i^4/4)] + ∑_4^springs (q_i,j^2/2).

The periodic boundary condition includes the spring linking particles 1 and 4:

q̈_1 = -q_1^3 + q_2 + q_4 - 2q_1; q̈_4 = -q_4^3 + q_1 + q_3 - 2q_4.

See Figure 2 for two ways of visualizing the periodic boundary conditions of the ϕ^4 chain. The energy range over which chaos is observed in the ϕ^4 model spans about nine orders of magnitude <cit.>. The chaotic range for a four-body chain includes the two cases we discuss in the present work, { E = 8, 288; (E/N) = 2, 72 }. With both the springy pendulum and the ϕ^4 models in mind we turn next to a description of their chaotic properties.

§ THE CHAOTIC DYNAMICS OF THE SPRINGY DOUBLE PENDULUM

Like most smoothly-differentiable Hamiltonian systems the double springy pendulum has infinitely many periodic or quasiperiodic phase-space solutions surrounded by a chaotic sea. Dynamics in the sea is exponentially sensitive to perturbations. The dynamics occurs in an eight-dimensional phase space.
Perturbations oriented along the trajectory, or perpendicular to the energy surface where there is no long-time growth at all, give two zeroes, so that the maximum number of nonzero Lyapunov exponents is six. Each positive exponent is necessarily paired with its negative twin, with the two changing roles if the direction of time is reversed. It is often stated that this time-reversible pairing links not only the time-averaged rates of the dynamics, but also the “local” or “instantaneous” rates <cit.>. Because chaotic pendulum problems give different local exponents if Cartesian and polar coordinates are used, one might think that pairing could be hindered by using a mixture of these coordinates. To check on this idea we considered a mixed-coordinate Hamiltonian for the model of Figure 1 with polar coordinates for the “inside” Particle 1:

H = (1/2)[p_r^2 + (p_θ/r)^2 + p_x^2 + p_y^2] + y_2 + (κ/2)[(r-1)^2 + (r_12-1)^2];
r_12 = √( x_2^2 + y_2^2 + r_1^2 - 2r_1x_2 sin(θ_1) + 2r_1y_2 cos(θ_1)); κ = 4.

Formulating and solving the motion equations in mixed Cartesian and polar coordinates is an intricate, error-prone task. It is useful first to solve the problem in Cartesian coordinates. That solution then provides a check for the more complicated mixed-coordinate case. Energy conservation is a nearly-infallible check of the programming. We computed spectra of Lyapunov exponents averaged over one billion fourth-order and one billion fifth-order Runge-Kutta timesteps, dt = 0.001. This ensures that the numerical truncation errors of order (dt^5/120) or (dt^6/720) are of the same order as the double-precision roundoff error. We chose the initial condition of Figure 1 with both masses motionless at the support level, { x_1, y_1, x_2, y_2 } = { 1, 0, 2, 0 }, so that the initial potential, kinetic, and total energies all vanished. Only the outer Cartesian mass interacts with the gravitational field. The simplest numerical method for obtaining Lyapunov spectra <cit.> is first to generate a D-dimensional “reference trajectory” in the D-dimensional phase space. Then a set of D similar “offset” trajectories, an infinitesimal distance away, δ, are generated in the same space with numerical offset vectors of length δ = 0.00001 or 0.000001. While advancing the resulting D(D+1) D-dimensional differential equations the local Lyapunov exponents are obtained by “Gram-Schmidt” orthonormalization. This process rescales the vectors to their original length and rotates all but the first of them in order to maintain their orthonormal arrangement. The rescaling portion of the Gram-Schmidt process gives local values for the D Lyapunov exponents:

λ_i(t) ≡ (-1/dt) ln(δ_i^after/δ_i^before); λ_i ≡ ⟨ λ_i(t) ⟩.

For the type-2 double pendulum of Figure 1 the time-averaged Lyapunov spectrum is:

{ λ } = { +0.143, +0.076, +0.034, 0.000, 0.000, -0.034, -0.076, -0.143 }.

The rms fluctuations in these rates are typically orders of magnitude larger than the rates themselves. The uncertainty in the exponents, as well as the differences between exponents using fourth-order or fifth-order Runge-Kutta integrators with dt = 0.001, are both of order ± 0.001. Our numerical work shows that the pairing of the exponents is maintained if one of the pendula is described by polar coordinates with the other pendulum Cartesian. The local exponents are different but still paired.
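The following Python sketch (our own minimal illustration, not the authors' code) implements exactly this offset-trajectory recipe. For simplicity it is applied to the four-particle ϕ^4 chain of the previous section, whose equations of motion are polynomial, though the same recipe applies verbatim to the pendulum: a reference trajectory plus D = 8 offset trajectories, re-orthonormalized by Gram-Schmidt (here a QR factorization) at every Runge-Kutta step. The initial condition below gives E = 8, one of the two cases quoted in the text; the run length is a placeholder and longer runs converge better.

import numpy as np

N, D, DT, DELTA = 4, 8, 0.001, 1.0e-6

def deriv(s):
    q, p = s[:N], s[N:]
    qdd = -q**3 + np.roll(q, 1) + np.roll(q, -1) - 2.0 * q  # periodic phi^4 chain
    return np.concatenate([p, qdd])

def rk4(s, dt):
    k1 = deriv(s); k2 = deriv(s + 0.5 * dt * k1)
    k3 = deriv(s + 0.5 * dt * k2); k4 = deriv(s + dt * k3)
    return s + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

ref = np.array([0.0, 0.0, 0.0, 0.0, 3.0, -1.0, 2.0, np.sqrt(2.0)])  # E = 8
V = np.eye(D)                       # orthonormal offset directions
lam_sum = np.zeros(D)
steps = 200_000                     # 200 time units

for _ in range(steps):
    new_ref = rk4(ref, DT)
    W = np.empty((D, D))
    for i in range(D):              # advance the D offset trajectories
        W[:, i] = (rk4(ref + DELTA * V[:, i], DT) - new_ref) / DELTA
    ref = new_ref
    Q, R = np.linalg.qr(W)          # the Gram-Schmidt step
    signs = np.sign(np.diag(R))
    lam_sum += np.log(np.abs(np.diag(R)))   # accumulated rescaling factors
    V = Q * signs                   # keep a consistent orientation

print("Lyapunov spectrum:", lam_sum / (steps * DT))   # pairs summing to zero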
§ CONVERGENCE AND ORDERING OF LOCAL LYAPUNOV EXPONENTS

The algorithm for generating the Lyapunov exponents <cit.> requires the ordering of D offset vectors in the vicinity of a reference trajectory. The first vector follows exactly the same motion equations, with the proviso that its length is constant. The second vector, also of constant length, is additionally required to remain orthogonal to the first, so that the combination of the two gives the rate of expansion or contraction of two-dimensional areas in the vicinity of the reference trajectory. In general the nth offset vector satisfies n constraints in all, keeping its own length constant while also maintaining its orthogonality to the preceding n-1 vectors. Although the local rates { λ(t) } associated with the vectors are necessarily ordered when time-averaged over a sufficiently long time to give the { λ }, this ordering is regularly violated locally, as Figures 3 and 4 show. Offhand one would expect that increasing the Lyapunov exponents or decreasing the accuracy of the simulation would lead to more rapid convergence of the ordering of the vectors. For this reason we consider a model which is as simple as possible, with a relatively large chaotic range, and is easy to simulate. This ϕ^4 model, named for its quartic tethering potential, has proved particularly useful in the simulation of heat flow. We consider the equilibrium version of the model here, an isolated system.

§ THE DYNAMICS OF ONE-DIMENSIONAL PERIODIC Φ^4 MODELS

The simplest Lyapunov algorithm for the ϕ^4 model is exactly that used with the springy pendula. We follow D+1 trajectories in the D-dimensional phase space, rescaling them at every timestep to obtain the complete spectrum of D = 8 instantaneous Lyapunov exponents. This phase-space integration of nine trajectories, followed by Gram-Schmidt orthonormalization, can be modified by using Lagrange multipliers to impose the eight constant-length constraints and the (1/2)(8·7) = 28 orthogonality constraints. A third approach, particularly simple to implement for the ϕ^4 model with its power-law equations of motion, is to linearize the motion equations so that the offset vectors, rather than being small, can be taken as unit vectors in “tangent space”. By using separate integrators for the “reference trajectory” and for the eight unit vectors the programming is at about the same level of difficulty as that of the straightforward phase-space approach. We implemented both approaches for the ϕ^4 problems and found good agreement between the Lyapunov spectra at a visual level, even for calculations using a billion timesteps. This is because the reference trajectories for the phase-space and tangent-space algorithms are identical.

§ USEFUL INTEGRATION TECHNIQUES

Fourth-order and fifth-order Runge-Kutta integrators are particularly useful algorithms for small systems. First, these integrators are easy to program. These integrators are also explicit, a real simplification whenever a variable timestep is desirable. Their errors are typically opposite in sign. For the simple harmonic oscillator the fourth-order energy decays while the fifth-order energy diverges.
By choosing a sufficiently small timestep, for which the two algorithms agree, one can be confident in the accuracy of the trajectories. Another useful technique is adaptive integration: comparing solutions with a single timestep dt to those from two successive half steps with (dt/2). The timestep is then adjusted up or down by a factor of two whenever necessary to keep the root-mean-squared error in a prescribed band, 10^-12 > error > 10^-14 for instance <cit.>. At the expense of about a factor of fifty in computer time, FORTRAN makes it possible to carry out quadruple-precision simulations with double-precision programming by changing the gnu compiler command:

gfortran -O -o xcode code.f ⟶ gfortran -O -o xcode -freal-8-real-16 code.f

Here the FORTRAN program is code.f and the executable is xcode.

§ THE 2017 IAN SNOOK PRIZE PROBLEM

The springy pendula and ϕ^4 problems detailed here show that “pairing” is typically present after sufficient time, with that time sensitive to the largest Lyapunov exponent as well as to the initial conditions. There are several features of these introductory problems that merit investigation:

[1] To what extent is there a unique chaotic sea? Can the symmetry of the initial conditions limit the portion of phase space visited when the dynamics is chaotic?
[2] Within the ϕ^4 model's chaotic sea do the time-averaged kinetic temperatures { T_i = ⟨ p_i^2 ⟩ } agree for all the particles? (If not, a thermal cycle applying heat and extracting work from the chain could be developed so as to violate the Second Law <cit.>.)
[3] Is the pairing time simply related to the Lyapunov exponents and the chain length?
[4] Is the accuracy of the pairing simply related to the accuracy of the integrator?

The next and last question, which motivated this year's Prize Problem, seems just a bit more difficult:

[5] Can relatively simple autonomous Hamiltonian systems be devised for which long-time local pairing is absent? Our exploratory work has suggested that dynamical disturbances induced by collisions, with those collisions separated by free flight, could lead to repeated violations of pairing <cit.>. On the other hand Dettmann and Morriss have published a proof of pairing for isokinetic systems <cit.>. A simple gas of several diatomic or triatomic molecules is likely to be enough to settle that question.

The 2017 Ian Snook Prize will be awarded to the most interesting paper discussing and elucidating these questions. Entries should be submitted to Computational Methods in Science and Technology, cmst.eu, prior to 1 January 2018. The Prize Award of 500 United States dollars sponsored by ourselves, and the Additional Ian Snook Prize Award, also 500, will be awarded to the author(s) of the paper best addressing this Prize Problem.

§ ACKNOWLEDGMENTS

We are grateful to the Poznan Supercomputing and Networking Center for their support of these prizes honoring our late Australian colleague Ian Snook (1945-2013). We also appreciate useful comments, suggestions, and very helpful numerical checks of our work furnished by Ken Aoki, Carl Dettmann, Clint Sprott, Karl Travis, and Krzysztof Wojciechowski. We particularly recommend Aoki's reference 10 for a comprehensive study of the dynamics of one-dimensional equilibrium ϕ^4 systems.

[1] W. G. Hoover, C. G. Hoover, and H. A. Posch, “Lyapunov Instability of Pendula, Chains and Strings”, Physical Review A 41, 2999-3004 (1990).
[2] H. A. Posch, “Symmetry Properties of Orthogonal and Covariant Lyapunov Vectors and Their Exponents”, Journal of Physics A 46, 254006 (2013).
[3] I. Shimada and T. Nagashima, “A Numerical Approach to Ergodic Problems of Dissipative Dynamical Systems”, Progress of Theoretical Physics 61, 1605-1616 (1979).
[4] G. Benettin, L. Galgani, A. Giorgilli, and J.-M. Strelcyn, “Lyapunov Characteristic Exponents for Smooth Dynamical Systems and for Hamiltonian Systems; a Method for Computing All of Them, Parts I and II: Theory and Numerical Application”, Meccanica 15, 9-20 and 21-30 (1980).
[5] J.-P. Eckmann and D. Ruelle, “Ergodic Theory of Chaos and Strange Attractors”, Reviews of Modern Physics 57, 617-656 (1985).
[6] B. A. Bailey, “Local Lyapunov Exponents; Predictability Depends on Where You Are”, in Nonlinear Dynamics and Economics, W. A. Barnett, A. P. Kirman, and M. Salmon, editors (Cambridge University Press, 1996), pages 345-359.
[7] Hong-Liu Yang and Günter Radons, “Comparison of Covariant and Orthogonal Lyapunov Vectors”, Physical Review E 82, 046204 (2010) = arXiv:1008.1941.
[8] H. A. Posch and R. Hirschl, “Simulation of Billiards and of Hard-Body Fluids”, in Hard Ball Systems and the Lorentz Gas, Encyclopedia of the Mathematical Sciences 101, edited by D. Szász (Springer Verlag, Berlin, 2000), pages 269-310.
[9] K. Aoki and D. Kusnezov, “Nonequilibrium Statistical Mechanics of Classical Lattice ϕ^4 Field Theory”, Annals of Physics 295, 50-80 (2002).
[10] K. Aoki, “Stable and Unstable Periodic Orbits in the One-Dimensional Lattice ϕ^4 Theory”, Physical Review E 94, 042209 (2016).
[11] W. G. Hoover and K. Aoki, “Order and Chaos in the One-Dimensional ϕ^4 Model: N-Dependence and the Second Law of Thermodynamics”, Communications in Nonlinear Science and Numerical Simulation (in press, 2017) = arXiv:1605.07721.
[12] W. G. Hoover, J. C. Sprott, and C. G. Hoover, “Adaptive Runge-Kutta Integration for Stiff Systems: Comparing Nosé and Nosé-Hoover Dynamics for the Harmonic Oscillator”, American Journal of Physics 84, 786-794 (2016).
[13] Wm. G. Hoover and C. G. Hoover, “Time-Symmetry Breaking in Hamiltonian Mechanics”, Computational Methods in Science and Technology 19, 77-87 (2013) = arXiv:1302.2533.
[14] Wm. G. Hoover and C. G. Hoover, “What is Liquid? Lyapunov Instability Reveals Symmetry-Breaking Irreversibilities Hidden Within Hamilton's Many-Body Equations of Motion”, Condensed Matter Physics 18, 1-13 (2015) = arXiv:1405.2485.
[15] C. P. Dettmann and G. P. Morriss, “Proof of Lyapunov Exponent Pairing for Systems at Constant Kinetic Energy”, Physical Review E 53, R5545-R5548 (1996).
http://arxiv.org/abs/1703.00470v3
{ "authors": [ "William Graham Hoover", "Carol Griswold Hoover" ], "categories": [ "cond-mat.stat-mech", "nlin.CD" ], "primary_category": "cond-mat.stat-mech", "published": "20170227005748", "title": "Instantaneous Pairing of Lyapunov Exponents in Chaotic Hamiltonian Dynamics and the 2017 Ian Snook Prize" }
The main contribution of this paper is an invariant extended Kalman filter (EKF) for visual inertial navigation systems (VINS). It is demonstrated that the conventional EKF based VINS is not invariant under the stochastic unobservable transformation associated with translations and a rotation about the gravitational direction. This can lead to inconsistent state estimates as the estimator does not obey a fundamental property of the physical system. To address this issue, we use a novel uncertainty representation to derive a Right Invariant error extended Kalman filter (RIEKF-VINS) that preserves this invariance property. RIEKF-VINS is then adapted to the multi-state constraint Kalman filter framework to obtain a consistent state estimator. Both Monte Carlo simulations and real-world experiments are used to validate the proposed method.

§ INTRODUCTION

Visual-Inertial Navigation Systems (VINS) have been of significant interest to the robotics community in the past decade, as the fusion of information from a camera and an inertial measurement unit (IMU) provides an effective and affordable solution for navigation in GPS-denied environments. VINS algorithms can be classified into two categories, namely, filter based and optimization based. Although there has been recent progress in the development of optimization based algorithms <cit.><cit.>, extended Kalman filter (EKF) based solutions are still extensively used (e.g., <cit.><cit.><cit.><cit.>), mainly as a result of their efficiency and simplicity. It is well known that conventional EKF based Simultaneous Localization and Mapping algorithms (EKF-SLAM) <cit.><cit.> suffer from inconsistency. Similarly, it has been shown that the conventional EKF VINS algorithm (ConEKF-VINS) using point features in the environment is also inconsistent, resulting in the underestimation of the state uncertainty. This is closely related to the partial observability of these systems, because conventional EKF algorithms do not necessarily guarantee this fundamental property <cit.><cit.> due to the linearized errors, which is the main reason for the overconfident estimates. This insight has been a catalyst for a number of observability-constrained algorithms (e.g., <cit.><cit.><cit.>) that explicitly enforce the unobservability of the system along specific directions via modifications to the Jacobian matrices. Although the observability-constrained algorithms improve the consistency and accuracy of the estimator to some extent <cit.>, extra computations in the update stage are required. Bloesch et al. in <cit.> propose a robot-centric formulation to alleviate the inconsistency. Under the robot-centric formulation, the filter estimates the locations of landmarks in the local frame instead of the global frame. As a result, the system becomes fully observable so that this issue is inherently avoided.
However, this formulation can result in larger uncertainty and extra computations in the propagation stage, as discussed in <cit.><cit.>. Recently, manifold and Lie group representations for three-dimensional orientation/pose have been utilized for solving SLAM and VINS. Both filter based algorithms (e.g., <cit.><cit.><cit.>) and optimization based algorithms (e.g., <cit.><cit.>) can benefit from the manifold representation, and better accuracy can be achieved. The use of manifolds not only allows much easier algebraic computations (e.g., the computation of the Jacobian matrices) and avoids the representation singularity <cit.>, but has also inspired a number of researchers to rethink the difference between the state representation and the state uncertainty representation, as highlighted in <cit.><cit.>. In fact, this insight is also intrinsically understood in the well-known preintegration visual-inertial algorithm <cit.>, although that algorithm does not use the manifold representation. From the viewpoint of control theory, Aghannan and Rouchon in <cit.> propose a framework for designing symmetry-preserving observers on manifolds by using a subtle, geometrically adapted correction term. The fusion of symmetry-preserving theory and the EKF has resulted in the invariant EKF (I-EKF), which possesses a theoretical local convergence property <cit.> and preserves the same invariance property as the original system. I-EKF based observers have been used in inertial navigation <cit.> and in 2D EKF-SLAM <cit.><cit.>. Our recent work <cit.> also proves the significant improvement in consistency through a 3D I-EKF SLAM algorithm.

In this paper, we argue that the absence of this invariance affects the consistency of ConEKF-VINS estimates. There is a correspondence between this and the observability analysis reported in the previous literature (e.g., <cit.><cit.>). The invariance here refers to “the output of the filter is invariant under any stochastic unobservable transformation". For the VINS system, the unobservable transformations are the rotation about the gravitational direction and the translations. Adopting the I-EKF framework, we propose the Right Invariant error EKF VINS algorithm (RIEKF-VINS) and prove that it is invariant. We then integrate RIEKF-VINS into the well-known visual-inertial odometry framework, i.e., the multi-state constraint Kalman filter (MSCKF), and remedy the inconsistency of the MSCKF algorithm. We show using extensive Monte Carlo simulations that the proposed method outperforms the original MSCKF, especially in terms of consistency. A preliminary real-world experiment also demonstrates the improved accuracy of the proposed method.

This paper is organized as follows. Section <ref> recalls the VINS system and introduces ConEKF-VINS under the general continuous-discrete EKF. Section <ref> performs the consistency analysis of the general EKF algorithm based on the invariance theory and proves the absence of the invariance of ConEKF-VINS. Section <ref> proposes RIEKF-VINS with the extension to the MSCKF framework. Section <ref> reports both the simulation and experimental results. Finally, Section <ref> includes the main conclusions of this work and future work. The Appendix provides some necessary formulas used in the proposed algorithms and the proofs of the theorems.

Notations: Throughout this paper bold lower-case and upper-case letters are reserved for column vectors and matrices/tuples, respectively.
To simplify the presentation, the vector transpose operators are omitted for the case 𝐀=[ 𝐚^⊺, 𝐛^⊺, ⋯, 𝐜^⊺ ]^⊺. The notation S(·) denotes the skew symmetric operator that transforms a 3-dimensional vector into a skew symmetric matrix: S(𝐱)𝐲=𝐱×𝐲 for 𝐱, 𝐲∈ℝ^3, where the notation × refers to the cross product.

§ BACKGROUND KNOWLEDGE

In this section, we first provide an overview of the VINS system and then describe the ConEKF-VINS algorithm based on the framework of the general continuous-discrete EKF.

§.§ The VINS system

The VINS system is used to estimate the state denoted as the tuple below

𝐗= ( 𝐑, 𝐯, 𝐩, 𝐛_g, 𝐛_a, 𝐟)

where 𝐑∈𝕊𝕆(3) and 𝐩∈ℝ^3 are the orientation and position of the IMU sensor, respectively, 𝐯∈ℝ^3 is the IMU velocity expressed in the global frame, 𝐛_g∈ℝ^3 is the gyroscope bias, 𝐛_a∈ℝ^3 is the accelerometer bias and 𝐟∈ℝ^3 is the coordinates of the landmark in the global frame. Note that only one landmark is included in the system state (<ref>) for a more concise notation.

§.§.§ The continuous-time motion model

The IMU measurements are usually used for state evolution due to their high frequency. The continuous-time motion model of the VINS system is given by the following ordinary differential equations (ODEs):

𝐗̇ = f( 𝐗, 𝐮, 𝐧) = ( 𝐑S(𝐰 - 𝐛_g-𝐧_g ), 𝐑(𝐚-𝐛_a - 𝐧_a )+𝐠, 𝐯, 𝐧_bg, 𝐧_ba, 0)

where 𝐰∈ℝ^3 is the gyroscope reading, 𝐚∈ℝ^3 is the accelerometer reading, 𝐠∈ℝ^3 is the global gravity vector (constant), and 𝐧= [ 𝐧_g, 𝐧_bg, 𝐧_a, 𝐧_ba ] is the system noise modeled as a white Gaussian noise with the covariance matrix 𝐐: 𝔼(𝐧(t)𝐧(τ)^⊺) = 𝐐δ(t-τ). Note that 𝐮= ( 𝐰, 𝐚, 𝐠 ) is the time-varying system input and the IMU noise covariance 𝐐 is a constant matrix known a priori.

§.§.§ The discrete-time measurement model

The visual measurement as the system output is discrete due to the low frequency of the camera. After data association and rectification, the visual measurement of the landmark at time-step k∈ℕ is available and given by

𝐳_k = h (𝐗_k, 𝐧_z) = 𝔥 (𝐑_k^⊺ (𝐟 - 𝐩_k ) ) + 𝐧_z

where 𝐧_z ∼𝒩(0, 𝐕_k) is the measurement noise. Note that 𝔥 (·) := π∘𝐓_CI, where π denotes the projection function and 𝐓_CI is the transformation from the IMU frame to the camera frame.

§.§ The general continuous-discrete EKF

Being a natural extension of the standard EKF, the general EKF allows a more flexible uncertainty representation of the following form: 𝐗=𝐗̂⊕𝐞 and 𝐞∼𝒩(0, 𝐏 ), where (𝐗̂, 𝐏) can be regarded as the mean estimate and the covariance matrix, 𝐞 is a white Gaussian noise vector and the notation ⊕ is called the retraction in differential geometry <cit.>, coupled with the inverse mapping ⊖. Note that the user-defined operators ⊕ and ⊖ need to be designed such that 𝐗=𝐗⊕0 and 𝐞=𝐗⊖𝐗̂. Here we also highlight that the choice of the retraction ⊕ has a significant contribution to the performance of the filter, as discussed in our previous work <cit.>. Once the retraction ⊕ is determined, the process of the general continuous-discrete EKF is similar to the conventional continuous-discrete EKF, as summarized in Alg. <ref>. For propagation, we first calculate the time-varying Jacobian matrices 𝐅 and 𝐆 from the linearized error-state propagation model:

𝐞̇ = 𝐅𝐞 + 𝐆𝐧 + o(𝐞, 𝐧).

We then compute the state transition matrix Φ_n:=Φ(t_n+1, t_n), which is the solution at time t_n+1 of the following ODE: d/dt Φ(t, t_n) = 𝐅(t)Φ(t, t_n) with the condition Φ(t_n, t_n) = 𝐈 at time t_n. The matrix 𝐐_d,n can be computed as 𝐐_d,n= ∫_t_n^t_n+1 Φ(t_n+1, τ) 𝐆(τ) 𝐐𝐆^⊺(τ) Φ^⊺(t_n+1, τ) dτ.

§.§ ConEKF-VINS

ConEKF-VINS <cit.> can be regarded as an instance of the general EKF algorithm (Alg.
<ref>). In ConEKF-VINS, the uncertainty representation is defined as

𝐗=𝐗̂⊕𝐞 = ( 𝐑̂exp( 𝐞_θ ), 𝐯̂+𝐞_v, 𝐩̂+𝐞_p, 𝐛̂_g+𝐞_bg, 𝐛̂_a+𝐞_ba, 𝐟̂+ 𝐞_f )

where 𝐞 =[ 𝐞_θ, 𝐞_v, 𝐞_p, 𝐞_bg, 𝐞_ba, 𝐞_f ]∼𝒩(0,𝐏) and exp(·) transforms a 3-dimensional vector into a rotation matrix, as given in (<ref>). The matrices Φ_n, 𝐐_d,n and 𝐇_n+1 are omitted here due to space reasons; they can be straightforwardly calculated in the sense of the uncertainty representation (<ref>). Please refer to <cit.> for more details.

§ CONSISTENCY ANALYSIS

In this section, we first introduce the concepts of unobservable transformation, invariance and observability. We then perform the consistency analysis for the general EKF filter and prove that ConEKF-VINS does not have the expected invariance property.

§.§ Unobservability, unobservable transformation and invariance of the VINS system

The concept of observability of nonlinear systems can be traced to the early literature <cit.>. As discussed in <cit.><cit.><cit.>, the state (<ref>) of the VINS system is not locally observable. To make this more intuitive, we introduce the unobservability of the VINS system based on the unobservable transformation rather than the observability rank criterion reported in <cit.>.

The transformation 𝒯 is called an unobservable transformation for the VINS system, and the output of the VINS system (<ref>)–(<ref>) is said to be invariant under 𝒯, when the following condition is satisfied: for arbitrary t_i such that 𝐘(t_i)=𝒯 (𝐗(t_i)), we have

h(𝐗(t_n), 0 ) = h(𝐘(t_n), 0) ∀ n ≥ i

where the notations 𝐗(·) and 𝐘(·) denote the two evolved trajectories that follow the same ODEs (<ref>) with the initial conditions 𝐗(t_i) and 𝐘(t_i) at time t_i, respectively. On the other hand, the system is called unobservable if there exists an unobservable transformation. One can see that an unobservable system is always accompanied by an unobservable transformation, and the invariance to the unobservable transformation is a more detailed description of the unobservability.

For the system state (<ref>), a stochastic transformation of translation and rotation (about the gravitational direction) 𝒯_𝐒 is a mapping:

𝒯_𝐒 (𝐗)=( exp(𝐠 ( ϵ_1+θ_1 ))𝐑, exp(𝐠( ϵ_1+θ_1 ))𝐯, exp(𝐠( ϵ_1+θ_1 )) 𝐩+ θ_2+ϵ_2, 𝐛_g, 𝐛_a, exp(𝐠( ϵ_1+θ_1 )) 𝐟+ θ_2+ϵ_2 )

where 𝐒= (θ, ϵ ), θ_1 ∈ℝ, θ_2 ∈ℝ^3, θ= [ θ_1, θ_2 ]∈ℝ^4, ϵ_1 ∈ℝ, ϵ_2∈ℝ^3 and ϵ= [ ϵ_1, ϵ_2 ]∈ℝ^4 is a white Gaussian noise with the covariance Σ. 𝒯_𝐒 degenerates into the deterministic transformation 𝒯_𝐃 (𝐃= (θ, 0 )) under the condition Σ=0, and into a stochastic identity transformation under the condition θ = 0.

The stochastic transformation 𝒯_𝐒 is an unobservable transformation for the VINS system (<ref>)–(<ref>). This can be straightforwardly verified. Theorem <ref> corresponds to the conclusion in <cit.><cit.> that the IMU yaw angle and the IMU position are (locally) unobservable.

§.§ The invariance of the general EKF based filter

The general EKF based filter is not a linear system for the estimated state 𝐗̂. However, the invariance of the filter can be described as follows: The output of a general EKF framework based filter (Alg.
<ref>) for the VINS system is invariant under any stochastic unobservable transformation 𝒯_𝐒 if the following condition is satisfied: for any two estimates (𝐗̂_i, 𝐏_i) and ( 𝐘̂_i, 𝐏y_i ) at time-step i, where 𝐘̂_i= 𝒯_𝐒 (𝐗̂_i ) and 𝐏y_i= 𝐌_i 𝐏_i𝐌^⊺_i+ 𝐍_i Σ𝐍^⊺_i, in which

𝐌_i := .∂𝒯_𝐃 (𝐗̂_i ⊕𝐞) ⊖𝒯_𝐃 (𝐗̂_i )/∂𝐞|_𝐞=0 and 𝐍_i := .∂𝒯_𝐒 (𝐗̂_i ) ⊖𝒯_𝐃 (𝐗̂_i )/∂ϵ|_ϵ=0,

we have h(𝐗̂_n,0)=h(𝐘̂_n,0) for all n ≥ i. The notations 𝐗̂_n and 𝐘̂_n above represent the mean estimates of this filter at time-step n obtained by using the same input 𝐮 from time t_i to t_n, from the conditions (𝐗̂_i, 𝐏_i) and (𝐘̂_i, 𝐏y_i) at time-step i, respectively.

As shown in Def. 1 and Def. 2, the invariance to any stochastic transformation 𝒯_𝐒 can be divided into two properties: 1) the invariance to any deterministic transformation 𝒯_𝐃, and 2) the invariance to any stochastic identity transformation. The following two theorems analytically provide the methods to judge whether a general EKF based filter has the two invariance properties above.

The output of the general EKF based filter for the VINS system is invariant under any deterministic unobservable transformation only if, for each deterministic unobservable transformation 𝒯_𝐃, there exists an invertible matrix 𝐖_𝐃 (unrelated to 𝐗) such that 𝒯_𝐃(𝐗⊕𝐞 ) =𝒯_𝐃( 𝐗 ) ⊕𝐖_𝐃𝐞. See Appendix <ref>.

The output of the general EKF based filter for the VINS system is invariant under any stochastic identity transformation only if

𝐇_n+i+1Φ_n+iΦ_n+i-1⋯Φ_i𝐍_i = 0 ∀ n and i ≥ 0.

See Appendix <ref>.

By using the theorems above, we can easily determine the invariance properties of ConEKF-VINS. ConEKF-VINS satisfies (<ref>) but does not satisfy (<ref>). Hence, ConEKF-VINS has the invariance to any deterministic unobservable transformation 𝒯_𝐃 but not the invariance to stochastic identity transformations. In all, the output of ConEKF-VINS is not invariant under stochastic unobservable transformations 𝒯_𝐒. For the ConEKF-VINS algorithm, the invariance to the deterministic unobservable transformation 𝒯_𝐃 can be verified by using Theorem <ref>, and the absence of invariance to stochastic identity transformations by using Theorem <ref>. More details are omitted here. The previous literature <cit.><cit.><cit.><cit.> directly performs the observability analysis of the filter on the linearized error-state model. However, Theorem <ref> and Theorem <ref> clarify the relationship between the filter and the linearized error-state model.

§.§ Consistency and invariance

The unobservability in terms of the stochastic unobservable transformation 𝒯_𝐒 is a fundamental property of the VINS system. Therefore a consistent filter (as a system for the estimated state 𝐗̂) is expected to mimic this property, i.e., the output of a consistent estimator is invariant under any stochastic unobservable transformation. The invariance to the deterministic transformation 𝒯_𝐃 implies that the estimates from the filter do not depend on the selection of the (initial) mean estimate of the unobservable variables, i.e., the IMU yaw angle and the IMU position, essentially. Similarly, the invariance to stochastic identity transformations implies that the uncertainty w.r.t. these unobservable variables does not affect the subsequent mean estimates. We can conclude that the consistency of a filter is tightly coupled with the invariance to stochastic unobservable transformations. A filter that does not have the invariance property will gain unexpected information and produce inconsistent (overconfident) estimates. Note that ConEKF-VINS is a typical example due to the absence of the invariance property.
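A minimal numerical check of Theorem <ref> (a sketch of our own, not part of any filter) makes the preceding discussion concrete: applying the deterministic transformation 𝒯_𝐃 — a rotation by θ_1 about the gravitational direction plus a translation θ_2 — leaves the body-frame landmark coordinates 𝐑^⊺(𝐟-𝐩), and hence the camera output h(𝐗), unchanged. A unit gravity direction along z is assumed here; in the paper's notation the rotation angle is scaled by 𝐠.

import numpy as np

def so3_exp(w):
    # Rodrigues formula for exp: R^3 -> SO(3)
    th = np.linalg.norm(w)
    if th < 1e-12:
        return np.eye(3)
    k = w / th
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(th) * K + (1.0 - np.cos(th)) * (K @ K)

rng = np.random.default_rng(42)
R = so3_exp(rng.normal(size=3))     # random IMU orientation
p = rng.normal(size=3)              # IMU position
f = rng.normal(size=3)              # landmark position

g_dir = np.array([0.0, 0.0, 1.0])   # unit gravity direction (assumed)
theta1, theta2 = 0.7, np.array([1.0, -2.0, 3.0])
dR = so3_exp(g_dir * theta1)        # yaw rotation about gravity

R2, p2, f2 = dR @ R, dR @ p + theta2, dR @ f + theta2
print(np.allclose(R.T @ (f - p), R2.T @ (f2 - p2)))  # True: the output is invariant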
§ THE PROPOSED METHOD: RIEKF-VINS

In this section, we propose RIEKF-VINS by using a new uncertainty representation and prove that it has the expected invariance properties. We then apply RIEKF-VINS to the MSCKF framework.

§.§ The uncertainty representation and Jacobians

RIEKF-VINS also follows the framework of Alg. <ref>. The uncertainty representation of RIEKF-VINS is defined as

𝐗= 𝐗̂⊕𝐞= ( exp( 𝐞_θ ) 𝐑̂, exp(𝐞_θ)𝐯̂+J_r(-𝐞_θ)𝐞_v, exp(𝐞_θ) 𝐩̂+J_r(-𝐞_θ)𝐞_p, 𝐛̂_g+𝐞_bg, 𝐛̂_a+𝐞_ba, exp(𝐞_θ) 𝐟̂+J_r(-𝐞_θ) 𝐞_f )

where 𝐞 =[ 𝐞_θ, 𝐞_v, 𝐞_p, 𝐞_bg, 𝐞_ba, 𝐞_f ]∼𝒩(0,𝐏) and the right Jacobian operator J_r(·) is given in (<ref>). Note that this uncertainty representation intrinsically employs the Lie group structure, so that the recent result (Theorem 2 of <cit.>) can be used to easily compute the Jacobians 𝐅 and 𝐆 of the propagation:

𝐅 = [ 0_3,3  0_3,3  0_3,3  -𝐑̂  0_3,3  0_3,3 ;
      S(𝐠)  0_3,3  0_3,3  -S(𝐯̂)𝐑̂  -𝐑̂  0_3,3 ;
      0_3,3  𝐈_3  0_3,3  -S(𝐩̂)𝐑̂  0_3,3  0_3,3 ;
      0_3,3  0_3,3  0_3,3  0_3,3  0_3,3  0_3,3 ;
      0_3,3  0_3,3  0_3,3  0_3,3  0_3,3  0_3,3 ;
      0_3,3  0_3,3  0_3,3  0_3,3  0_3,3  0_3,3 ]

and

𝐆 = [ 𝐑̂  0_3,3  0_3,3  0_3,3 ;
      S(𝐯̂)𝐑̂  0_3,3  𝐑̂  0_3,3 ;
      S(𝐩̂)𝐑̂  0_3,3  0_3,3  0_3,3 ;
      0_3,3  𝐈_3  0_3,3  0_3,3 ;
      0_3,3  0_3,3  0_3,3  𝐈_3 ;
      S(𝐟̂)𝐑̂  0_3,3  0_3,3  0_3,3 ].

The measurement Jacobian is

𝐇_n+1 = ∂𝔥 ( 𝐟̂_n+1,I) [ 0_3,6  -𝐑̂_n+1|n^⊺  0_3,6  𝐑̂_n+1|n^⊺ ]

where 𝐟̂_n+1,I = 𝐑̂_n+1|n^⊺(𝐟̂_n+1|n - 𝐩̂_n+1|n )∈ℝ^3.

§.§ Invariance proof

The output of RIEKF-VINS is invariant under any stochastic unobservable transformation 𝒯_𝐒. For the retraction defined in (<ref>), we have 𝒯_𝐃(𝐗⊕𝐞 ) =𝒯_𝐃( 𝐗 ) ⊕𝐖_𝐃𝐞 for all 𝐗 and 𝐞, where

𝐖_𝐃 = [ δ𝐑  0_3,3  0_3,3  0_3,3  0_3,3  0_3,3 ;
        0_3,3  δ𝐑  0_3,3  0_3,3  0_3,3  0_3,3 ;
        S(θ_2)δ𝐑  0_3,3  δ𝐑  0_3,3  0_3,3  0_3,3 ;
        0_3,3  0_3,3  0_3,3  𝐈_3  0_3,3  0_3,3 ;
        0_3,3  0_3,3  0_3,3  0_3,3  𝐈_3  0_3,3 ;
        S(θ_2)δ𝐑  0_3,3  0_3,3  0_3,3  0_3,3  δ𝐑 ]

and δ𝐑 := exp(𝐠θ_1). According to Theorem <ref>, the output of RIEKF-VINS is therefore invariant under any deterministic transformation 𝒯_𝐃. On the other hand, for all i we have

Φ_i = [ 𝐈_3  *  0_3,3  *  *  0_3,3 ;
        Δt_i S(𝐠)  *  0_3,3  *  *  0_3,3 ;
        (Δt_i^2/2) S(𝐠)  *  𝐈_3  *  *  0_3,3 ;
        0_3,3  *  0_3,3  *  *  0_3,3 ;
        0_3,3  *  0_3,3  *  *  0_3,3 ;
        0_3,3  *  0_3,3  *  *  𝐈_3 ]

and

𝐍_i = .∂𝒯_𝐒 (𝐗̂_i ) ⊖𝒯_𝐃 (𝐗̂_i )/∂ϵ|_ϵ=0 = [ 𝐠  0_3,3 ;
      0_3,1  0_3,3 ;
      0_3,1  𝐈_3 ;
      0_3,1  0_3,3 ;
      0_3,1  0_3,3 ;
      0_3,1  𝐈_3 ]

where Δt_i := t_i+1-t_i and the elements denoted by * are omitted here because they make no contribution to the computation of Φ_i𝐍_i. Note that Φ_i𝐍_i= 𝐍_i+1 and 𝐇_i+1𝐍_i+1 = 0 for all i, so we can easily verify that RIEKF-VINS satisfies (<ref>). According to Theorem <ref>, the output of RIEKF-VINS is invariant under any stochastic identity transformation.

The observability-constrained filters proposed in <cit.><cit.><cit.><cit.> artificially modify the transition matrix Φ_n and the measurement Jacobian 𝐇_n+1 to meet the condition (<ref>) so that they have the invariance to stochastic identity transformations. As a comparison, our proposed RIEKF-VINS employs the uncertainty representation (<ref>) such that the “natural" matrices Φ_n and 𝐇_n+1 elegantly meet the condition (<ref>).
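For concreteness, the following Python sketch implements the RIEKF-VINS retraction (<ref>); it is our own illustration, and the closed form used for J_r is the standard right Jacobian of SO(3), assumed here to coincide with the formula referenced in the appendix.

import numpy as np

def skew(v):
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def so3_exp(w):
    th = np.linalg.norm(w)
    if th < 1e-12:
        return np.eye(3)
    K = skew(w / th)
    return np.eye(3) + np.sin(th) * K + (1.0 - np.cos(th)) * (K @ K)

def jac_r(phi):
    # right Jacobian of SO(3), standard closed form
    th = np.linalg.norm(phi)
    S = skew(phi)
    if th < 1e-9:
        return np.eye(3) - 0.5 * S
    return (np.eye(3) - (1.0 - np.cos(th)) / th**2 * S
            + (th - np.sin(th)) / th**3 * (S @ S))

def riekf_retract(Xhat, e):
    # X = Xhat (+) e for Xhat = (R, v, p, bg, ba, f) and e in R^18
    R, v, p, bg, ba, f = Xhat
    e_th, e_v, e_p = e[0:3], e[3:6], e[6:9]
    e_bg, e_ba, e_f = e[9:12], e[12:15], e[15:18]
    E, J = so3_exp(e_th), jac_r(-e_th)
    return (E @ R, E @ v + J @ e_v, E @ p + J @ e_p,
            bg + e_bg, ba + e_ba, E @ f + J @ e_f)

# sanity check: Xhat (+) 0 = Xhat
Xhat = (so3_exp(np.array([0.1, -0.2, 0.3])), np.ones(3), np.zeros(3),
        np.zeros(3), np.zeros(3), np.array([1.0, 2.0, 3.0]))
X = riekf_retract(Xhat, np.zeros(18))
print(all(np.allclose(a, b) for a, b in zip(X, Xhat)))  # True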
On the other hand, the well-known MSCKF <cit.>, which has a complexity linear in the number of landmarks, inherits the inconsistency of ConEKF-VINS. One can see that, in the MSCKF algorithm, the uncertainty w.r.t. the global yaw unexpectedly affects the mean estimates. For these reasons, we integrate RIEKF-VINS into the MSCKF framework so that the modified algorithm has linear complexity and better consistency. For convenience, we call the modified filter RI-MSCKF. In this subsection, we do not state all the details of RI-MSCKF, but only point out the modifications.

§.§.§ System state and retraction

The system state 𝒳_n at time-step n in RI-MSCKF is

𝒳_n = ( 𝐗̅_n, 𝐂_t_1, ⋯, 𝐂_t_j, ⋯, 𝐂_t_k, ⋯, 𝐂_t_m )

where 𝐗̅_n = ( 𝐑_n, 𝐯_n, 𝐩_n, 𝐛_g,n, 𝐛_a,n ) denotes the IMU state at time-step n and 𝐂_t_i = ( 𝐑^c_t_i, 𝐩^c_t_i ) ∈ 𝕊𝔼(3) denotes the camera pose at time t_i (t_i < t_n). According to the IMU state uncertainty in RIEKF-VINS, the uncertainty representation of 𝒳_n is defined as below:

𝒳_n = 𝒳̂_n ⊕ 𝐞 = ( 𝐗̂̅̂_n ⊕_imu 𝐞_I, 𝐂̂_t_1 ⊕_pose 𝐞_c^1, ⋯, 𝐂̂_t_m ⊕_pose 𝐞_c^m )

where 𝐞 = [ 𝐞_I, 𝐞_c ] ∈ ℝ^15+6m ∼ 𝒩(0,𝐏_n), 𝐞_I ∈ ℝ^15 and 𝐞_c = [ 𝐞_c^1, ⋯, 𝐞_c^m ] ∈ ℝ^6m. Note that ⊕_imu and ⊕_pose are given in Appendix <ref>.

§.§.§ Propagation

The mean propagation 𝒳_n+1|n of RI-MSCKF also follows that of MSCKF, while the covariance 𝐏_n+1|n is calculated by

𝐏_n+1|n = Φ̅_n 𝐏_n Φ̅_n^⊺ + 𝐐̅_d,n

where Φ̅_n = Diag(Φ^I_n, 𝐈_6m) and 𝐐̅_d,n = Diag(𝐐^I_d,n, 0_6m,6m). Note that Φ^I_n and 𝐐^I_d,n are the matrices formed from the first 15 rows and 15 columns of Φ_n and 𝐐_d,n, respectively, where Φ_n and 𝐐_d,n are the matrices of RIEKF-VINS.

§.§.§ State augmentation

Once a new image is captured at time-step n+1, we augment the system state and the covariance matrix as follows:

𝒳̂_n+1|n ← ( 𝒳̂_n+1|n, 𝐂̂_t_n+1 )
𝐏_n+1|n ← [ 𝐈_15+6m ; 𝐉 ] 𝐏_n+1|n [ 𝐈_15+6m ; 𝐉 ]^⊺

where 𝐂̂_t_n+1 = ( 𝐑̂_n+1|n Δ𝐑, 𝐑̂_n+1|n Δ𝐩 + 𝐩̂_n+1|n ) ∈ 𝕊𝔼(3) is the mean estimate of the camera pose at time t_n+1, and (Δ𝐑, Δ𝐩) ∈ 𝕊𝔼(3) denotes the transformation from the camera to the IMU. Due to the new uncertainty representation (<ref>), the Jacobian 𝐉 needs to be changed as below:

𝐉 = [ 𝐈_3 0_3,3 0_3,3 0_3,6 0_3,6m ; 0_3,3 0_3,3 𝐈_3 0_3,6 0_3,6m ].

§.§.§ Update

Note that the landmark uncertainty is coupled with the IMU pose in RIEKF-VINS. In RI-MSCKF, we instead couple the landmark uncertainty with the camera pose 𝐂_t_j that first captures the landmark within the current system state 𝒳_n:

( 𝐂̂_t_j, 𝐟̂ ) ⊕ 𝐞̅_c^j = ( 𝐂̂_t_j ⊕_pose 𝐞_c^j, exp(𝐞^j_θ) 𝐟̂ + J_r(-𝐞^j_θ) 𝐞_f )

where 𝐞̅_c^j = [ 𝐞_c^j, 𝐞_f ] = [ 𝐞^j_θ, 𝐞^j_p, 𝐞_f ] ∈ ℝ^9. From the uncertainty representations (<ref>) and (<ref>), we can compute the linearized measurement model for the visual measurement at time-step k (t_1 ≤ t_k ≤ t_n). With a slight abuse of notation, the linearized measurement model can be represented as below:

𝐳̃_k := π( 𝐑̂^c⊺_t_k (𝐟̂ - 𝐩̂^c_t_k) ) - 𝐳_k ≈ ∂π_k 𝐇^*_xk 𝐞_n+1|n + ∂π_k 𝐇^*_fk 𝐞_f + 𝐕_k = 𝐇_xk 𝐞_n+1|n + 𝐇_fk 𝐞_f + 𝐕_k

where ∂π_k := ∂π( 𝐑̂_t_k^c⊺ (𝐟̂ - 𝐩̂^c_t_k) ) and 𝐳_k is the measurement captured at time t_k. Here the matrices 𝐇^*_fk and 𝐇^*_xk are given by 𝐇^*_fk = 𝐑̂^c⊺_t_k and

𝐇^*_xk = [ ⋯ ⋯ ⋯ 𝐀 ⋯ 𝐁 ⋯ ⋯ ]

where 𝐀 = [ -𝐑̂^c⊺_t_k S(𝐟̂), 0_3,3 ] and 𝐁 = [ 𝐑̂^c⊺_t_k S(𝐟̂), -𝐑̂^c⊺_t_k ].
Due to the absence of the landmark covariance, RI-MSCKF also uses the null-space trick on (<ref>), and the resulting residual equation

𝐳̃'_k := 𝐇_fk^⊥ 𝐳̃_k ≈ 𝐇_fk^⊥ 𝐇_xk 𝐞_n+1|n + 𝐇_fk^⊥ 𝐕_k =: 𝐇'_xk 𝐞_n+1|n + 𝐕'_k

is employed for the update. RI-MSCKF does not need any extra computation to maintain the expected invariance, while the observability-constrained algorithms need to explicitly project the measurement Jacobians onto the observable space.

§ SIMULATION AND EXPERIMENT

§.§ Simulation Results

In order to validate the theoretical contributions of this paper, we perform 50 Monte Carlo simulations and compare RI-MSCKF to MSCKF in a Visual-Inertial Odometry (VIO) scenario without loop closure. Consider a robot equipped with an IMU and a camera that moves along a specific trajectory (average speed 3 m/s) with sufficient 6-DOF motion, shown as the blue circles in Fig. <ref>. In this environment, 675 landmarks are distributed on the surface of a cylinder with radius 6.5 m and height 4 m, shown as the green stars in Fig. <ref>. In this simulated environment, the camera is able to observe sufficiently overlapping sets of landmarks between consecutive frames. The standard deviation of the camera measurements is set to 1.5 pixels. The IMU noise covariance 𝐐 is set to Diag( 0.008^2 𝐈_3, 0.0004^2 𝐈_3, 0.019^2 𝐈_3, 0.05^2 𝐈_3 ) (in SI units). In each round of Monte Carlo simulation, the initial estimate is set to the ground truth, and the IMU and camera measurements are generated from the same trajectory with random noises.

The maximum number of camera poses in the system state of RI-MSCKF and MSCKF is set to 10. For robust estimation, we use landmarks in the update step only when they are captured more than 5 times by the cameras within the current system state. The results of the 50 Monte Carlo simulations are plotted in Fig. <ref>. We use the root mean square error (RMS) and the average normalized estimation error squared (NEES) to evaluate accuracy and consistency, respectively. Note that the ideal NEES of orientation is 3 and that of pose is 6. As shown in Fig. <ref>, RI-MSCKF clearly outperforms MSCKF, especially in terms of consistency. This phenomenon can be explained by the fact that RI-MSCKF is invariant to stochastic rotations about the gravitational direction, and thus gains less unexpected information than MSCKF. In addition, the RMS of orientation and position of both filters increases with time, because loop closure is turned off in this simulation.

§.§ Preliminary Experiment

In order to validate the performance of the proposed RI-MSCKF algorithm in practical environments, we evaluate the algorithm on the EuRoC dataset <cit.>, which was collected on board a micro aerial vehicle in indoor environments. Lacking a delicately designed front-end that handles feature extraction and tracking perfectly, we selected the sequence V2_01_easy for this section, on which features can be tracked correctly, making it well suited for comparing our algorithm against the MSCKF algorithm. In this preliminary experiment, we designed a front-end based on ORB-SLAM <cit.>, keeping only the feature-tracking sub-module. Since map points are not known, a new keyframe is inserted once n_frames frames have passed since the insertion of the last keyframe. One sample image with the tracked landmarks is shown in Fig. <ref>. The uncertainty of the IMU sensor is set as instructed in the dataset.
The maximum number of camera poses in the system state is set to 10, and the minimum number of observations required for a landmark is set to 5. Fig. <ref> shows the estimated trajectories using MSCKF and RI-MSCKF. As shown in Fig. <ref> and indicated in Fig. <ref>, RI-MSCKF achieves a position accuracy similar to that of MSCKF while avoiding the drift in the last few frames of the sequence; moreover, RI-MSCKF yields significantly better results in terms of orientation estimation accuracy than the original MSCKF algorithm. Even without a robust front-end that handles feature tracking perfectly, this preliminary experiment is able to demonstrate the superiority of RI-MSCKF over the MSCKF algorithm in terms of estimation accuracy.

§ CONCLUSION AND FUTURE WORK

In this work, we proposed the RIEKF-VINS algorithm and stressed that the consistency of a filter is tightly coupled with the invariance property. We proved that RIEKF-VINS has the expected invariance property, while ConEKF-VINS does not satisfy this property. We also provided methods to check whether a general EKF based filter has the invariance properties. After the theoretical analysis, we integrated RIEKF-VINS into the MSCKF framework such that the resulting RI-MSCKF algorithm achieves better consistency relative to the original MSCKF. Monte Carlo simulations illustrated the significantly improved performance of RI-MSCKF, especially in terms of consistency. The real-world experiments also validated its improved accuracy. Future work includes improving the front-end to achieve more robust estimation. We will also compare RIEKF-VINS to the observability-constrained algorithms in both simulations and real-world experiments.

§.§ Some Formulas

exp(𝐲) = 𝐈_3 + (sin ‖𝐲‖ / ‖𝐲‖) S(𝐲) + ((1 - cos ‖𝐲‖) / ‖𝐲‖^2) S^2(𝐲)

J_r(𝐲) = 𝐈_3 - ((1 - cos ‖𝐲‖) / ‖𝐲‖^2) S(𝐲) + ((‖𝐲‖ - sin ‖𝐲‖) / ‖𝐲‖^3) S^2(𝐲)

for 𝐲 ∈ ℝ^3. The notation ⊕_imu is defined as

𝐗̅ ⊕_imu 𝐞_I = ( exp(𝐞_θ) 𝐑, exp(𝐞_θ)𝐯 + J_r(-𝐞_θ)𝐞_v, exp(𝐞_θ) 𝐩 + J_r(-𝐞_θ)𝐞_p, 𝐛_g + 𝐞_bg, 𝐛_a + 𝐞_ba )

where 𝐗̅ = (𝐑,𝐯,𝐩,𝐛_g,𝐛_a) and 𝐞_I = [ 𝐞_θ, 𝐞_v, 𝐞_p, 𝐞_bg, 𝐞_ba ] ∈ ℝ^15. The notation ⊕_pose is defined as

𝐂 ⊕_pose 𝐞^i_c = ( exp(𝐞^i_θ) 𝐑, exp(𝐞^i_θ) 𝐩 + J_r(-𝐞^i_θ)𝐞^i_p )

where 𝐂 = (𝐑,𝐩) ∈ 𝕊𝔼(3) and 𝐞^i_c = [ 𝐞^i_θ, 𝐞^i_p ] ∈ ℝ^6.

§.§ Proof of Theorem <ref>

Here we only prove the sufficient condition. It is assumed that this filter satisfies: for each deterministic unobservable transformation 𝒯_𝐃, there exists 𝐖_𝐃 such that 𝒯_𝐃(𝐗 ⊕ 𝐞) = 𝒯_𝐃(𝐗) ⊕ 𝐖_𝐃𝐞. For any estimate (𝐗̂_i,𝐏_i) at time-step i, we have another estimate (𝐘̂_i,𝐏y_i) = ( 𝒯_𝐃(𝐗̂_i), 𝐖_𝐃𝐏_i𝐖_𝐃^⊺ ) after applying the deterministic transformation 𝒯_𝐃. After one-step propagation, we have (𝐗̂_i+1|i,𝐏_i+1|i) and (𝐘̂_i+1|i,𝐏y_i+1|i), where 𝐘̂_i+1|i = 𝒯_𝐃(𝐗̂_i+1|i) and 𝐏𝐲_i+1|i = 𝐖_𝐃𝐏_i+1|i𝐖_𝐃^⊺. Note that 𝐇y_i+1 = 𝐇_i+1𝐖_𝐃^-1, and then it is easy to obtain 𝐊y = 𝐖_𝐃𝐊, resulting in the mean estimate 𝐘̂_i+1 as below:

𝐘̂_i+1 = 𝐘̂_i+1|i ⊕ 𝐊_y 𝐳̃ = 𝒯_𝐃(𝐗̂_i+1|i) ⊕ 𝐖_𝐃𝐊𝐳̃ = 𝒯_𝐃( 𝐗̂_i+1|i ⊕ 𝐊𝐳̃ ) = 𝒯_𝐃(𝐗̂_i+1).

The covariance matrix after the update becomes 𝐏y_i+1 = (𝐈 - 𝐊_y𝐇y_i+1) 𝐏y_i+1|i = 𝐖_𝐃𝐏_i+1𝐖_𝐃^⊺. In sum, 𝐘̂_i+1 = 𝒯_𝐃(𝐗̂_i+1) and 𝐏y_i+1 = 𝐖_𝐃𝐏_i+1𝐖_𝐃^⊺. By mathematical induction, we can see that 𝐘̂_n = 𝒯_𝐃(𝐗̂_n) for n ≥ i, and hence the output of this filter is invariant under any deterministic transformation 𝒯_𝐃.

§.§ Proof of Theorem <ref>

Here we only prove the sufficient condition. It is assumed that this filter satisfies 𝐇_n+i+1 Φ_n+i Φ_n+i-1 ⋯ Φ_i 𝐍_i = 0, ∀ n and i ≥ 0. For any estimate (𝐗̂_i,𝐏_i) at time-step i, we have another estimate (𝐘̂_i,𝐏y_i) = ( 𝐗̂_i, 𝐏_i + 𝐍_i Σ𝐍_i^⊺ ) after applying the stochastic identity transformation 𝒯_𝐒, where 𝐒 = (0, ϵ) and ϵ ∼ 𝒩(0, Σ).
After one-step propagation, we have (𝐗̂_i+1|i,𝐏_i+1|i) and (𝐘̂_i+1|i,𝐏y_i+1|i) = ( 𝐗̂_i+1|i, 𝐏_i+1|i + Φ_i𝐍_i Σ𝐍_i^⊺Φ_i^⊺ ). Since 𝐇_i+1Φ_i𝐍_i = 0, we can easily get (𝐘̂_i+1,𝐏y_i+1) = ( 𝐗̂_i+1, 𝐏_i+1 + Φ_i𝐍_i Σ𝐍_i^⊺Φ_i^⊺ ). By mathematical induction, we have (𝐘̂_n,𝐏y_n) = ( 𝐗̂_n, 𝐏_n + Φ_n⋯Φ_i𝐍_i Σ𝐍_i^⊺Φ_i^⊺⋯Φ_n^⊺ ). Therefore, the output of this filter is invariant under any stochastic identity transformation.
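To make the algebra behind this proof tangible, the following self-contained sketch (our own illustration, not the authors' code) numerically checks the two identities it relies on for RIEKF-VINS, namely Φ_i𝐍_i = 𝐍_i+1 (here equal to 𝐍_i) and 𝐇_i+1Φ_i𝐍_i = 0. The starred blocks of Φ_i are filled with random values to show that they play no part; the 18-dimensional error-state layout and a gravity vector along the z axis are the only assumptions.

import java.util.Random;

// Sketch: numerically verify Phi*N = N and H*(Phi*N) = 0 for the
// RIEKF-VINS unobservable directions (yaw about gravity + position).
// Error-state layout: [e_theta, e_v, e_p, e_bg, e_ba, e_f], 18 dims.
public class InvarianceCheck {
  static final double[][] I3 = { {1, 0, 0}, {0, 1, 0}, {0, 0, 1} };

  static double[][] mul(double[][] a, double[][] b) {
    double[][] c = new double[a.length][b[0].length];
    for (int i = 0; i < a.length; i++)
      for (int k = 0; k < b.length; k++)
        for (int j = 0; j < b[0].length; j++)
          c[i][j] += a[i][k] * b[k][j];
    return c;
  }

  // S(v): skew-symmetric matrix such that S(v)w = v x w
  static double[][] skew(double[] v) {
    return new double[][] {
      {0, -v[2], v[1]}, {v[2], 0, -v[0]}, {-v[1], v[0], 0} };
  }

  // Writes the 3x3 block s*b into m at row r, column c.
  static void set(double[][] m, int r, int c, double[][] b, double s) {
    for (int i = 0; i < 3; i++)
      for (int j = 0; j < 3; j++) m[r + i][c + j] = s * b[i][j];
  }

  public static void main(String[] args) {
    Random rnd = new Random(1);
    double dt = 0.01;
    double[] g = {0, 0, -9.81};

    // Phi_i: random entries stand in for the '*' blocks; then the
    // first, third and sixth block columns given in the paper are set.
    double[][] phi = new double[18][18];
    for (int i = 0; i < 18; i++)
      for (int j = 0; j < 18; j++) phi[i][j] = rnd.nextGaussian();
    for (int r = 0; r < 18; r += 3)
      for (int c : new int[] {0, 6, 15}) set(phi, r, c, new double[3][3], 1);
    set(phi, 0, 0, I3, 1);
    set(phi, 3, 0, skew(g), dt);
    set(phi, 6, 0, skew(g), dt * dt / 2);
    set(phi, 6, 6, I3, 1);
    set(phi, 15, 15, I3, 1);

    // N_i as given in the paper (18 x 4).
    double[][] n = new double[18][4];
    for (int i = 0; i < 3; i++) n[i][0] = g[i];
    set(n, 6, 1, I3, 1);
    set(n, 15, 1, I3, 1);

    // H = dPi * [0_{3,6} -R^T 0_{3,6} R^T]; the projection Jacobian
    // dPi is omitted since it cannot make a zero product nonzero.
    double a = 0.7;  // arbitrary rotation about the z axis
    double[][] rt = { {Math.cos(a), Math.sin(a), 0},
                      {-Math.sin(a), Math.cos(a), 0}, {0, 0, 1} };
    double[][] h = new double[3][18];
    set(h, 0, 6, rt, -1);
    set(h, 0, 15, rt, 1);

    double[][] phiN = mul(phi, n);
    double e1 = 0, e2 = 0;
    for (int i = 0; i < 18; i++)
      for (int j = 0; j < 4; j++)
        e1 = Math.max(e1, Math.abs(phiN[i][j] - n[i][j]));
    for (double[] row : mul(h, phiN))
      for (double x : row) e2 = Math.max(e2, Math.abs(x));
    System.out.println("max |Phi*N - N| = " + e1);  // prints 0.0
    System.out.println("max |H*Phi*N|   = " + e2);  // prints 0.0
  }
}

Both residuals vanish regardless of the random entries, which is exactly why the starred blocks could be omitted in the proof above.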
CRSTIP - An Assessment Scheme for Security Assessment Processes

Arthur-Jozsef Molnar, Info World, Bucharest, Romania, arthur.molnar@infoworld.ro
Jürgen Großmann, Fraunhofer FOKUS, Berlin, Germany, juergen.grossmann@fokus.fraunhofer.de

Complex networked systems are an integral part of today's support infrastructures. Due to their importance, these systems increasingly become the target of cyber-attacks, suffering a notable number of security incidents. They are also subject to regulation by national and international legislation. An operator of such an infrastructure or system is responsible for ensuring its security and correct functioning in order to satisfy customers. In addition, the entire process of risk and quality control needs to be efficient and manageable. This short paper introduces the Compliance, Risk Assessment and Security Testing Improvement Profiling (CRSTIP) scheme. CRSTIP is an evaluation scheme that enables assessing the maturity of security assessment processes, taking into consideration the systematic use of formalisms, integration and tool support in the areas of compliance assessment, security risk assessment and security testing. The paper describes the elements of the scheme and their application to one of the case studies of the RASEN research project.

Keywords: compliance assessment, risk assessment, security testing

§ INTRODUCTION

Researchers within the RASEN project [The FP7 RASEN project, http://www.rasenproject.eu] are developing methods dedicated to supporting companies and organizations in undertaking risk analysis for large-scale, networked systems. These methods cover security risk assessments on different levels of abstraction and from different perspectives. Compliance assessment specifically addresses the compliance of products and processes for which regulations are in effect. Security risk assessment deals with the concise assessment of security threats, estimating the probabilities and consequences for a set of technical or business-related assets. Finally, security testing can be used to examine the target under assessment, be it an organization or a system, for actual weaknesses or vulnerabilities. While the industry demands integrative approaches that cope with security as a whole, currently no established process exists that sufficiently emphasizes the systematic integration of compliance assessment, security risk assessment and security testing. Within the RASEN project we aim to close this gap by developing an integrated security assessment framework based on compliance assessment, security risk assessment and security testing. The resulting framework will be evaluated using three industrial case studies.

Currently, there exist a number of methods to evaluate the maturity and quality of test and assessment processes. The best-known representative is the Test Process Improvement (TPI) scheme and its successor TPI NEXT <cit.>. Both schemes are trademarks of SOGETI <cit.> and have been applied to assess industrial processes across the world. Another approach is the Test Maturity Model (TMM) and its successor, the Test Maturity Model integration (TMMi) <cit.>.
However, both approaches emphasize testing and do not sufficiently cover the aspects of compliance assessment and risk assessment required to assess the RASEN approach.

§ THE CRSTIP APPROACH FOR PROCESS EVALUATION

The CRSTIP (Compliance, Risk Assessment and Security Testing Improvement Profiling) evaluation scheme can be used to assess the readiness level of an organization, process or system with regard to four key areas: legal and compliance assessment, security risk assessment, security testing, and tool support and integration. CRSTIP was initially used to assess the baseline of the RASEN use cases, that is, their status quo before applying the techniques and tools developed within the project. It has additionally been used to express expectations regarding the progress within the four key areas for each of the case studies during the project's lifetime. The scheme will be used again in order to document the actual progress achieved after deploying the RASEN methodology and tooling.

CRSTIP provides a simple, straightforward assessment of the target's current positioning within the CRSTIP key areas. The approach is based on the general ideas of TMMi and TPI, and on previous work undertaken within the ITEA2 DIAMONDS project [The ITEA2 DIAMONDS project, http://www.itea2-diamonds.org], where it was limited to assessing progress in selected key areas of security testing <cit.>.

For each of the key areas, we defined a four-level performance scale that can be used to evaluate security assessment processes. Within each area, levels with a higher number represent an improvement over lower levels. We plan to further refine CRSTIP within our project so that it can serve as a liaison between project efforts and organizations seeking to improve their standing in the key areas addressed by RASEN. This paper details the CRSTIP key areas and levels, showing their initial application to the Medipedia system.

The key areas and their levels are detailed in the following subsections.

§.§ Key area - Legal and compliance assessment

This key area refers to the overall process that is employed with the objective of adhering to the requirements of laws, to industry and organizational standards and codes, to principles of good governance, and to accepted community and ethical standards. The overall process should support, to the extent possible, the documentation of compliance with these laws, rules and norms. The levels of this key area are:

Level 1: Ad-hoc. The compliance assessment is unstructured, does not use a defined compliance process, and compliance decisions are made primarily on an event-driven basis.

Level 2: Checklist-based. The checklist-based compliance assessment uses a checklist to answer a set of standard questions or to tick checkboxes.

Level 3: Systematic. A systematic compliance assessment follows a structured and planned approach, with a defined process and structured documentation of compliance. Generally, the process involves the identification of compliance requirements, the evaluation of compliance issues, and the taking of measures to ensure compliance.

Level 4: Systematic and risk-driven. A systematic and risk-driven compliance assessment involves a defined process for risk-driven compliance, where requirements are prioritized based on their risks. This approach is supported by a systematic documentation that enables the mapping of different risks and controls to relevant compliance requirements.
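Since a CRSTIP evaluation is essentially a small data model (four key areas, each rated on the four-level scale just introduced), a profile can be captured in a few lines of code. The following sketch is purely illustrative and not part of the RASEN tooling; all names are our own.

import java.util.EnumMap;
import java.util.Map;

// Hypothetical encoding of a CRSTIP profile: four key areas,
// each rated on the 1..4 performance scale described above.
public class CrstipProfile {
  enum KeyArea { COMPLIANCE, RISK_ASSESSMENT, SECURITY_TESTING, TOOLING }

  private final Map<KeyArea, Integer> levels = new EnumMap<>(KeyArea.class);

  void rate(KeyArea area, int level) {
    if (level < 1 || level > 4)
      throw new IllegalArgumentException("levels range from 1 to 4");
    levels.put(area, level);
  }

  // Lists the areas in which 'target' improves over this baseline.
  void reportImprovements(CrstipProfile target) {
    for (KeyArea a : KeyArea.values()) {
      int before = levels.getOrDefault(a, 1);
      int after = target.levels.getOrDefault(a, 1);
      if (after > before)
        System.out.println(a + ": level " + before + " -> " + after);
    }
  }
}

Comparing a baseline profile against a target profile with reportImprovements then mirrors the "before/after RASEN" comparison performed for the Medipedia case study later in this paper.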
§.§ Key area - Security risk assessment

Risk assessment is the overall process of risk identification, estimation and evaluation. Risk identification is the process of finding, recognizing and describing risks. This involves identifying sources of risk, areas of impact and events, together with their causes and potential consequences. Risk estimation is the process of comprehending the nature of risk and determining its level. Finally, risk evaluation is the process of comparing the results of risk estimation with risk criteria to determine whether the magnitude of risk is acceptable. Risk evaluation assists in decisions about risk treatment. The levels of this key area are:

Level 1: Checklist. Risk assessment mainly consists of answering a sequence of questions or filling in a form.

Level 2: Qualitative. Risk assessment is based on qualitative risk values. The value descriptions or distinctions are based on some quality or characteristic rather than on some quantity or measured value.

Level 3: Quantitative. Risk assessment is based on quantitative values. The values are based on some quantity or number, e.g. a measurement, rather than on some quality.

Level 4: Real-time. Risk assessment is done in real time, based on an underlying, computerized monitoring infrastructure.

§.§ Key area - Security testing

Security testing is used to empirically check software implementations with respect to their security properties and resistance to attack. Functional security testing is used to check the functionality, efficiency and availability of the security features of a dedicated test item. Security vulnerability testing directly addresses the identification and discovery of system vulnerabilities. It targets the identification of design flaws and implementation faults that can harm the availability, confidentiality and integrity of the test item. The levels of this key area are:

Level 1: Unstructured. Unstructured security testing is performed either by the development team or the testing team, without planning or documentation. The tests are intended to be run only once, unless a defect is discovered. The testing is neither systematic nor planned, and defects found using this method may be harder to reproduce.

Level 2: Planned. Planned security testing is performed either by the development team or the testing team after a structured test plan has been elaborated. A test plan documents the scope, approach, and resources that will be used for testing.

Level 3: Risk-based. Security tests are planned and executed, either by the development team or by the testing team. The planning of security testing is done on the basis of the security risk assessment, using impact estimations or likelihood values to focus the testing process.

Level 4: Continuous risk-based. Continuous risk-based security testing is a process of continuously monitoring and testing a system with respect to potential vulnerabilities. Security risk analysis results are still used to focus the security testing and optimize resource planning. Any evolution of the system, of its environment or of the identified threats leads to updated security tests, so that vulnerabilities can be detected throughout the whole life cycle of the test item.

§.§ Key area - Tool support and integration

This key area describes the degree of tool support and integration available for the above-mentioned areas. Typically, tools work on their own data structures, which are well suited to the task that needs to be performed with or by the tool.
Tool integration is the ability of tools to cooperate by exchanging data or sharing a common user interface. The levels of this key area are:

Level 1: None. No tool support is available in any of the above-mentioned key areas.

Level 2: Stand-alone. Tools are available for some of the previously mentioned key areas. However, the tools are not integrated: they neither exchange data nor share the same user interface.

Level 3: Partially integrated. Tools are available for some of the above-mentioned key areas. Tool integration is based on point-to-point coalitions between tools. Point-to-point coalitions are often used in small and ad-hoc environments, but run into problems with more tools and larger environments, as they do not scale.

Level 4: Integrated. Tools are available for nearly all the key areas. Tool integration is based on central integration platforms and repositories that provide a common set of interfaces and data definitions to be exchanged.

§ THE MEDIPEDIA CASE STUDY

Medipedia [http://www.medipedia.ro/] is an eHealth web portal developed by Info World that differentiates itself on the market by allowing users to build and manage their personal electronic healthcare record. As a complex networked software system, Medipedia has over 36,000 active users and must fulfil legal requirements with regard to processing highly sensitive personal data, such as medical analysis results and diagnostic history. As a case study system for RASEN, we applied CRSTIP to Medipedia in the following way: first, we evaluated the baseline, shown in Figure <ref> as "Before RASEN"; then, based on preliminary project results, we estimated the benefits of implementing RASEN, shown on the same figure.

As the system processes sensitive customer data, the key areas already show a degree of maturity. However, it is clear that a structured approach will benefit Medipedia in virtually all of them. First of all, while the system is legally compliant, a structured approach enables Info World to better prepare for upcoming regulations such as the General Data Protection Regulation <cit.>, and facilitates entering new markets governed by different regulations. Furthermore, while the system undergoes planned security testing and periodic risk assessment, there is no interplay between these activities. A structured risk assessment process that guides testing, and that can be updated using test results, facilitates bringing new features to market faster. The final key area concerns software tooling, where Info World recognizes the advantages that supportive tooling would bring to its risk assessment and testing processes.

§ CONCLUSION AND OUTLOOK

CRSTIP was developed as an objective analysis and evaluation scheme for the research and development within RASEN. So far, we have used it to assess the case studies' baseline and to outline progress expectations for the end of the project. We believe that, in its current form, CRSTIP is a useful tool which stakeholders can use to assess a target organization, process or product. Moreover, as shown above, the scheme can be used to gain an understanding of which areas are most suitable for further investment, and of how the levels in the different key areas relate to or require each other.

Furthermore, we envision using CRSTIP as a dissemination tool for RASEN technologies, as it allows identifying maturity levels with respect to key security and compliance areas.
Ideally, a concise description should be available for each of the key areas, denoting the techniques and tools that can be used to drive the improvement, as well as the requirements on other key areas that are preconditions for improving from one level to the next. As future work, our desire is to provide a web-based implementation where users are able to fill in their assessment and obtain information regarding the requirements for moving to the next level in their areas of interest.

References:
[DIAMONDS] The ITEA2 DIAMONDS project, http://www.itea2-diamonds.org, 2013.
[RASEN] The FP7 RASEN project, http://www.rasenproject.eu, 2014.
[TPINext] R. Marselis and R. van der Ven, "TPI NEXT clusters for CMMi", http://www.tmap.net/sites/tmap.net/files/attachments/TPI___NEXT_clusters_for_CMMi_0.pdf, 2009.
[TMMI] R. van Veenendaal, "Test Maturity Model integration", http://www.tmmi.org/pdf/TMMi.Framework.pdf, 2012.
[SOGETI] Website of SOGETI, http://www.sogeti.nl/, 2009.
[GDPR] Reform of data protection rules, http://ec.europa.eu/justice/newsroom/data-protection/news/120125_en.htm, 2012.
Laboratoire d'informatique formelle, Université du Québec à Chicoutimi, Canada

Many problems in Computer Science can be framed as the computation of queries over sequences, or “streams” of data units called events. The field of Complex Event Processing (CEP) relates to the techniques and tools developed to efficiently process these queries. However, most CEP systems developed so far have concentrated on relatively narrow types of queries, which consist of sliding windows, aggregation functions, and simple sequential patterns computed over events that have a fixed tuple structure. Many of them boast high throughput but, in counterpart, are difficult to set up and cumbersome to extend with user-defined elements.

This paper describes a variety of use cases taken from real-world scenarios that present features seldom considered in classical CEP problems. It also provides a broad review of current solutions that includes tools and techniques going beyond typical surveys on CEP. From a critical analysis of these solutions, design principles for a new type of event stream processing system are exposed. The paper proposes a simple, generic and extensible framework for the processing of event streams of diverse types; it describes in detail a stream processing engine, called BeepBeep, that implements these principles. BeepBeep's modular architecture, which borrows concepts from many other systems, is complemented with an extensible query language, called . The end result is an open, versatile, and reasonably efficient query engine that can be used in situations that go beyond the capabilities of existing systems.

Keywords: event processing, software testing, query languages, runtime verification

import org.wso2.siddhi.core.ExecutionPlanRuntime;
import org.wso2.siddhi.core.SiddhiManager;
import org.wso2.siddhi.core.event.Event;
import org.wso2.siddhi.core.query.output.callback.QueryCallback;
import org.wso2.siddhi.core.stream.input.InputHandler;

public class SiddhiExample {
  public static void main(String[] args) throws Exception {
    SiddhiManager man = new SiddhiManager();
    String query = "foo";
    ExecutionPlanRuntime epr = man.createExecutionPlanRuntime(query);
    epr.addCallback("query", new MyCallback());
    InputHandler inputHandler = epr.getInputHandler("trace");
    epr.start();
    epr.shutdown();
    man.shutdown();
  }
}

class MyCallback extends QueryCallback {
  public void receive(long ts, Event[] in_e, Event[] rm_e) {
    // Process output event here
  }
}

import java.io.File;
import java.util.Scanner;
import com.espertech.esper.client.*;

public class EsperExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.addEventType("TupleEvent", TupleEvent.class.getName());
    EPServiceProvider epService =
      EPServiceProviderManager.getProvider("MyURI", conf);
    epService.initialize();
    EPStatement statement =
      epService.getEPAdministrator().createEPL("foo");
    statement.addListener(new MyListener());
    Scanner scanner = new Scanner(new File("trace.csv"));
    while (scanner.hasNextLine()) {
      TupleEvent e = new TupleEvent(scanner.nextLine());
      epService.getEPRuntime().sendEvent(e);
    }
    scanner.close();
  }
}

class TupleEvent {
  int a;
  int b;
  public TupleEvent(String line) {
    String[] parts = line.trim().split(",");
    a = Integer.parseInt(parts[0]);
    b = Integer.parseInt(parts[1]);
  }
  public int getA() { return a; }
  public int getB() { return b; }
}

class MyListener implements UpdateListener {
  public void update(EventBean[] in_e, EventBean[] ol_e) {
    // Process output event here
  }
}

Throughput of each tool on queries S1 to S7 (cells are left blank where no figure is available):

Query           MySQL  BeepBeep  Esper   Siddhi
S1              -      390396    500847  546887
S2              -      543113    784530  427206
S3              -      520514    735082  512954
S4              -      473715    306207  473939
S5              -      82295     166974  391988
S6              -      319479    657733  501652
S7              -      58105     138060  167100
Temporal query  -      -         -       -

Relative throughput, normalized so that the slowest tool on each query equals 1.0:

Query  MySQL  BeepBeep   Esper      Siddhi
S1     -      1.0        1.2829204  1.400852
S2     -      1.271314   1.8364209  1.0
S3     -      1.0147382  1.4330369  1.0
S4     -      1.5470417  1.0        1.5477732
S5     -      1.0        2.028969   4.7632055
S6     -      1.0        2.0587676  1.570219
S7     -      1.0        2.3760433  2.8758283

Tool      Size
BeepBeep  317
Esper     5870
SASE      183
Siddhi    7140

Number of pipes  Throughput
0                1878992
4                649350
8                380430
12               289318
16               205829
20               159708
24               133944
28               123186
32               103111
36               93477
40               83450
44               72442
48               71552
52               66963
56               60245
60               54637

Categories and subject descriptors: D.2.2 [Software Engineering]: Design Tools and Techniques; D.2.4 [Software Engineering]: Software/Program Verification; H.3.5 [Information Storage and Retrieval]: Online Information Services (web-based services). General terms: Theory, Verification.

§ INTRODUCTION

Event streams have become an important part of the mass of data produced by computing systems. They can be generated by a myriad of sources such as sensors <cit.>, business process logs <cit.>, instrumented software <cit.>, financial transactions <cit.>, healthcare systems <cit.>, and network packet captures <cit.>. The ability to collect and process these event streams can be put to good use in fields as diverse as software testing, data mining, and compliance auditing.

Event stream processing typically involves computations that go beyond the evaluation of simple functions on individual events. Of prime importance is the possibility to perform correlations between events, either at multiple moments in time within a single stream, or even between events taken from different event streams. The term Complex Event Processing (CEP) has been coined to refer to computations of this nature. One of the goals of CEP is to create aggregated (i.e. “complex”) events using data fetched from one or more lower-level events <cit.>. This computation can be executed in cascade, with the output streams of one process becoming the input streams of the next, leading to events of increasingly higher levels of abstraction.
Section <ref> starts this paper by presenting a wide range of examples taken from domains as varied as bug detection in video games and network intrusion detection.This potent concept has spawned an impressive amount of work in the past twenty years. As we will see in Section <ref>, there exist literally dozens of competing systems claiming the CEP label, ranging from academic proofs-of-concept to commercial cloud frameworks such as Apache Spark or Microsoft Azure. These systems are based on a commensurate number of research papers, technical reports and buzzword-laden whitepapers introducing a plethora of incompatible formalizations of the problem.This reveals that CEP has never been a single problem, but rather a family of related problems sharing a relatively blurry common ground.Many of these systems and frameworks, however, have in common the fact that they deserve the epithet “complex”. They often rely on an intricate definition of seemingly simple concepts; some of them don't even state them formally, making their available implementation a de facto specification. Many of their built-in query languages are quirky outgrowths of SQL, whose syntax seldom preserves backward-compatibility for operations identical to those found in relational databases. Almost none of them provides comprehensive means for extending their language syntax with user-defined constructs. Finally, some suffer from high setup costs, requiring hours if not days of arcane configuration editing and boilerplate code to run even the smallest example.While it is obvious that convincing use cases motivate the existence of these systems, the current state of things leaves a potential user between two uncomfortable extremes: embrace a complex Event Processing system, with all its aforementioned shortcomings, or do without and fall back to low-level scripting languages, such as Perl or Python, to write menial trace-crunching tasks. What seems to be missing is a “Simple Event Processing” engine, in the same way that a spreadsheet application like Microsoft Excel is often a satisfactory middle ground between a pocket calculator and a full-blown accounting system. Such a system should provide higher abstraction than hand-written scripts, an easy to understand computational model, zero-configuration operation and reasonable performance for light- to medium-duty tasks.This paper presents a detailed description of such a Simple Event Processing engine, called BeepBeep. In Section <ref>, we first describe the fundamental design principles behind the development of this system. Some of these principles purposefully distance themselves from trends followed by current CEP solutions, and are aimed at making the intended system both simpler and more versatile. Section <ref> then formally describes BeepBeep's computational model. This simple formalization completely describes the system's semantics, making it possible for alternate implementations to be independently developed. One of the key features of BeepBeep is its associated query language, called , which is described in Section <ref>. Substantial effort has been put in makingsimple and coherent; in accordance to BeepBeep's design principles, it strives towards relational transparency, meaning that queries that perform computations similar to relational database operations are written in a syntax that is backwards-compatible with SQL. Section <ref> then describes the various means of extending BeepBeep's built-in functionalities. 
A user can easily develop new event processing units in a handful of lines of code, and most importantly, define arbitrary grammatical extensions toto use these custom elements inside queries. Extensions can be bundled in dynamically-loaded packages called palettes; we describe a few of the available palettes, allowing BeepBeep to manipulate network captures, tuples, plots, and temporal logic operators, among others.Equipped with these constructs, Section <ref> then proceeds to showcase BeepBeep's functionalities. An experimental comparison of BeepBeep's performance with respect to a selection of other CEP engines is detailed in Section <ref>. To the best of our knowledge, this is the first published account of such an empirical benchmark of CEP engines on the same input data. These experiments reveal that, on a large number of examples, BeepBeep's versatility makes it able to tackle problems difficult to express with existing solutions; moreover, its simple formal foundations result in queries that are both easy to read, and are computed with reasonable throughput. § USE CASES FOR EVENT STREAM PROCESSINGComplex Event Processing (CEP) can loosely be defined as the task of analyzing and aggregating data produced by event-driven information systems <cit.>. A key feature of CEP is the possibility to correlate events from multiple sources, occurring at multiple moments in time. Information extracted from these events can be processed, and lead to the creation of new, “complex” events made of that computed data. This stream of complex events can itself be used as the source of another process, and be aggregated and correlated with other events.Event processing distinguishes between two modes of operation. In online (or “streaming”) mode, input events are consumed by the system as they are produced, and output events are progressively computed and made available. It is generally assumed that the output stream is monotonic: once an output event is produced, it cannot be “taken back” at a later time. In contrast, in offline (or “batch”) mode, the contents of the input streams are completely known in advance (for example, by being stored on disk or in a database). Whether a system operates online or offline sometimes matters: for example, offline computation may take advantage of the fact that events from the input streams may be indexed, rewinded or fast-forwarded on demand. Recently, the hybrid concept of “micro-batching” has been introduced in systems like Apache Spark Streaming (cf. Section <ref>). It is a special case of batch processing with very small batch sizes.Guarantees on the delivery of events in a CEP system can also vary. “At most once” delivery entails that every event may be sent to its intended recipient, but may also be lost. “At least once” delivery ensures reception of the event, but at the potential cost of duplication, which must then be handled by the receiver. In between is perfect event delivery, where reception of each event is guaranteed without duplication. These concepts generally matter only for distributed event processing systems, where communication links between nodes may involve loss and latency.In the following, we proceed to describe a few scenarios where event streams are produced and processed.§.§ Stock Ticker A recurring scenario used in CEP to illustrate the performance of various tools is taken from the stock market <cit.>. 
One considers a stream of stock quotes, where each event contains attributes such as a stock symbol, the price of the stock at various moments (such as its minimum price and closing price), as well as a timestamp. A typical stream of events of this nature is shown in Figure <ref>. This figure shows that events are structured as tuples, with a fixed set of attributes, each of which taking a scalar value. We shall see that many use cases have events structured as tuples, and that many event stream engines and query languages take for granted that events have a tuple structure.This simple example can be used to illustrate various queries that typically arise in an event stream processing scenario. A first, simple type of query one can compute over such a trace is called a snapshot query, such as the following: Get the closing price of msft for the first five trading days. The result of that query is itself a trace of tuples, much in the same way the relationalstatement on a table returns another table.A refinement of the snapshot query is the landmark query, whichreturns only events that satisfy some criterion, such as: Select all the days after the hundredth trading day, on which the closing price of msft has been greater than $50.This simple query highlights the fact that, in online mode, outputting a tuple may require waiting until more of the input trace is made available —and that waiting time is not necessarily bounded. In the worst case, msft may be the last stock symbol for which the price is known on a given day, and all events of that day must somehow be retained before knowing if they must be output in the result or discarded.In window queries, a computation is repeatedly made on a set of successive events. The size of that set is called the width of the window; the width is specified as a number of events or as a time interval. A sliding query is a particular case of window query where, after each computation, the window moves forward into the trace and a new set of successive events is considered. Often, as is the case in this example, the computation applied to the contents of the window is an aggregate function, such as a sum or an average. Systems such as LinQ <cit.> propose other types of window queries, such as the hopping query (also called a tumble window by <cit.>), where the window moves forward by exactly its width, such that no two windows ever overlap. For example: On every fifth trading day starting today, calculate the average closing price of msft for the five most recent trading days.Other windows include the latch, which maintains an internal state between window calculations. This is useful for calculations that are cumulative from the beginning of the stream.A join query involves the comparison of multiple events together. In the stock ticker example, a possible join query could be: For the five most recent trading days starting today, select all stocks that closed higher than msft on a given day.When computing the result of such a query, a tuple is added to the output result depending on its relationship with respect to the price of msft for the same day. In most CEP systems, this is done by an operation similar to theoperator in relational databases: the input stream is joined with itself, producing pairs of tuples (t_1,t_2) where t_1 belongs to the first “copy” of the stream, and t_2 belongs to the second. The join condition, in our example, is that the timestamps of t_1 and t_2 must be equal. 
Since traces are potentially infinite, join operations require bounds of some kind to be usable in practice; for example, the join operation may only be done on events of the last minute, or on a window of n successive events.

§.§ Medical Records Management

We now move to the field of medical record management, where events are messages expressed in a structured format called HL7 <cit.>. An HL7 message is a text string composed of one or more segments, each containing a number of fields separated by the pipe character (|). The possible contents and meaning of each field and each segment are defined in the HL7 specification. Figure <ref> shows an example of an HL7 message; despite its cryptic syntax, this message has a well-defined, machine-readable structure. However, it slightly deviates from the fixed tuple structure of our first example: although all messages of the same type have the same fixed structure, a single HL7 stream contains events of multiple types.

HL7 messages can be produced from various sources: medical equipment producing test results, patient management software where individual medical acts and procedures are recorded, drug databases, etc. For a given patient, the merging of all these various sources produces a long sequence of HL7 messages that can be likened to an event stream. The analysis of HL7 event traces produced by health information systems can be used, among other things, to detect significant unexpected changes in data values that could compromise patient safety <cit.>. In this context, a general rule, which can apply to any numerical field, identifies whenever a data value starts to deviate from its current trend: Notify the user when an observed data field is three standard deviations above or below its mean. We call such computations trend queries, as they relate a field in the current event to an aggregation function applied on the past values of that field. Trend queries can be made more complex, and correlate values found in multiple events, such as the following: Notify the user when two out of three successive data points lie more than two standard deviations from the mean on the same side of the mean line. Although our example query does not specify it, this aggregation can be computed over a window as defined in our previous use case, such as the past 100 events, or events of the past hour.

A slice query is the application of the same computation over multiple subsets (slices) of the input stream. In the present use case, assuming that the HL7 stream contains interleaved messages about multiple patients, a possible slice query could be to perform the outlier analysis mentioned above for each patient. Other applications of CEP in healthcare have been studied by Wang <cit.>.

§.§ Online Auction

Our next use case moves away from traditional CEP scenarios, and considers a log of events generated by an online auction system <cit.>. In such a system, when an item is being sold, an auction is created and logged using the start(i,m,p) event, where m is the minimum price the item named i can be sold for and p is the number of days the auction will last. The passing of days is recorded by a propositional endOfDay event; the period of an auction is considered over after p endOfDay events have been seen. The auction system generates a log of events similar to Figure <ref>.
Although the syntax differs, events of this scenario are similar to the HL7 format: multiple event types (defined by their name) each define a fixed set of attributes.

One could imagine various queries involving the windows and aggregation functions mentioned earlier. However, this scenario introduces special types of queries of its own. For example: Check that every bid of an item is higher than the previous one, and report to the user otherwise. This query expresses a pattern that correlates values in pairs of successive bid events: namely, the price value in any two bid events for the same item i must increase monotonically. Some form of slicing, as shown earlier, is obviously involved, as the constraint applies separately to each item; however, the condition to evaluate does not correspond to any of the query types seen so far. A possible workaround would be to add artificial timestamps to each event, and then to perform a join of the stream with itself on i: for any pair of bid events, one must then check that an increasing timestamp entails an increasing price. Unfortunately, in addition to being costly to evaluate in practice, stream joins are flatly impossible if the interval between two bid events is unbounded. A much simpler, and more practical, solution would be to simply "freeze" the last Price value of each item, and to compare it to the next value. For this reason, queries of that type are called freeze queries.

The previous query involved a simple sequential pattern of two successive bid events. However, the auction scenario warrants the expression of more intricate patterns involving multiple events and multiple possible orderings: List the items that receive bids outside of the period of their auction. As one can see, this query refers to the detection of a pattern that takes into account the relative positioning of multiple events in the stream: an alarm should be raised if, for example, a bid for some item i is seen before the start event for that same item i. Similarly, an occurrence of a bid event for i is also invalid if it takes place n endOfDay events after its opening, with n being the Duration attribute of the corresponding start event. We call such a query a lifecycle query, as the pattern it describes corresponds to a set of event sequences, akin to what a finite-state machine or a regular expression can express.

§.§ Electric Load Monitoring

The next scenario touches on the concept of ambient intelligence, which is a multidisciplinary approach that consists of enhancing an environment (room, building, car, etc.) with technology (e.g. infrared sensors, pressure mats, etc.), in order to build a system that makes decisions based on real-time information and historical data to benefit the users within this environment. A main challenge of ambient intelligence is activity recognition, which consists in taking raw data from sensors, filtering it, and then transforming it into relevant information that can be associated with a patient's activities of daily living, using Non-Intrusive Appliance Load Monitoring (NIALM) <cit.>. Typically, the parameters considered are the voltage, the electric current and the power (active and reactive). This produces a stream similar to Figure <ref>. An event consists of a timestamp, and numerical readings of each of the aforementioned electrical components.

The NIALM approach attempts to associate a device with a load signature extracted from a single power meter installed at the main electrical panel.
This signature is made of abrupt variations in one or more components of the electrical signal, whose amplitude can be used to determine which appliance is being turned on or off <cit.>. An example of a query in this context could be: Produce a "Toaster On" event whenever a spike of 1,000±200 W is observed on Phase 1 and the toaster is currently off. Again, this scenario brings its own peculiarities. Here, events are simple tuples of numerical values, and slicing is applied in order to evaluate each signal component separately; however, the complex, higher-level events to produce depend on the application of a peak detection algorithm over a window of successive time points. Moreover, elements of a lifecycle query can also be found: the current state of each appliance has to be maintained, as the same peak or drop may be interpreted differently depending on whether a device is currently operating or not.

While this scenario certainly is a case of event stream processing in the strictest sense of the term, it hardly qualifies as a typical CEP scenario, as per the available tools and their associated literature. As a matter of fact, we shall see later that no CEP engine directly provides the appropriate machinery to tackle a problem such as this one.

§.§ Runtime Verification

Our last use case considers event streams produced by the execution of a piece of software. Runtime verification is the process of observing a sequence of events generated by a running system and comparing it to some formal specification for potential violations <cit.>. It was shown how the use of a runtime monitor can speed up the testing phase of a system, such as a video game under development, by automating the detection of bugs when the game is being played <cit.>.

We take as an example the case of a game called Pingus, a clone of Psygnosis' Lemmings game series. The game is divided into levels populated with various kinds of obstacles, walls, and gaps. Between 10 and 100 autonomous, penguin-like characters (the Pingus) progressively enter the level from a trapdoor and start walking across the area. The player can give special abilities to certain Pingus, allowing them to modify the landscape to create a walkable path to the goal. For example, some Pingus can become Bashers and dig into the ground; others can become Builders and construct a staircase to reach over a gap. Figure <ref> shows a screenshot of the game.

<characters>
  <character>
    <id>0</id>
    <action>faller</action>
    <isalive>true</isalive>
    <position>
      <x>1121</x><y>393</y>
    </position>
    <velocity>
      <x>0</x><y>3.6</y>
    </velocity>
    <groundtype>earth</groundtype>
  </character>
  ...
</characters>

When running, the game updates the playing field about 150 times per second; each cycle of the game's main loop produces an XML snapshot of its state similar to the one shown in Figure <ref>. Hence, analyzing the execution of the game can be assimilated to processing the stream of individual XML events it generates. The abnormal execution of the game can be expressed as an event stream query, looking for a pattern corresponding to bugs in the game. An example of an incorrect execution pattern could be: Make sure that a walking Pingu that encounters a Blocker turns around and starts walking in the other direction. This query is special in at least two respects. First, the Pingus use case introduces a new type of event unseen in previous examples. Indeed, the XML events produced by the game are not fixed tuples of name-value pairs, but rather contain nested substructures.
Hence, in each event, the <character> element is repeated for as many Pingus as there are on the playing field; each such element contains the data (position, velocity, skills) specific to one character. It does not make sense, in this context, to talk about "the" ID inside an event, as it contains multiple such IDs. The contents of XML documents must therefore be accessed using a more sophisticated querying mechanism, such as XPath expressions. Moreover, events are unusually large: a single event can contain as much as ten kilobytes of XML data. Second, in order to detect this pattern of events, one must correlate the x-y position of two distinct Pingus (a Walker and a Blocker), and then make sure that the distance between the two increases over the next couple of events (indicating a turnaround).[One cannot simply look for a change of sign in velocity, as the turnaround may lag the "collision" by a few cycles of the game loop.] These computations go beyond the basic slicing and lifecycle queries studied in the previous examples.

Furthermore, various kinds of analyses can also be conducted on the execution of the game. For example, one may be interested in watching the realtime number of Pingus possessing a particular skill, leading to a query such as: Determine the realtime proportion of all active Pingus that are Blockers. Such a query involves, for each event, the counting of all Pingus with a given skill with respect to the total number of Pingus contained in the event. Going even further, one may also divide the playing field into square cells of a given number of pixels, and count the Pingus that lie in each cell at any given moment, producing a form of "heat map": Produce a heat map of the location of Pingus across the game field; update this map every three seconds. This last query outputs a stream of events of an unusual type, namely two-dimensional arrays of numerical values. Such arrays could then be passed to a plotting program that could display a graph in real time.

§.§ Other Use Cases

It is probably clear at this point that a large number of diverse problems can be re-framed as a form of computation over event streams of various kinds. Moreover, the last few examples have shown queries and event types that stretch what is generally meant by CEP in both research and practice.

There exist many other use cases of event stream processing, which we mention only in passing. Microsoft's StreamInsight tutorial <cit.> considers toll booths along a road sending out events whenever a car passes through the booth. Research on the Twitter platform has led to the development of TweeQL, a streaming SQL-like interface to the Twitter API, making common tweet processing tasks simpler <cit.>. Event streams have also been used to detect intrusions in a network infrastructure <cit.>, identify non-compliant behaviour of aircraft within a regulated airspace <cit.>, and monitor an electrical grid <cit.>.

The Runtime Verification community has defined a number of use cases with intricate sequential patterns over events produced by a running system. In addition to the online auction described above, past works have considered: the correct interleaving of method calls on a Java object according to its API <cit.>; the analysis of commands and responses sent by a spacecraft under test for the detection of bugs <cit.>; the analysis of real-world web service XML payloads <cit.>; the detection of fraudulent activity in an event log <cit.>; the analysis of system calls on traces of assembly instructions <cit.>.
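Before surveying existing systems, it is worth fixing intuitions with a small amount of code. The following sketch (ours, and not tied to any particular CEP engine) computes the hopping-window query of the stock ticker scenario above: the average closing price of msft over windows of five events, advancing by five events at a time. The Tick record and its field names are illustrative assumptions.

import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Illustrative sketch of a hopping-window aggregation: every 5
// msft events, emit the average of their closing prices.
public class HoppingAverage {
  record Tick(String symbol, double closing) {}

  private final List<Double> window = new ArrayList<>();
  private final int width = 5;
  private final Consumer<Double> downstream;

  HoppingAverage(Consumer<Double> downstream) {
    this.downstream = downstream;
  }

  // Called once per input event, in stream order.
  void push(Tick t) {
    if (!"msft".equals(t.symbol())) return;  // filter other symbols
    window.add(t.closing());
    if (window.size() == width) {            // window full: aggregate
      double sum = 0;
      for (double c : window) sum += c;
      downstream.accept(sum / width);
      window.clear();                        // hop: windows never overlap
    }
  }

  public static void main(String[] args) {
    HoppingAverage q = new HoppingAverage(avg ->
        System.out.println("5-day average: " + avg));
    // Toy input; in practice, events would come from a live feed.
    String[] syms = { "msft", "appl", "msft", "msft", "msft", "msft" };
    double[] prices = { 50, 120, 51, 52, 50, 49 };
    for (int i = 0; i < syms.length; i++)
      q.push(new Tick(syms[i], prices[i]));
  }
}

A sliding version of the same query would evict only the oldest element instead of clearing the window; this one-line difference is precisely the kind of variability that the systems surveyed next try to capture declaratively.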
§ STATE OF THE ART IN EVENT STREAM PROCESSING

We shall now provide an overview of the available solutions for event stream processing. Recent and extended surveys of CEP engines already exist <cit.>; the goal of this paper is not to replicate such efforts. The main distinguishing point of this review is that it divides these solutions into two families: first, tools and research projects that have been developed as Complex Event Processing systems, and recognized as such; second, tools that have been developed by the Runtime Verification (RV) community, which present a significant overlap with event stream processing, but have been consistently overlooked by traditional reviews on CEP.

§.§ Tools for Event Stream Processing

A large number of CEP and related engines have been developed over the past decade. We describe some of them by emphasizing their distinguishing features.

§.§.§ Aurora and Borealis

One of the earliest systems is Aurora <cit.>. It defines eight primitive operations on streams of tuples. The window operator acts over sets of consecutive tuples, and applies a user-defined function to each window; four types of windows are supported (sliding, latch, tumble and resample, which interpolates new tuples between the original tuples of an input stream). Four other operators act on a single tuple at a time: the filter operator screens tuples in a stream for those that satisfy some predicate; map applies an input function to every tuple in a stream; group by partitions tuples across multiple streams into new streams whose tuples contain the same values over some input set of attributes; join pairs tuples from input streams whose difference in timestamps falls within some given interval. These primitive functions can be composed through a graphical user interface, where a user can create and connect boxes.

Aurora can perform a run-time optimization of a graph of boxes. This is done by identifying a subset of the graph, holding all input messages at upstream connection points, and draining the subgraph of events through all downstream connection points. Then, a few optimization strategies can be attempted. One of them is to analyze the attributes of the tuples considered in that part of the graph, and to insert a projection operation to remove from the input tuples all attributes that are not necessary. In addition, pairs of boxes can sometimes be merged into a single box for more efficiency, such as when two filtering operations are applied in sequence. In the same way as filters and projections are preferably applied first in a relational database query, the same shuffling of operations can also be attempted in an Aurora query in order to reduce the number or the size of tuples that need to be processed downstream. A global scheduler decides on what box is allowed to perform an execution step at each point in time. Boxes may be allocated to multiple threads. The decisions of the scheduler are informed by a Quality of Service (QoS) monitor, which can trigger load shedding strategies (such as dropping tuples) when QoS for a specific query degrades past a user-defined level.

Aurora was followed by a multi-processor version called Borealis <cit.>. In Borealis, boxes are provided with special control lines in addition to their standard data input lines; these lines can carry information (such as new sets of parameters) that may change the box's behaviour during the execution of the query.
Moreover, in Borealis an event stream can contain deletion messages, indicating that a tuple previously inserted in the stream is to be removed, and replacement messages that revise values for tuples already inserted in the stream. Hence, streams are no longer monotonic, and query results can be updated following a corresponding update of the input stream. This is done by sending out one or more revision messages, computed from the input revision messages received. This feature is unique among all the systems considered in this review.

References on Aurora and Borealis do not explicitly describe a query language, other than the boxes-and-arrows model described above. It is reported, however, that these projects led to the creation of SQLstream, an extension of the SQL language for the manipulation of tuple event streams. SQLstream can query relational databases using regular SQL syntax; to specify a query on a stream, one must use a keyword called STREAM immediately after SELECT. The OVER construct can be used to define windows and apply the standard SQL aggregation functions over that window. For example:

SELECT STREAM o.orderid, SUM(t.amount)
FROM OrderStream OVER (RANGE CURRENT ROW) AS o
JOIN TradeStream OVER (RANGE INTERVAL '1' HOUR PRECEDING) AS t
ON o.orderid = t.tradeid
GROUP BY FLOOR(OrderStream.ROWTIME TO HOUR), o.orderid
HAVING o.amount <> SUM(t.amount);

SQLstream also provides a special object called a pump. A pump provides a continuously running stream query functionality, thereby enabling the results of a query to be continuously entered into another stream. In other words, a pump pulls data from a stream, and pushes a transformed version into another stream. SQLstream is supported by a commercial product called SQLstream Blaze, which is part of the Amazon Kinesis platform.

§.§.§ TelegraphCQ

Another early system is TelegraphCQ <cit.>. It originates from the Telegraph project, which began almost twenty years ago with the goal of developing an Adaptive Dataflow Architecture for supporting a wide variety of data-intensive, networked applications. It consists of an extensible set of composable dataflow modules or operators that produce and consume records in a manner analogous to the operators used in traditional database query engines, or the modules used in composable network routers. Query processing is performed by routing tuples through query modules. These modules are pipelined versions of standard relational database operators such as joins, selections, projections, grouping and aggregation, and duplicate elimination. Eddies are modules that adaptively decide how to route data to other query operators on a tuple-by-tuple basis. Each Eddy is responsible for the processing of tuples by a set of commutative query modules. Based on the current state of the system, an Eddy may dynamically decide on the order in which tuples are handled by each of the query modules. When one of the modules processes a tuple, it can generate other tuples and send them back to the Eddy for further routing. A tuple is sent to the Eddy's output if all the modules connected to the Eddy have successfully handled it.

The glue that binds the various modules together to form a query plan is an inter-module communications API called Fjords. It allows pairs of modules to be connected by various types of queues. For example, a pull-queue is implemented using a blocking dequeue on the consumer side and a blocking enqueue on the producer side. A push-queue is implemented using non-blocking enqueue and dequeue; control is returned to the consumer when the queue is empty.
This allows the system to efficiently deal with slow or unresponsive data sources, which would otherwise suspend the execution of the system. Dataflows are initiated by clients either via an ad hoc query language (a basic version of SQL) or by an equivalent scripting language for creating dataflow graphs.

This language supports much more general windows than the landmark and sliding windows described above. This is done using a for-loop construct to declare the sequence of windows over which the user desires the answers to the query: a variable t moves over the timeline as the for-loop iterates, and the left and right ends (inclusive) of each window in the sequence, as well as the stopping condition for the query, can be defined with respect to this variable t. This for-loop can be used, for instance, to express the landmark query mentioned in the previous section in TelegraphCQ's query language. The authors of TelegraphCQ explicitly state that such a for-loop “is intended as a powerful, low-level mechanism rather than a user-level query language construct” <cit.>. Unfortunately, a user-level equivalent of this loop is not discussed.

TelegraphCQ was implemented in C/C++, by reusing a good amount of code from the existing PostgreSQL relational database engine and adapting it for continuous queries.

§.§.§ SASE

This system <cit.> was introduced as a solution to meet the needs of a range of RFID-enabled monitoring applications. In contrast with the window and join queries that were the focus of Aurora, Borealis and TelegraphCQ, SASE rather deals with pattern queries, which describe a sequence of events that occur in temporal order and are further correlated based on their attribute values. In a pattern query, the PATTERN clause describes the pattern of events to be observed; the WHERE clause further expresses conditions on the events' attributes for the pattern to be considered. Since the events relevant to the pattern are not necessarily in contiguous positions in the input stream, this clause can also specify an event selection strategy. For example, the “skip till next match” strategy specifies that in the pattern matching process, irrelevant events are skipped until an event matching the next pattern component is encountered. If multiple events in the stream can match the next pattern component, only the first of them is considered. Finally, the WITHIN clause restricts the pattern to a time period, while the RETURN clause selects the events to be included in the pattern match.

SASE deals with the particular problem of uncertain timestamps affixed to incoming events. An event in SASE's model has the following format: (type, id, [lower, upper], attributes), where type specifies the attributes allowed in the events of this type and id is the unique event identifier. For example, a_1=(A, 1, [5, 9], (v_1, v_2, v_3)) represents an event of type A, with an id 1, an uncertainty interval from time 5 to time 9, and three required attributes named v_1, v_2 and v_3. The fact that timestamps are only known within some precision bounds obviously complicates the process of pattern matching. At every point t, the system collects each event e from the input whose uncertainty interval spans t, and injects to a new stream a point event that replaces e's uncertainty interval with a fixed timestamp t. This is possible under the hypothesis that if e_1 arrives before e_2, then with respect to the occurrence time, e_1 either completely precedes e_2 or overlaps with e_2.
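To illustrate this conversion step, here is a minimal Java sketch of the process; the event classes and their names are our own assumptions for illustration, and do not correspond to SASE's actual API:

[language=java]
import java.util.ArrayList;
import java.util.List;

// Hypothetical event carrying an uncertainty interval [lower, upper]
class IntervalEvent {
  final String type; final int id;
  final long lower, upper; final Object[] attributes;
  IntervalEvent(String type, int id, long lower, long upper, Object[] attrs) {
    this.type = type; this.id = id;
    this.lower = lower; this.upper = upper; this.attributes = attrs;
  }
}

// The same event, pinned to a fixed timestamp t
class PointEvent {
  final String type; final int id;
  final long t; final Object[] attributes;
  PointEvent(IntervalEvent e, long t) {
    this.type = e.type; this.id = e.id;
    this.t = t; this.attributes = e.attributes;
  }
}

class PointStreamConverter {
  // At time point t, collect each input event whose uncertainty
  // interval spans t, and inject a corresponding point event
  // into a new stream
  static List<PointEvent> pointEventsAt(List<IntervalEvent> input, long t) {
    List<PointEvent> output = new ArrayList<>();
    for (IntervalEvent e : input) {
      if (e.lower <= t && t <= e.upper) {
        output.add(new PointEvent(e, t));
      }
    }
    return output;
  }
}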
Unfortunately, the version of the SASE system available at the time of this writing does not handle these imprecise timestamps. It does support the processing of pattern queries.

§.§.§ Cayuga

Cayuga is a complex event monitoring system for high speed data streams <cit.>. It provides a simple query language for composing stateful queries, with a scalable query processing engine based on non-deterministic finite state automata with buffers. Each event stream has a fixed relational schema, and events in the stream are treated as relational tuples. Each event has two timestamps, a start time and a detection time, modeling the fact that events can have a non-zero but finite duration.

A Cayuga query has three parts: the SELECT clause chooses the attributes to include in the output events, the FROM clause describes a pattern of events that have to be matched, and the PUBLISH clause gives a name to the resulting output stream. For example, one can write a Cayuga query that creates an output event whenever there are at least ten input events whose summary attribute contains the word “iPod” within the same 24-hour interval. Such a query itself involves two sub-queries. The FILTER{θ} operator selects events from the input stream that satisfy the predicate θ. The FOLD operator looks for patterns comprising two or more events; it defines the condition for the iteration, a stopping condition for iteration, and a mapping between iteration steps. In this case, every event matching the FILTER condition will increment a variable called cnt, until this value reaches 10.

Each query is internally converted into a non-deterministic finite state automaton with buffers. Each vertex of the automaton is associated with a specific schema; a transition between two states P and Q is labelled by a triple ⟨ S, θ, f⟩, where S identifies an input stream, θ is a predicate over the joint schemas of P and S, and f is a function mapping these schemas to the schema of Q. The details of the transformation are given in <cit.>.

§.§.§ Siddhi

Siddhi is the query engine used in the WSO2 Complex Event Processor <cit.>, an open source engine for real-time analytics of events. It supports classical event detection patterns like filters, windows, joins and event patterns, and more advanced features like event partitions, mapping database values to events, etc.

Siddhi represents events using a tuple data structure. Its architecture consists of processors connected through event queues. Incoming events are placed in event queues, and processors listening to those event queues process those events and place any matching events into output queues of that processor, which will then be processed by other processors or sent to end users as event notifications. As we shall see later, our proposed system follows a similar high-level design.

It differs, however, in how processors execute their computations. Each processor is composed of several executors that express the query conditions; each executor processes the incoming events and produces a Boolean output indicating whether the event has matched. Non-matching events are discarded, and matching events are processed by logical executors downstream. Communication between processors is done through a “publish-subscribe” mechanism, with downstream processors registering to receive events produced from upstream processors.

When a processor is connected to multiple input streams, Siddhi employs an original model to handle the incoming events. It uses a single input event queue and multiplexes all the events together.
This is done in order to reduce the complexity of checking all input queues and keeping track of which events are yet to be processed. Each event is affixed with the ID of the stream it belongs to, making it possible for the processor to make sense of all mixed events and process them correctly.

In terms of query capabilities, Siddhi supports the computation of typical aggregation functions (e.g. sum, average) over windows. Moreover, it can also express sequential patterns of events, similar to SASE's, but using a different syntax. For example, a pattern can relate two events e1 and e2, such that e2 must follow e1 and the accountNumber attribute of both must be identical. When such a pattern occurs, the query produces an output event containing the symbol and account number identifying this pattern.

Contrary to many CEP engines, Siddhi tries to bring in stream processing aspects like multi-threading and pipelining, although these aspects do not seem to be documented in research papers.

§.§.§ Esper

Esper is probably the most complete and versatile of the CEP engines included in this review. First, Esper's events may contain rich and nested domain-specific information. In particular, an event's property may itself be composed of other events; Esper uses the term fragment for such event pieces. Each portion of a query is also associated with a context; a context takes a cloud of events and classifies them into one or more sets, called context partitions. An event processing operation that is associated with a context operates on each of these context partitions independently.

Esper's query language, called EPL, is an extension of SQL that supports windows and patterns over streams. A pattern may appear anywhere in the from clause of an EPL statement, including joins and subqueries. There are four types of pattern operators:

* Operators that control pattern sub-expression repetition: every, every-distinct, [num] and until
* Boolean connectives
* A single “followed-by” temporal operator that operates on event order
* Where-conditions that control the lifecycle of sub-expressions (such as timer:within)

For example, one query taken from Esper's documentation selects a total price per customer over pairs of events (a ServiceOrder followed by a ProductOrder event for the same customer id within one minute), occurring in the last two hours, in which the sum of price is greater than 100, and uses a where clause to filter on the customer's name. The commercial product Oracle CEP uses Esper as its internal query engine.

§.§.§ The Apache Ecosystem

We now move our focus to distributed event processing frameworks. A first observation that can be made from these systems is that they generally focus on the routing and load balancing of event streams across a multi-machine infrastructure. In counterpart, we shall see that they offer far fewer functionalities for the actual processing of these streams, which is often left to the user as procedural (i.e. Java or Python) code. Due to their distributed nature, they also involve a much more complex setup than the solutions detailed so far.

The Apache Foundation hosts several (sometimes competing) projects related to the processing of events. Apache Samza is a distributed stream processing framework <cit.>. It provides a very simple callback-based “process message” API comparable to MapReduce. As such, it is more an environment in which jobs can be deployed, coordinated and dispatched across multiple machines, than a system providing facilities to actually perform these computations.
Each separate “job” in Samza still has to be written in low-level code, such as Java or Scala. It is reasonable to think, however, that many other CEP engines mentioned above could operate within a Samza infrastructure at the job level, making these two kinds of systems complementary.

Closer to our topic is Apache S4, a platform for massive scale processing of data streams using the actor model <cit.>. It is, however, unable to express queries that span multiple events, which hardly qualifies it as a CEP engine.

Apache Spark <cit.> is yet another distributed batch processing platform, similar to Hadoop: its core provides memory management and fault recovery, scheduling, distributing and monitoring jobs on a cluster, and functionalities for interacting with storage systems. In 2014, it earned the title of the fastest open source engine for sorting a petabyte of data in the 100 TB Daytona GraySort contest. The main data structure in Spark is a distributed set of objects called the Resilient Distributed Dataset (RDD). No specific type is imposed on the contents of an RDD. Operations on RDDs include transformations, which take an RDD as their input and produce another RDD as their output; examples of transformations include map, join, filter, etc. The second type of operation is actions, which run a computation on an RDD and return a value; examples of actions include counting and aggregation functions. Transformations in Spark are said to be “lazy”, in the sense that the input data for a transformation or an action is not computed until the output of that transformation is requested.

Spark provides a few relatively low-level constructs for processing RDDs; similarly to S4 and Samza, it focuses on the distributed dispatching of jobs. It can be completed with extensions that provide more elaborate facilities for expressing computations on events. One of them is SparkSQL, which allows querying RDDs as if they were relational tables, using SQL. Of interest to this paper is Spark Streaming, an API that allows Spark to handle streams of data. For example, here is a Scala code example that computes a sliding window average over a stream:

[language=scala]
val inputStream = ssc.socketStream(...)
val windowStream1 = inputStream.window(Seconds(4))
val w = Window.partitionBy("id").orderBy("cykle").rowsBetween(-2, 2)
val x = windowStream1.select($"id", $"cykle", avg($"value").over(w))

While Spark provides out-of-the-box functionalities for computing windows, aggregate functions, filters and map-reduce jobs, it seems to lack similar constructs for handling sequential patterns, such as those considered by SASE, Siddhi and Esper.

Storm <cit.> is another distributed processing platform supported by the Apache Foundation. In Storm, events are immutable sets of key-value pairs called tuples, while a stream is a potentially infinite sequence of tuples. Each stream is given an ID (manually set by the user), and is associated to a fixed schema for the tuples it contains. “Bolts” are units that take input streams and produce output streams, with initial tuple sources being called “spouts”. Distributed computation is achieved by configuring Storm so that multiple instances of the same bolt can be run, each on a different fragment of an input stream. This splitting and merging of streams is configured manually by the user, although libraries like Trident can simplify their management <cit.>. Trident also provides higher-level objects, such as functions.
Functions have a special semantics, similar to a form of tuple join: the fields of the output tuple they produce are appended to the original input tuple in the stream. If the function emits no tuples, the original input tuple is filtered out. Otherwise, the input tuple is duplicated for each output tuple.

[language=java]
class HashTagNormalizer extends BaseFunction {
  public void execute(TridentTuple tuple, TridentCollector col) {
    String s = tuple.getStringByField("foo");
    s = s.trim();
    col.emit(new Values(s));
  }
}

Additional Trident constructs include filters, which take in a tuple as input and decide whether or not to keep that tuple; map, which returns a stream consisting of the result of applying the given mapping function to the tuples of the stream; min/max, which return the minimum (resp. maximum) value on each partition of a batch of tuples; and a number of classical windowing and aggregation functions. All these infrastructures provide a relatively low-level API for manipulating events; besides, apart from SparkSQL (which only works for relational queries on static RDDs), none of these systems provides an actual query language that would abstract implementation concerns.

§.§.§ Other Systems

Due to space considerations, several other systems have to be left out of this presentation, such as Cordies <cit.>, Flink <cit.>, LogCEP <cit.>, and SPA <cit.>. They all provide functionalities similar in nature to those of one of the tools described above. Other early works on stream databases include <cit.>. Also outside of this review are systems peripheral to the actual processing of event streams, such as “event brokers” like Apache Kafka, Flume, Twitter, ZeroMQ, etc. There also exist dozens of commercial tools claiming CEP features with widely varying levels of detail, and for which it is hard to provide information without detailed documentation and a freely available implementation. We only mention in passing Amazon Kinesis, StreamBase SQL <cit.>, StreamInsight <cit.>, and SAS Event Stream Processing Studio. For the sake of completeness, we finally mention log analysis systems that provide very simple, Grep-like filtering capabilities, such as EventLog Analyzer[<www.manageengine.com/EventLogAnalyzer>] and Lumberjack[<https://fedorahosted.org/lumberjack/>].

§.§ Tools for Runtime Verification

Perhaps lesser known to mainstream CEP users is the existence of another field of research, called Runtime Verification (RV). In RV, a monitor is given a formal specification of some desirable property that a stream of events should fulfill. The monitor is then fed events, either directly from the execution of some instrumented system or by reading a pre-recorded file, and is responsible for providing a verdict, as to whether the trace satisfies or violates the property.

Classical RV problems are centered around properties that deal with the way events are ordered. For example, the canonical “HasNext” property stipulates that, whenever an Iterator object is used in a program, any call to its next() method must be preceded by a call to hasNext() that returns the value true. Consequently, the languages used by monitors to specify properties all have a strong temporal or sequential component: this includes finite-state automata, temporal logic, μ-calculus, and multiple variations thereof.

There are clear ties between CEP and RV, which have been revealed in a recent paper <cit.>. Both fields consider sequences of events, which must be processed to provide some output. In both cases, the analysis is generally done in a streaming fashion.
Despite these similarities, contributions from one community have been largely overlooked by the other, and vice versa. In the following, we describe a few popular RV systems developed over the years, and present them in the light of their event processing capabilities.

§.§.§ LOLA

LOLA is a specification language and a set of algorithms for the online and offline monitoring of synchronous systems, including circuits and embedded systems <cit.>. It resembles synchronous programming languages such as LUSTRE <cit.>, but provides additional functionalities for expressing properties about the future.

A LOLA specification is a set of equations over typed stream variables. Figure <ref> shows an example of a LOLA specification, summarizing most of the language's features. It defines ten streams, based on three independent variables t_1, t_2 and t_3. A stream expression may involve the value of a previously defined stream. The values of the streams corresponding to s_3 to s_6 are obtained by evaluating their defining expressions place-wise at each position. The expression “ite” represents an if-then-else construct: the value returned depends on whether the first operand evaluates to true. The stream corresponding to s_7 is obtained by taking at each position i the value of the stream corresponding to t_1 at position i+1, except at the last position, which assumes the default value false.

The specification can also declare certain output Boolean variables as triggers. Triggers generate notifications at instants when their corresponding values become true. Hence, the property “the number of events where b holds never exceeds the number of events where a holds” can be written in LOLA as:

s = s[-1, 0] + ite(a ∧ ¬b, 1, 0) + ite(b ∧ ¬a, -1, 0)
trigger(s ≤ 0)

A LOLA specification is said to be efficiently monitorable if its worst-case memory requirement is constant in the size of the trace. The introductory paper on LOLA shows how these bounds can be computed. Many features of CEP can be accommodated by this notation, such as windows and simple aggregations. However, basic LOLA has only primitive support for event data manipulation.

§.§.§ MOP

The Monitoring Oriented Programming (MOP) project <cit.> is a programming infrastructure where monitors are automatically synthesized from properties and integrated into the original system to check its behaviour at runtime. JavaMOP is an instance of MOP targeted at Java programs <cit.>; it relies on concepts of Aspect-Oriented Programming (AOP) <cit.> to fetch relevant events from the execution of a system. In JavaMOP, an event corresponds to a pointcut, such as a function call, a function return, the creation of an object, the assignment of a field, etc. The behaviour of the system is expressed as properties over these events. JavaMOP supports several formalisms for expressing these properties: ERE (extended regular expressions), FSM (finite state machines), CFG (context free grammars), PTLTL (past time linear temporal logic), FTLTL (future time linear temporal logic), and ptCaRet (past time linear temporal logic with calls and returns).
HasNext(Iterator i) {
  event hasnexttrue after(Iterator i) returning(boolean b) :
    call(* Iterator.hasNext()) && target(i) && condition(b) {}
  event hasnextfalse after(Iterator i) returning(boolean b) :
    call(* Iterator.hasNext()) && target(i) && condition(!b) {}
  event next before(Iterator i) :
    call(* Iterator.next()) && target(i) {}
  ltl: [](next => (*) hasnexttrue)
  @violation { System.out.println("ltl violated!"); }
}

Figure <ref> shows an example of a JavaMOP specification using Linear Temporal Logic. Three atomic events (hasnexttrue, hasnextfalse and next) are created from pointcuts corresponding to method calls on objects of type Iterator. An LTL specification then stipulates that every next event must be preceded by hasnexttrue. The @violation section can contain arbitrary Java code that is to be executed when the specification becomes violated. The whole specification is enclosed in a declaration that is parameterized by i; this is an example of a technique called parametric slicing. Concretely, one monitor instance will be created for each iterator manipulated by the program; JavaMOP takes care of dispatching the relevant events to the appropriate monitor instances.

Given a specification in any of the supported languages, JavaMOP transforms it into optimized AspectJ code for a monitor, which is program-independent. This AspectJ code can then be weaved with any Java program; the end result is that the monitor will check that the program conforms with the specification at runtime.

§.§.§ LARVA

LARVA is another runtime monitoring architecture specifically aimed at the verification of Java programs <cit.>. LARVA uses as its main specification language a dynamic form of communicating automata with timers and events, called DATEs. In this context, events can be visible system actions (such as method calls or exception handling), timer events, channel synchronization (through which different automata may synchronize) or a combination of these elements.

Figure <ref> shows an example of a DATE, for a property that monitors bad logins occurring in a system. Each transition is guarded by conditions on the input event (such as the event's name); optionally, a transition may also update internal variables specific to each automaton instance, such as counters. Of interest in DATEs is the possibility to define timeout events; a timeout guard on the leftmost transition indicates that this transition is to be taken automatically if no other transition has fired in the past 30 minutes. Although timeouts and clocks have been used in model checking software such as UPPAAL <cit.>, LARVA is one of the only runtime (i.e. streaming) tools supporting them.

DATEs are actually input into the LARVA system using a textual representation of the automaton. Such a representation allows a DATE to be nested within a foreach construct, which allows multiple instances of an automaton to be created for each value of the specified parameter encountered during the execution of the program. Recently, a tool called LarvaStat has also extended LARVA with the possibility of computing statistics on the execution of the program <cit.>. These statistics are exposed as “statistical events”, and properties can be expressed in terms of these statistics.

§.§.§ MarQ

MarQ <cit.> is a runtime monitoring tool that deals with parametric properties, in which events may carry data. A parametric event is a pair of an event name and a list of data values (such as shown in Figure <ref>), and a parametric trace is a finite sequence of parametric events.
A parametric property denotes a set of parametric traces, in the same way that a regular expression describes a set of symbol sequences.

Quantified event automata (QEA) are a notation for describing parametric properties. Figure <ref> shows an example of such an automaton, corresponding to the property that a user must withdraw less than $10,000 in a 28-day period <cit.>. It highlights the various features of QEAs. First, an automaton can be parameterized by universal and existential quantifiers. These quantifiers will create as many slices from the original trace as there are possible variable bindings encountered along the stream. Each QEA instance also has internal variables; guards on transitions of the automaton can refer to the values of the current event, and also to the current values of these internal variables. Moreover, these variables can be updated when a transition is taken.

One advantage of QEAs is the fact that they admit an efficient monitoring algorithm via an approach called parametric trace slicing <cit.>. In the Runtime Verification community, MarQ has consistently fared among the fastest runtime monitors available, such as at the latest Competition on Runtime Verification <cit.>.

§.§.§ LogFire

The LogFire system was developed for verifying execution traces against formal specifications. Instead of automata, like in LARVA and MarQ, it uses a different formalism for specifying the expected behaviour of a system, based on the concept of rules <cit.>. In this respect, it shares similarities with popular rule engines such as Drools[<http://www.jboss.org/drools>] and Jess[<http://herzberg.ca.sandia.gov>]. Basic events follow the same structure as in MarQ; these events, along with additional facts, can be written to a dynamic structure called a fact memory. LogFire implements a domain-specific language (DSL) based on the Scala programming language, to allow the expression of rules that correlate these events and facts. Each rule has the form:

name -- condition_1 ∧ … ∧ condition_n |-> action

A rule is defined by a name, a left hand side consisting of a conjunction of conditions, and a right hand side consisting of an action to be executed if all the conditions match the fact memory. An action can be adding facts, deleting facts, or generally any Scala code to be executed when a match for the left-hand side is found. For example, Figure <ref> shows a simple Scala block of code for a Monitor object that checks the property: a resource can only be granted to one task (once) at any point in time, and must eventually be released by that task. Rule r1, for example, is fired when the current event's name is “grant” with parameters t and r, and there is no fact in the memory called Granted with the same parameters t and r. If such is the case, the rule fires, and its action consists of adding a new fact Granted(t, r) to the fact memory.

[language=scala]
class ResourceProperties extends Monitor {
  val grant, release, end = event
  val Granted = fact

  "r1" -- grant('t, 'r) & not(Granted('t, 'r)) |-> Granted('t, 'r)
  "r2" -- Granted('t, 'r) & release('t, 'r) |-> remove(Granted)
  "r3" -- Granted('t, 'r) & grant('_, 'r) |-> fail("double grant")
  "r4" -- Granted('t, 'r) & end() |-> fail("missing release")
  "r5" -- release('t, 'r) & not(Granted('t, 'r)) |-> fail("bad release")
}

The readability of the rules is enhanced by a “trick”: each rule definition in the monitor is actually an implicit chain of method calls which gets executed when the class is first instantiated.
To this end, the Monitor class declares implicit functions; these functions are applied by the Scala compiler in cases where type checking of an instruction fails, but where it succeeds if one such (unique) implicit function can be applied. One implicit function takes a string as an argument and returns an object of a class that, in turn, defines a function called --, which takes a condition and returns another object, this time defining a method called |->, and so on. Hence, once implicit functions are inserted by the compiler, each rule actually becomes a plain Scala statement that instantiates objects and calls their methods.

To determine what rules may fire upon an incoming event, LogFire implements a pattern matching algorithm called Rete <cit.>. The DSL allows domain specific constructs to be mixed with Scala code, making the notation very expressive and convenient for practical purposes. When an error is detected, the system produces an error trace illustrating what events caused what rules to fire, allowing the user to understand the cause of the violation.

Another system, T-REX, uses a rule-based language called Tesla <cit.>. Instead of a Rete-based algorithm, Tesla rules are evaluated through a conversion into finite-state automata.

§.§.§ MonPoly

MonPoly is another tool for the evaluation of logical properties over event logs <cit.>. Its specification language is called Metric First-Order Temporal Logic (MFOTL), and is an extension of Linear Temporal Logic with predicates and first-order quantifiers. In MonPoly, each event is viewed as a mini “database” that can be queried by means of predicates. For example, an expression like withdraw(u; a) is true if the current event represents a withdrawal made by user u for amount a. In addition to Boolean connectives, basic assertions can also include temporal operators. The “globally” modality, noted □φ, signifies that φ must hold for every suffix of the trace starting at the current point. The “eventually” modality, noted ◊φ, stipulates that φ must hold for some suffix of the trace starting at the current point. These two modalities also have their past equivalents, represented by black symbols. Temporal operators can also be parameterized by an interval; for example, ◊_[a,b]φ says that φ must hold at some point between a and b time units from the current moment.

Special care has been taken in MFOTL to handle aggregate functions over multiple events. An expression of the form [ω_t z.ψ](y; g) is called an aggregation formula. Here, g is a list of attributes on which grouping is performed, t is the term on which the aggregation operator ω is applied, and y is the attribute that stores the result. Supported aggregation operators include sum, average, minimum and maximum. Finally, MFOTL also supports first-order quantifiers ∀ and ∃, which are interpreted in the standard way. This makes it possible to express rich properties involving both sequential patterns and aggregation functions.
For example, the following MFOTL property checks that for each user, the number of withdrawal peaks in the last 31 days does not exceed a threshold of five, where a withdrawal peak is a value at least twice the average over the last 31 days:

∀ u: ∀ c: [CNT_j v; p; κ. [AVG_a a; τ. ⧫_[0;31) withdraw(u; a) ∧ ts(τ)](v; u) ∧ ⧫_[0;31) withdraw(u; p) ∧ ts(κ) ∧ 2 · v ≺ p](c; u) → c ≼ 5

Experimental evaluation of an implementation of MonPoly revealed that MFOTL queries are easier to maintain than their equivalent (and significantly longer) MySQL queries, and that the runtime performance is in the same order of magnitude as the STREAM system.

§.§.§ Other systems

Other runtime monitors and log analysis tools developed in the past include J-Lo <cit.>, Mufin <cit.>, PoET <cit.>, PQL <cit.>, PTQL <cit.>, RuleR <cit.>, SEQ.OPEN <cit.>, SpoX <cit.>, and Tracematches <cit.>. Their specification languages can be related to one of the aforementioned systems.

§ DESIDERATA FOR A STREAM QUERY ENGINE

The previous section has given a broad picture of the event processing landscape. We now make a few observations on the relative strengths and weaknesses of these solutions. Many of them will become design goals warranting the development of a new and (hopefully) complementary event processing system.

In the realm of relational databases, desirable properties of a potential query language have been collated into a document called the Third Manifesto (3M) <cit.>. In the following, we list a number of observations and choices that should be taken into account when designing a query language for ESP. These design choices will be reflected in the implementation of our event query engine, BeepBeep, and its companion query language, eSQL.

§.§ No Unique Event Type

All CEP tools, with the exception of Esper, assume events to be tuples. In relational databases, the 3M (prescriptions 6–10) also enforces this rigid data model. In such a case, every tuple of a trace must have the same fixed set of attributes, and events must differ only in the values they define for each attribute. Moreover, these values must be scalar. A query can transform an input trace into a different output, but the resulting trace will still be made of tuples, with possibly different attributes. RV tools have slightly more diverse events. At one extreme, JavaMOP events are atomic symbols, but at the other, MonPoly events are mini-databases that can be queried with predicates. Most other tools lie in between, and assume an event structure that can be mapped to lines of a CSV file (i.e. a form of tuple).

Yet, we have seen in Section <ref> how the tuple datatype is not appropriate for all possible queries. This is especially true of the use case of Section <ref>, where events produced by the running system have a nested data structure in which the same element names can occur multiple times. This issue has been raised in the Competition on Runtime Verification <cit.>: to translate these events into flat tuples, the organizers had to introduce an event per character object, with the other metadata being copied between these new events. They report that flattening the events in such a way led to more complex specifications that needed to deal with the arbitrary ordering of events that should be observed at the same point. Query <ref> is even further away from the tuple model. A truly generic event processing system should not presuppose that any single type of events is appropriate for all problems.
Rather, each type of event should come with its own set of event manipulation functions (EMF) to extract data from, manipulate, and create new events of that type. These functions should be distinct from stream manipulation functions (SMF), which, in contrast, should make very limited assumptions on the traces they manipulate. This clear separation of EMF and SMF should make it possible to easily mix events of different types into queries. It should also help avoid the “square peg in a round hole” problem, where one must write an overly complicated expression simply to work around the limitations of the single available event type.

§.§ Modularity and Composition

A similar problem also arises with respect to the specification (or query) language of each tool. First, some tools (such as Apache Storm) have no query language: computations can only be achieved through code. The database foundations of ESP have led many solutions to compute everything through a tentacular SELECT statement, with optional constructs attempting to account for every possible use case.

A modular event processing framework should alleviate this problem by proposing a set of basic processing units that can be freely composed. Therefore, rather than proposing a single, all-encompassing query language, it should accommodate multiple query languages, along with lightweight syntactical “glue” to allow for their composition. This allows every step of the computation to be expressed in the notation most appropriate for it. Moreover, such a framework should provide, at the implementation level, easy means for extending it. This means both allowing the user to define new processing units, and also new ways for the language to accommodate them. In this respect, existing systems do not fare very well. With the exception of MOP, which lets users define new plugins, RV tools have a completely fixed specification language that cannot be extended. CEP languages sometimes allow the user to define new function symbols, but these new symbols can only be invoked in the traditional notation “function(arguments)”.

§.§ Relational Transparency

A recurring problem with RV systems is that their specification language is seen as repulsive by potential end users. In contrast, virtually every CEP system touts an “SQL-like” query language, which has the reassuring effect of tapping into concepts that practitioners already know. Unfortunately, while they indeed borrow keywords from the SQL language, their syntax is almost invariably incompatible with SQL. For example, in the Cayuga language <cit.>, selecting all events where attribute cnt is greater than 10 is written as:

SELECT * FROM FILTER {cnt > 10}(webfeeds)

and in Siddhi as

select * from webfeeds(cnt > 10)

while extracting the same data from a database would be written as the following SQL query:

SELECT * FROM webfeeds WHERE cnt > 10

Even SQLstream's syntax is not compatible, as it distinguishes between querying a stream and querying a table; the former requires the addition of the STREAM keyword. The only exception is Esper, whose basic SELECT statement is identical to SQL's.

When the context allows an event trace to be interpreted as an ordered relation whose events are tuples, then the SQL query computing some result over that relation should be a valid event stream query as well; we call this concept relational transparency. Conversely, this means that standard relational tables should be able to be used as drop-in replacements for event traces anywhere in an expression where tuples are expected.
This statement, in line with 3M's “Very strong suggestion” #9, is in itself a distinguishing point with respect to virtually every other ESP system around.

§.§ Circumscribed Procedural Escapes

All event processing should be made through the combination of relatively high-level, declarative language constructs, without resorting to procedural statements or traditional code. A counter-example would be a TelegraphCQ expression in which part of the processing is done through the use of a C-style for loop. There are many reasons why such an approach is undesirable. Besides being inelegant, it pollutes the declarative style of SQL with procedural statements which arguably should not occur in a query language. This, in turn, makes the semantics of the language very hard to define, and the actual meaning of a query difficult to grasp. There is also an implicit coupling between the value 5 that increments the loop counter and the value 4 subtracted in the window specification.

A similar remark applies to many other query languages. For example, a statement taken from a LINQ tutorial intermingles query keywords with C# code. As a matter of fact, a LINQ query cannot live outside a C# interpreter as a stand-alone statement.

In contrast, we expect an event stream query language to be fully declarative. Syntactically, this entails that no procedural constructs (if-then blocks, loops, variables) should be offered to the user. This point of view is actually stricter than SQL, as most SQL engines extend the language with such constructs. This also contradicts 3M's prescription 5, which requires the presence of if-then blocks. This does not mean that the resulting system should not support user extensions. However, it should support them in a way that whatever procedural code needs to be written can then be accessed through extensions to the query language's syntax, thereby creating a Domain-Specific Language (DSL). While new processing units are made of (potentially Turing-complete) code, users should not have the possibility of writing procedural code inside their queries, thus preserving their declarative nature.

§.§ Increased Expressiveness

In terms of the expressiveness of their respective input languages, RV and CEP systems have complementary strengths. Compared to RV, CEP tools are far less advanced in terms of evaluating sequential patterns of events. In many of their input languages, the only way of correlating an event with past or future events is through a join of the trace with itself —an expensive operation, which can only be done in restricted ways (such as by bounding the window of events that are joined). Intricate sequential relationships, such as those easily expressible with a finite-state machine notation common to many monitoring systems, are very hard to state in existing CEP tools. In a few cases, a language offers the possibility to describe primitive sequencing patterns (using a form of regular expression, or simple “A follows B” instructions). These patterns are very restricted in their use (for example, they do not allow negation) and, as empirical testing will reveal, costly to evaluate. Some systems like Cayuga transform their queries internally into finite-state machines, but their query language does not allow a user to directly specify FSMs. It shall also be noted that most CEP tools disallow queries that necessitate an unbounded number of future events to compute their result.

This is in sharp contrast with RV systems, where the sequential aspect of event traces is central.
Since the specification language of monitors is based on logic, it is also natural to find a form of first-order quantification in many of them. This quantification occurs in problems where some specific pattern must hold “for all elements”. A few CEP systems allow a trace to be split into various slices, but as a rule, no true equivalent of universal and existential quantification is supported.

In counterpart, CEP engines calculate the result of a query on a trace of events, and the output of that query can itself be a sequence of events with data-rich contents, which can be reused as the input of another query. In contrast, a monitor evaluates a property over a trace. Intermediate results of its computation are seldom exposed or expected to be consumable, and its output (most often a single truth value) is not reusable as the input of another monitor for further processing. There do exist monitors whose specification language involves more advanced data computing capabilities (numerical aggregation functions, mostly), but they still compute the answer to what is fundamentally a yes/no question.

As a consequence of the previous observation, it can be noted that CEP problems feature data-rich events, over which complex transformations and computations can be made. Such functionalities are considered standard for a CEP language. Indeed, the SELECT construct provided by most CEP engines makes it possible to produce output tuples made of attributes from multiple input tuples, coming from potentially different input traces, combine them and apply various built-in functions (mostly numerical). In contrast, most monitors do support events with data fields, but only allow basic (again, Boolean) comparisons (=, ≤, etc.) between values of these fields. The handling of aggregation functions and other forms of computation over event data is not a common feature in RV, and only a handful of monitors so far support them <cit.>. Obviously, one should aim for the best of both worlds, with a system allowing the expression of rich data manipulation operations, rich pattern specifications, and more.

§.§ Limiting Boilerplate Code and Configuration

Many of the systems mentioned earlier, and in particular distributed CEP systems, require high amounts of setup and preparation before running even the smallest example. The “Hello World” example for Apache S4 requires setting up a cluster, editing half-a-dozen source and configuration files, and typing about as many arcane commands at the command line; the whole example requires six pages of documentation that one can hardly describe as user-friendly <cit.>. Even when a CEP system is a reasonably stand-alone application, running a query on a simple input stream may still require non-trivial amounts of boilerplate code. Figure <ref> shows an example of this situation. It displays the minimal Java code for reading tuples from a CSV file, running an Esper query that computes the sum of attributes a and b, and printing the output events from that query one by one. Several observations can be made from this excerpt. First, about a dozen statements are required to instantiate all the required objects: a Configuration, an EPServiceProvider, instances of EPAdministrator, EPStatement and EPRuntime, and finally a user-defined UpdateListener to catch the output of the query.
As is the case in other tools such as Siddhi, some of these objects must be passed to others, initialized, started, shut down, reset, etc.[Figure <ref> shows the same code using BeepBeep's JDBC interface.] Second, Esper does not provide a generic “tuple” event; an event type (here, a dedicated class) must be explicitly created for each tuple type —a tuple with different attributes would require yet another class declaration. Moreover, each field of the tuple must have a public getter method, and Esper even imposes the name it should have: the value of a field called a must be accessed through a method called getA(). Besides being cumbersome, this also goes against our first design requirement, as events cannot be arbitrary objects. For example, a trace of numbers cannot use Java's Number class for its type; because of the above conventions, the number would have to be encapsulated in a user-defined class to be processed by Esper. Otherwise, events can be queried as JavaBeans, but this again imposes restrictions on their structure; for example, a primitive type is still not a JavaBean object. As a side note, the system also does not provide means to read events from a source; lines 10–12 and 16 must take care of this manually by reading the lines of the file, and lines 24–26 further break each text line to extract the attributes they contain.

§.§ Balancing Throughput and Other Considerations

Most event stream processing systems emphasize their performance first and foremost. Virtually all commercial-grade event stream engines available contain the words “fast” or “high throughput” on their web sites and in their documentation. Recently, the term “Fast Data” has even emerged as the hyped successor of the “Big Data” trend <cit.>.

There is without question a market for high-load, high-throughput solutions such as those described earlier. However, one of the key motivations of the present work is to put primary focus on the definition of a simple and versatile formal semantics for an event processing system, followed by the design of a readable and fully declarative query language. Performance considerations are relegated to third place; in particular, our system should not gain performance at the price of readability or simplicity, or succumb to premature optimization: “first do it right, then do it fast”.

Case in point, in some of the use cases described in Section <ref>, the challenge is not high event load. A NIALM system generates readings at the same frequency as the power grid, i.e. 60 Hz; the Pingus video game produces one event at each cycle of its game loop, which is approximately 150 Hz. Such a throughput can easily be processed with custom shell scripts. What one gains from using an event stream engine, rather than these scripts, is ease of use, flexibility, and maintainability of the queries —not computation speed. In the same way, an Excel spreadsheet is not preferred by users because it is faster than a pocket calculator on raw arithmetical calculations, but because it eases the expression, maintenance and presentation of these calculations.

This standpoint is a minority voice in the current heavy trend focusing on performance. This, however, is not to be taken as an excuse for the bad performance of our engine. As a matter of fact, our empirical analysis at the end of this paper shows that for some classes of queries, our proposed tool has a performance commensurate with other CEP systems.
§ COMPUTATIONAL FRAMEWORK

The observations made in the previous section motivated the design of BeepBeep 3, a new event stream processing engine that aims to reconcile RV and CEP by supporting functionalities of both. As its name implies, it is the third incarnation of the BeepBeep line of monitoring software. Earlier versions of BeepBeep used a first-order extension of Linear Temporal Logic as their specification language. BeepBeep was designed with the goal of borrowing and improving on concepts found in a variety of other software. It fragments the processing of traces into pipelined computational units that generalize Trident's functions, Aurora's boxes and Siddhi's processors. It supports both a “push” and a “pull” semantics that resembles SQLstream's pumps. Similar to Esper, its events can be objects of arbitrary types. Extensions to BeepBeep's core can handle finite-state machines like MarQ's, and a form of first-order temporal logic akin to MonPoly's. It provides yet another SQL-like query language, but one which maintains backwards compatibility with SQL and can easily be extended by user-defined grammatical constructs.

BeepBeep can be used either as a Java library embedded in another application's source code, or as a stand-alone query interpreter running from the command-line. Versions of BeepBeep 3 are publicly available for download, and its code is released under an open source license.[<https://liflab.github.io/beepbeep-3>] Thanks to the simplicity of its formal foundations, the core of BeepBeep 3 is implemented using slightly less than 10,000 lines of Java code.

In this section, we describe the formal foundations of BeepBeep's computational model. In this model, the evaluation of a query is performed synchronously in discrete steps by computation units called processors.

§.§ Events, Functions and Processors

Let T be an arbitrary set of elements. An event trace of type T is a sequence e = e_0 e_1 … where e_i ∈ T for all i. The set of all traces of type T is denoted T^*. In the following, event types are written in double strike (e.g. T, U, …) and can refer to any set of elements. In line with the observations made previously, BeepBeep makes no assumption whatsoever as to what an event can be. Event types can be as simple as single characters or numbers, or as complex as matrices, XML documents, plots, logical predicates, polynomials or any other user-defined data structure. In terms of implementation, an event can potentially be any descendent of Java's Object class.

A function is an object that takes zero or more events as its input, and produces zero or more events as its output. The arity of a function is the number of input arguments and output values it has. Borrowing terminology from the theory of relations <cit.>, a function accepting one event as input will be called monadic (or unary), while one accepting two events will be called dyadic (or binary), and so on. Functions accepting no input are called medadic, or more simply constants. Since functions have both input and output, they must be qualified according to both —one may hence talk about a dyadic-input, monadic-output function, or more succinctly a 2:1 function. For example, the addition function + : ℝ^2 → ℝ is the 2:1 function that receives two real numbers as its input, and returns their sum as its output.
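As an illustration, here is a minimal Java sketch of how such a 2:1 function could be represented; the class and method names are our own assumptions for illustration, and do not necessarily correspond to BeepBeep's actual API:

[language=java]
// Minimal sketch of the function model: a stateless object mapping
// an array of input events to an array of output events
abstract class SketchFunction {
  abstract int getInputArity();
  abstract int getOutputArity();
  abstract void evaluate(Object[] inputs, Object[] outputs);
}

// The 2:1 addition function over real numbers
class Addition extends SketchFunction {
  int getInputArity() { return 2; }
  int getOutputArity() { return 1; }
  void evaluate(Object[] inputs, Object[] outputs) {
    // Exactly two inputs are expected; a single output is produced
    outputs[0] = ((Number) inputs[0]).doubleValue()
        + ((Number) inputs[1]).doubleValue();
  }
}

Calling evaluate() on the input array {2, 3} would place the single value 5.0 in the output array.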
While functions with an output arity of 2 or more are rare, they do occur in some situations; for example, one can imagine the function f : ℂ → ℝ^2 which, given a complex number, returns both its real and imaginary parts as two distinct outputs. In BeepBeep, functions are first-class objects; they all descend from an abstract ancestor named Function, which declares a method called evaluate() so that outputs can be produced from a given array of inputs.

A processor is an object that takes zero or more event traces, and produces zero or more event traces as its output. The difference between a function and a processor is important. While a function is stateless, and operates on individual events, a processor is a stateful device: for a given input, its output may depend on events received in the past. Processors in BeepBeep all descend from the abstract class Processor, which provides a few common functionalities, such as obtaining a reference to the n-th input or output, getting the type of the n-th input or output, etc. Processors are similar in their nature to other concepts in CEP systems, such as “bolts” in Apache Storm, or to the similarly-named objects in Siddhi.

We shall use a formal notation that defines the output trace(s) of a processor in terms of its input trace(s). Let e_1, …, e_n be n input traces, and φ be a processor. The expression ⟦e_1, …, e_n : φ⟧ will denote the output trace produced by φ, given these input traces. As a simple example, let us consider a processor, noted Dec_n, that outputs every n-th event of its input and discards the others (this process is called decimation). This can be defined as:

⟦e : Dec_n⟧_i ≡ e[ni]

The expression states that the i-th event of the output stream is the (n × i)-th event of the input stream.

Each processor instance is also associated with a context. A context is a persistent and modifiable map that associates names to arbitrary objects. When a processor is duplicated, its context is duplicated as well. If a processor requires the evaluation of a function, the current context of the processor is passed to the function. Hence the function's arguments may contain references to names of context elements, which are replaced with their concrete values before evaluation. Basic processors, such as those described in Section <ref>, do not use context. However, some special processors defined in extensions to BeepBeep's core (the Moore machine and the first-order quantifiers, among others) manipulate their Context object.

For a given input event, a processor can produce any number of output events. For example, one can imagine a stuttering processor ψ_n that repeats each input event n times, defined as follows:

⟦e : ψ_n⟧_i ≡ e[⌊i/n⌋]

§.§ Streaming, Piping and Buffering

A processor produces its output in a streaming fashion. However, a processor can require more than one input event to create an output event, and hence may not always output something. This can be seen in the case of the decimation processor described above. Given a trace e_0 e_1 …, the processor outputs e_0 immediately after reading it. However, it does not produce any output after consuming e_1; it will only produce another output after having consumed n inputs.
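To make this behaviour concrete, the decimation processor can be implemented in a handful of lines of Java, using the SingleProcessor class and the helper methods that will be presented later in this paper. Only the counter logic below is specific to decimation; the rest is boilerplate common to all processors.

import java.util.Queue;
import ca.uqac.lif.cep.*;

// A sketch of the decimation processor: lets through every n-th input
// event and discards the others, using the SingleProcessor API
// described later in this paper.
public class Decimate extends SingleProcessor {
  private final int n;    // decimation interval
  private int count = 0;  // number of events seen so far

  public Decimate(int n) {
    super(1, 1); // one input pipe, one output pipe
    this.n = n;
  }

  public Queue<Object[]> compute(Object[] inputs) {
    if (count++ % n == 0) {
      return Processor.wrapObject(inputs[0]); // let this event through
    }
    return Processor.getEmptyQueue(); // discard all other events
  }
}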
Processors can be composed (or “piped”) together, by letting the output of one processor be the input of another. Another important characteristic of BeepBeep is that this piping is possible as long as the type of the first processor's output matches the second processor's input type. The piping of processors can be represented graphically, as Figure <ref> illustrates. In this case, an input trace (of numbers) is duplicated into two copies; the first is sent as the first input of a 2:1 processor labelled “+”; the second is first sent to the decimation processor, whose output is connected to the second input of “+”. The end result is that output event i will contain the value e_i + e_ni.

When a processor has an arity of 2 or more, the processing of its input is done synchronously. A front is a tuple of input events with matching positions in each input stream. A computation step will be performed if and only if a complete front is available, i.e. an event can be consumed from each input trace. This is a strong assumption; many other CEP engines allow events to be processed asynchronously, meaning that the output of a query may depend on which input trace produced an event first. One can easily imagine situations where synchronous processing is not appropriate. However, in use cases where it is suitable, assuming synchronous processing greatly simplifies the definition and implementation of processors. The output result is no longer sensitive to the order in which events arrive at each input, or to the time it takes for an upstream processor to compute an output.[The order of arrival of events from the same input trace, obviously, is preserved.] As a result, given the formal definition of each processor in a query, a “pen and paper” calculation will always yield the same result as the implementation.

This hypothesis entails that processors must implicitly manage buffers to store input events until a complete front can be consumed. Consider the case of the processor chain illustrated in Figure <ref>. When e_0 is made available in the input trace, both the top and bottom branches output it immediately, and processor “+” can compute their sum right away. When e_1 is made available, the first input of “+” receives it immediately. However, the decimation processor produces no output for this event. Hence “+” cannot produce an output, and must keep e_1 in a queue associated to its first input. Events e_2, e_3, … will be accumulated into that queue, until event e_n is made available. This time, the decimation processor produces an output, and e_n arrives at the second input of “+”. Now that one event can be consumed from each input trace, the processor can produce the result (in this case, e_0 + e_n) and remove an event from both its input queues. Note that while the queue for the second input becomes empty again, the queue for the first input still contains e_2, … e_n. The process continues for the subsequent events, until e_2n, at which point “+” computes e_2 + e_2n, and so on. In this chain of processors, the size of the queue for the first input of “+” grows by one event, except when i is a multiple of n.

This buffering is implicit in the formal definition of processors, and is absent from the graphical representation of their piping. Nevertheless, the concrete implementation of a processor must take care of these buffers in order to produce the correct output. In BeepBeep, this is done with the abstract class SingleProcessor; descendants of this class simply need to implement a method named compute(), which is called only when an event is ready to be consumed at each input. Examples will be given in Section <ref>.

The reader can observe that many advanced features present in other event stream engines (such as handling out-of-order events, fault tolerance, order of arrival, clock synchronization, or validity intervals for events) are deliberately left out of this model.
One may argue that this makes for a poor and unappealing system, in terms of the number of bleeding-edge research concepts it implements. This is counter-balanced by three factors. First, some of these features can be handled by the environment in which the system is running; this is particularly the case of fault tolerance (virtual machine infrastructures readily provide crash recovery) and synchronization (the Precision Time Protocol can timestamp with sub-microsecond accuracy across machines). Similarly, BeepBeep can easily be run within another CEP architecture, such as Apache Spark, and benefit from its reliability properties. These solutions are certainly far superior to any potential built-in replication of their functionalities within the engine. Second, there exist numerous use cases (such as the ones we presented in Section <ref>) where these features are simply not needed. For those use cases, a user actually benefits from a simpler computational model. Finally, we shall see that in counterpart, thanks to this simple model, BeepBeep implements many features that other CEP engines do not.

§.§ “Pull” vs. “Push” Mode

A first such feature allows events to be generated in two modes. In pull mode, the handling of events in the processor pipe is triggered by requesting a new output event. In order to produce this output event, the processor may itself need to fetch new events from its input(s), which in turn may ultimately lead to fetching events from the original input streams. On the contrary, in push mode, output events are produced by the arrival of new events at the input side of the processor pipe. Both modes of operation require processors to handle input and output buffers —pull mode asks processors to pile up events into their output buffer, while push mode makes them stack events into their input buffer. The presence of both input and output queues is necessary only to accommodate both modes. A pull-only system could be designed with only output queues, while a push-only system would only require input queues.

The interaction with a Processor object is done through two interfaces: Pullable and Pushable. A Pullable object queries events on one of a processor's outputs. For a processor with an output arity of n, there exist n distinct pullables, namely one for each output stream. Every pullable works roughly like a classical Iterator: it is possible to check whether new output events are available (hasNext()), and get one new output event (pull()). However, contrarily to iterators, a Pullable has two versions of each method: a “soft” and a “hard” version. “Soft” methods make a single attempt at producing an output event. Since processors are connected in a chain, this generally means pulling events from the input in order to produce the output. However, if pulling the input produces no event, no output event can be produced. In such a case, hasNextSoft() will return a special value (MAYBE), and pullSoft() will return null. Soft methods can be seen as doing “one turn of the crank” on the whole chain of processors —whether or not this outputs something.

“Hard” methods are actually calls to soft methods until an output event is produced: the “crank” is turned as long as necessary to produce something. This means that one call to, e.g., pull() may consume more than one event from a processor's input. Therefore, calls to hasNext() never return MAYBE (only YES or NO), and pull() returns null only if no event will ever be output in the future (this occurs, for example, when pulling events from a file, and the end of the file has been reached). For the same processor, mixing calls to soft and hard methods is discouraged. As a matter of fact, the Pullable's behaviour in such a situation is left undefined.
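As a small illustration of pull mode, the following sketch repeatedly pulls events from the (only) output of some previously created processor p. The accessor name getPullableOutput() and the NextStatus constants are assumptions made for this example.

// Pull mode sketch; accessor and constant names are assumptions
Pullable pul = p.getPullableOutput(0);
while (pul.hasNext()) {    // "hard" call: turns the crank as needed
  Object o = pul.pull();   // may consume several upstream events
  System.out.println(o);
}
// "Soft" variant: a single turn of the crank, which may yield nothing
if (pul.hasNextSoft() == Pullable.NextStatus.YES) {
  Object o = pul.pullSoft();
}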
Interface Pushable is the opposite of Pullable: rather than querying events from a processor's output (i.e. “pulling”), it gives events to a processor's input. This has for effect of triggering the processor's computation and “pushing” results (if any) to the processor's output. If a processor is of input arity n, there exist n distinct Pushables: one for each input trace.

It shall be noted that in BeepBeep, any processor can be used in both push and pull modes. In contrast, CEP systems (with the exception of TelegraphCQ) and runtime monitors generally support only one of these modes. The “lazy” evaluation of Apache Spark is an example of pull mode: upstream data is only generated upon request from downstream consumers. In contrast, the “publish-subscribe” model adopted by some event brokers (like Apache Kafka) corresponds to BeepBeep's push mode: an application subscribes to an event source, and is then notified of incoming events by the platform. This is also the case of Esper and Siddhi, where a user must define a callback function that the system calls whenever new output events are ready to be processed. The reader is referred to Figure <ref> for an example. Surprisingly, this mode of operation, favoured by most engines, is the opposite of what is typically done in classical relational databases; the following shows a Java code sample querying a database using SQL:

ResultSet res = st.executeQuery("SELECT * FROM mytable");
while (res.next()) {
  int i = res.getInt("a");
}

Once the query is executed, the while loop fetches the tuples one by one, which clearly is an example of pull mode. Conversely, the use of push mode in an RDBMS has seldom (if ever) been seen.

The notion of push and pull is also present in the field of event-based parsing of XML documents, where so-called “SAX” (push) parsers <cit.> are opposed to “StAX” (pull) parsers <cit.>. XQuery engines such as XQPull <cit.> implement these models to evaluate XQuery statements over XML documents. The use of such streaming XQuery engines to evaluate temporal logic properties on event traces had already been explored in an early form in <cit.>.

§.§ Built-in Processors

BeepBeep is organized along a modular architecture. The main part of BeepBeep is called the engine, which provides the basic classes for creating processors and functions, and contains a handful of general-purpose processors for manipulating traces. The rest of BeepBeep's functionalities is dispersed across a number of palettes. In the following, we describe the basic processors provided by BeepBeep's engine.

§.§.§ Function Processors

A first way to create a processor is by lifting any m:n function f into an m:n processor. This is done by applying f successively to each front of input events, producing the output events. The processor responsible for this is called a FunctionProcessor. A first example of a function processor was shown in Figure <ref>. A function processor is created by applying the “+” (addition) function, represented by an oval, to the left and right inputs, producing the output. Recall that in BeepBeep, functions are first-class objects. Hence the addition function can be passed as an argument when instantiating the FunctionProcessor. Since this function is 2:1, the resulting processor is also 2:1. Formally, the function processor can be noted as:

⟦e_1, …, e_m : f⟧_i ≡ f(e_1[i], …, e_m[i])

Two special cases of function processors are worth mentioning. The first is the Passthrough, which is the function processor where m=n and f is the identity function.
The passthrough merely relays to its output what it receives at its input. The Mutator is an m:n processor where f returns the same output, no matter its input. Hence, this processor “mutates” whatever its input is into the same output. The Fork is a 1:n processor that simply copies its input to its n outputs.

A variant of the function processor is the CumulativeProcessor, noted Σ_f^t. Contrarily to the processors above, which are stateless, a cumulative processor is stateful. Given a binary function f : 𝕋 × 𝕌 → 𝕋, a cumulative processor is defined as:

⟦e : Σ_f^t⟧_i ≡ f(⟦e : Σ_f^t⟧_i-1, e[i])

Intuitively, if x is the previous value returned by the processor, its output on the next event y will be f(x,y). The processor requires an initial value t ∈ 𝕋 to compute its first output.

Depending on the function f, cumulative processors can represent many things. If f : ℝ^2 → ℝ is the addition and 0 ∈ ℝ is the start value, the processor outputs the cumulative sum of all values received so far. If f : {⊤,⊥,?}^2 → {⊤,⊥,?} is the three-valued logical conjunction and ? is the start value, then the processor computes the three-valued conjunction of events received so far, and has the same semantics as the LTL_3 “Globally” operator.

These simple processors can already be mixed. For example, an “average” processor can be built by dividing the output of two streams: one produced by the cumulative sum processor, the other produced by a mutator into the constant 1 piped into another cumulative sum. The result is indeed the sum of events divided by their number (a code sketch of this construction is given at the end of this section).

§.§.§ Trace Manipulating Processors

A few processors can be used to alter the sequence of events received. We already mentioned the decimator, formally named Decimate, which returns every n-th input event and discards the others. The Freeze processor, noted ↓, repeats the first event received; it is formally defined as

⟦e : ↓⟧ ≡ (e[0])^*

A processor generates new output events only when being fed an input front. Hence, the Freeze processor does not output an infinite stream of e_0 right after receiving it; rather, it outputs one event for each input event consumed.

Another operation that can be applied to a trace is trimming its output. Given a trace e, the Trim processor, denoted Trim_n, returns the trace starting at its n-th input event. This is formalized as follows:

⟦e : Trim_n⟧ ≡ e^n

where e^n denotes the trace e_n e_n+1 ….

Events can also be discarded from a trace based on a condition. The Filter processor is an n:n−1 processor defined as follows:

⟦e_1, …, e_n-1, e_n : Filter⟧_i ≡ (e_1[i], …, e_n-1[i]) if e_n[i] = ⊤, and ε otherwise

The filter behaves like a passthrough on its first n−1 inputs, and uses its last input trace as a guard; the events are let through on its n−1 outputs if the corresponding event of input trace n is ⊤; otherwise, no output is produced. A special case is a binary filter, where its first input trace contains the events to filter, and the second trace decides which ones to keep.

This filtering mechanism, although simple to define, turns out to be very generic. The processor does not impose any particular way to determine if the events should be kept or discarded. As long as it is connected to something that produces Boolean values, any input can be filtered, and according to any condition —including conditions that require knowledge of future events to be evaluated. Note also that the sequence of Booleans can come from a different trace than the events to filter. This should be contrasted with CEP systems, which allow filtering events only through the use of a WHERE clause inside a SELECT statement, and whose syntax is limited to a few simple functions.
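The “average” construction mentioned earlier can be written in a few lines of glue code, as the sketch below shows. The four-argument form of Connector.connect() is assumed to link output i of its first argument to input j of its second; the constructor signatures of CumulativeProcessor and Mutator, as well as the Division function, are likewise assumptions for illustration.

// Sketch of the running-average chain described in the text.
// Constructor signatures and helper classes are assumptions.
Fork fork = new Fork(2);
CumulativeProcessor sum =
    new CumulativeProcessor(new Addition(), 0);  // running sum
Connector.connect(fork, 0, sum, 0);
Mutator one = new Mutator(1);                    // every event becomes 1
CumulativeProcessor count =
    new CumulativeProcessor(new Addition(), 0);  // running count
Connector.connect(fork, 1, one, 0);
Connector.connect(one, 0, count, 0);
FunctionProcessor avg = new FunctionProcessor(new Division());
Connector.connect(sum, 0, avg, 0);               // numerator
Connector.connect(count, 0, avg, 1);             // average = sum / count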
§.§.§ Window Processor

Many times, one wants to perform computations over a “sliding window” of all events received, such as the sum of each set of n successive events. This would produce an output sequence where the first number is the sum of events 1, 2, 3, …, n in the input sequence, the second number is the sum of events 2, 3, 4, …, n+1, and so on. Let φ : T^* → U^* be a 1:1 processor. The window processor of φ of width n, noted as Υ_n(φ), is defined as follows:

⟦e : Υ_n(φ)⟧_i ≡ ⟦e^i : φ⟧_n

One can see how this processor sends the first n events (i.e. events numbered 0 to n−1) to an instance of φ, which is then queried for its n-th output event. The processor also sends events 1 to n to a second instance of φ, which is then also queried for its n-th output event, and so on. The resulting trace is indeed the evaluation of φ on a sliding window of n successive events. In existing CEP engines, window processors can be used in a restricted way, generally within a SELECT statement, and only a few simple functions (such as sum or average) can be applied to the window. In contrast, in BeepBeep, any processor can be encased in a sliding window, provided it outputs at least n events when given n fronts. This includes stateful processors: for example, a window of width n can contain a processor that increments a count whenever an event a is followed by a b. The output trace hence produces the number of times a is followed by b in a window of width n.

§.§.§ Slicer

The Slicer is a 1:1 processor that separates an input trace into different “slices”. It takes as input a processor φ and a function f : 𝕋 → 𝕌, called the slicing function. There exists potentially one instance of φ for each value in the image of f. If 𝕋 is the domain of the slicing function, and 𝕍 is the output type of φ, the slicer is a processor whose input trace is of type 𝕋 and whose output trace is of type 2^𝕍.

When an event e is to be consumed, the slicer evaluates c = f(e). This value determines to what instance of φ the event will be dispatched. If no instance of φ is associated to c, a new copy of φ is initialized. Event e is then given to the appropriate instance of φ. Finally, the last event output by every instance of φ is collected into a set, and that set is the output event corresponding to input event e. The function f may return a special value #, indicating that no new slice must be created, but that the incoming event must be dispatched to all slices.

As a simple example, one may be interested in computing the sum of all odd and even numbers in a trace separately. This can be done by defining the slicing function as f : x ↦ x mod 2, and φ as the cumulative processor Σ_+^0, which computes the cumulative sum. Let us consider the trace 2,3,5. Upon receiving the first event, the slicer evaluates f(2) = 0; a new instance of φ is created, and is fed the value 2. Then the last value of all instances of φ is collected, which leads to the set {2}. The process is repeated for the next event, 3. This time, f(3) = 1; a new instance of φ is created, and the output this time becomes {2,3}. When 5 is consumed, it is dispatched to the existing instance of φ associated to f(5) = 1, and the output is now {2,8}.

A particular case of slicer is when φ is a processor returning Boolean values; the output of the slicer becomes a set of Boolean values. Applying the logical conjunction of all elements of the set results in checking that φ applies “for all slices”, while applying the logical disjunction amounts to existential quantification over slices.
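In code, the odd/even example above could be set up as in the following sketch. The Slicer constructor is assumed to take the slicing function and the processor to clone for each slice; the ModTwo function class and the accessor names are likewise hypothetical.

// Hypothetical sketch: summing odd and even numbers in separate slices
Function sliceFn = new ModTwo();  // f : x -> x mod 2 (assumed class)
Processor phi = new CumulativeProcessor(new Addition(), 0); // Sigma_+^0
Slicer slicer = new Slicer(sliceFn, phi);   // assumed constructor
Pushable in = slicer.getPushableInput(0);   // assumed accessor
in.push(2); // output: {2}
in.push(3); // output: {2,3}
in.push(5); // output: {2,8}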
The Slicer is reminiscent of Esper's context partition (cf. Section <ref>). As a matter of fact, one can use for f a function that depends on a processor's context, which may be modified from outside the processor. In such a case, events are dispatched to a slice depending on an external context.

§ THE EVENT STREAM QUERY LANGUAGE

BeepBeep provides multiple ways to create processor pipes and to fetch their results. The first way is programmatically, using BeepBeep as a library and Java as the glue code for creating the processors and connecting them. For example, the code snippet in Figure <ref> creates the processor chain corresponding to Figure <ref>. A Fork is instructed to create two copies of its input. The first (or “left”) output of the fork is connected to the “left” input of a processor performing an addition. The second (or “right”) output of the fork is connected to the input of a decimation processor, which itself is connected to the “right” input of the sum processor. One then gets a reference to the sum processor's (only) Pullable, and starts pulling events from that chain. The piping is done through the Connector.connect() method; when a processor has two inputs or outputs, the symbolic names LEFT/TOP and RIGHT/BOTTOM can be used instead of 0 and 1. The symbolic names INPUT and OUTPUT refer to the (only) input or output of a processor, and stand for the value 0.

Another way of creating queries is by using BeepBeep's query language, called the Event Stream Query Language (eSQL). eSQL is the result of a careful process that went along with the development of BeepBeep's processors. Contrarily to some other systems, where the query language is built-in and inseparable from the underlying processing model, in BeepBeep the query language is just another means of instantiating a chain of processors. Rather than programmatically creating and piping processor instances, an Interpreter object can be used to convert a structured text string into that same piping. This means that processors themselves are unaware of the way they have been created. Moreover, we shall see in Section <ref> that even the basic grammar of the language is completely user-modifiable.

§.§ Basic Constructs

eSQL's grammar follows a modular design: each element of BeepBeep's architecture (processors, functions) comes with its own syntactical rules. The composition of two processors is expressed by enclosing an expression within another one.

Top production rule
  <S> ::= <processor> | <processor-def> ;

Definition of a processor
  <processor> ::= <p-placeholder> | <userdef-proc> ;
  <p-placeholder> ::= * ;

User-defined processors. Rules get dynamically added here
  <userdef-proc> ::= gnarfnfar ;

Functions. Rules are added by grammar extensions.
  <c-function> ::= arfarfarf

Table <ref> shows the basic language rules for manipulating processors and functions; two important non-terminal symbols defined there are processor and function.[The grammar shown in this section's tables is a direct formatting of BeepBeep's grammar files, taken from its source code. No modification or adaptation to the files was made for this paper.] Creating a constant function out of a constant symbol c is done by writing the symbol c directly. Applying a function named f on event traces is done by writing f followed by L, where L is a comma-separated list of expressions that should each parse as a processor.
Applying a cumulative processor built from a function f to an input trace P is done with a similar construct combining P and f. BeepBeep comes with only a handful of built-in functions: classical Boolean connectives and equality. Logical conjunction and disjunction can also be referred to by their names, so that they can be used inside such an expression. These constructs can be freely mixed, so that one can compute the cumulative sum of events from an input trace P in a single expression.

Table <ref> shows the syntax for the basic built-in processors included in BeepBeep's core. The syntax for the Freeze, Decimate, Prefix and Trim processors is straightforward; for example, picking one event in every four from some trace P is done with the Decimate construct applied to P. The Window processor is slightly more involved. As defined in Section <ref>, the window requires an input trace P, a window width n, and another processor P' to run on each window. Since P' is itself a processor, its expression contains somewhere a reference to its input trace; this reference is replaced by the special placeholder *. One can write, for example, an expression that computes the sum of three successive events. The slicer works in a similar way. It requires an input trace P, a slicing function f, and a processor P' to be applied on each slice of the input trace.

The last processor shown in Table <ref> is the Collator. This processor is declared using a dedicated keyword, followed by a list of expressions that should each parse as a processor; each can be given a name using an optional keyword. The collator can be used to apply a computation to more than one input trace. For example, if P and P' are two expressions that produce streams of numbers, one can express the pairwise sum of windows of length 3 from these input streams. The expression defines two placeholders for events from each input trace, named $A and $B, which are then used in an expression involving a function. The use of the dollar sign ($) is only a convention; placeholders do not necessarily have to start with this symbol.

§.§ Creating Definitions

One notable feature of eSQL is the capability for a user to extend the grammar dynamically through expressions, using a dedicated definition keyword. The corresponding syntactical rules are described in Table <ref>. For example, a user can define a new construct of the form THE COUNT OF @P, which counts the events of a trace. The second line of such an expression declares a new rule for the non-terminal symbol ⟨userdef-proc⟩ in the grammar of Table <ref>. It gives the syntax for that new rule; in that case, it is the expression THE COUNT OF, followed by the symbol “@P”. The first line declares that “@P” must be a grammatical construct whose parsing matches the non-terminal ⟨processor⟩. Finally, the remainder of the expression describes what THE COUNT OF @P should be replaced with when evaluating an expression; in this case, it is an ordinary eSQL statement. From that point on, THE COUNT OF @P can be used anywhere in an expression where a grammatical construct of type ⟨processor⟩ is required, and this expression itself can accept for @P any processor expression.

This mechanism proves to be much more flexible than the user-defined functions provided by other languages, as any element of the original grammar can be extended with new definitions, themselves involving any other grammatical element. For example, one can easily define a numerical constant standing for the value 3.1416. This is a special case of the processor-def grammatical construct in Table <ref>, in which the clause declaring placeholders is empty.

§ EXTENDING BASIC FUNCTIONALITIES

BeepBeep was designed from the start to be easily extensible.
Any functionality beyond the few built-in processors presented in Section <ref> is implemented through custom processors and grammar extensions, grouped in packages called palettes. Concretely, a palette is implemented as a JAR file that is loaded with BeepBeep's main program to extend its functionalities in a particular way, through the mechanisms described in this section. This modular organization is a flexible and generic means to extend the engine to various application domains, in ways unforeseen by its original designers. Palettes make the engine's core (and each palette individually) relatively small and self-contained, easing the development and debugging process. Moreover, for any given application, only the engine and a small number of palettes need to be loaded; this results in fewer lines of dead code than what a monolithic piece of software would achieve. Finally, it is hoped that BeepBeep's palette architecture, combined with its simple extension mechanisms, will help third-party users contribute to the BeepBeep ecosystem by developing and distributing extensions suited to their own needs.

§.§ Creating Custom Processors

Sometimes, creating a new processor cannot easily be done by combining existing ones using the definition mechanism described above. BeepBeep also allows users to define their own processors directly as Java objects, using no more than a few lines of boilerplate code. The simplest way to do so is to extend the SingleProcessor class, which takes care of most of the “plumbing” related to event management: connecting inputs and outputs, looking after event queues, etc. All that is left to do is to define its input and output arity, and to write the actual computation that should occur, i.e. what output event(s) to produce (if any), given an input event. We illustrate this process on a small example.

The minimal working example for a custom processor is made of a few lines of code, and results in a processor that accepts no inputs, and produces no output:

import java.util.Queue;
import ca.uqac.lif.cep.*;

public class MyProcessor extends SingleProcessor {
  public MyProcessor() {
    super(0, 0);
  }

  public Queue<Object[]> compute(Object[] inputs) {
    return null;
  }
}

§.§.§ Example 1: Euclidean Distance

Consider a processor that takes as input two traces. The events of each trace are instances of a user-defined class Point, which contains member fields x and y. We will write a processor that takes one event (i.e. one Point) from each input trace, and returns the Euclidean distance between these two points. The input arity of this processor is therefore 2 (it receives two points at a time), and its output arity is 1 (it outputs a number). Specifying the input and output arity is done through the call to super() in the processor's constructor: the first argument is the input arity, and the second argument is the output arity.

The actual functionality of the processor is written in the body of method compute(). This method is called whenever an input event is available, and a new output event is required. Its argument is an array of Java Objects; the size of that array is the input arity that was declared for this processor (in our case: 2).

import java.util.Queue;
import ca.uqac.lif.cep.*;

public class EuclideanDistance extends SingleProcessor {
  public EuclideanDistance() {
    super(2, 1); // two input traces, one output trace
  }

  public Queue<Object[]> compute(Object[] inputs) {
    // One event from each input trace
    Point p1 = (Point) inputs[0];
    Point p2 = (Point) inputs[1];
    double distance = Math.sqrt(Math.pow(p2.x - p1.x, 2)
        + Math.pow(p2.y - p1.y, 2));
    return Processor.wrapObject(distance);
  }
}

The compute() method must return a queue of arrays of objects. If the processor is of output arity n, it must put an event into each of its n output traces. It may also decide to output more than one such n-uplet for a single input event, and these events are accumulated into a queue —hence the slightly odd return type. However, if the processor outputs a single element, the tedious process of creating an array of size 1, putting the element in the array, creating a queue, putting the array into the queue and returning the queue is encapsulated in the static method wrapObject(), which does exactly that.
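To try this processor out, one can connect two sources of points to its inputs and pull the distances from its output. In the following sketch, the QueueSource class, its setEvents() method, the getPullableOutput() accessor and a Point constructor taking x and y are assumed names used for illustration.

// Hypothetical usage sketch for EuclideanDistance
QueueSource src1 = new QueueSource(); // emits Point events (assumed class)
QueueSource src2 = new QueueSource();
src1.setEvents(new Point(0, 0), new Point(1, 1));
src2.setEvents(new Point(3, 4), new Point(1, 2));
EuclideanDistance dist = new EuclideanDistance();
Connector.connect(src1, 0, dist, 0);
Connector.connect(src2, 0, dist, 1);
Pullable p = dist.getPullableOutput(0);
System.out.println(p.pull()); // 5.0
System.out.println(p.pull()); // 1.0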
§.§.§ Example 2: Maximum

As a second example, we create a processor that outputs the maximum between the current event and the previous one. That is, given the following input trace 5, 1, 2, 3, 6, 4, …, the processor should output: (nothing), 5, 2, 3, 6, 6, …. Notice how, after receiving the first event, the processor should not return anything yet, as it needs two events before saying something. Here is a possible implementation:

import java.util.Queue;
import ca.uqac.lif.cep.*;

public class MyMax extends SingleProcessor {
  Number last = null;

  public MyMax() {
    super(1, 1);
  }

  public Queue<Object[]> compute(Object[] inputs) {
    Number current = (Number) inputs[0], output;
    if (last != null) {
      // A previous event exists: output the largest of the two
      output = Math.max(last.doubleValue(), current.doubleValue());
      last = current;
      return Processor.wrapObject(output);
    } else {
      // First event: remember it and output nothing
      last = current;
      return Processor.getEmptyQueue();
    }
  }
}

This example, as well as the previous one, are meant to illustrate how to create custom processors. However, in both cases, it is possible to achieve the same functionality by composing basic processors already provided by BeepBeep. In the first case, one could define a binary distance function, and encase it into a FunctionProcessor; in the second case, one could apply a window of width 2, using the maximum as the function to evaluate.

§.§ Grammar Extensions

By creating a custom processor, it is possible to pipe it to any other existing processor, provided that its input and output events are of compatible types. We have seen in Section <ref> how a combination of existing processors can be defined directly within eSQL; it is also possible to extend the grammar of the eSQL language for a custom Processor object, so that it can be used directly in eSQL queries.

As an example, let us consider the following processor, which repeats every input event n times, where n is a parameter decided when the processor is instantiated. Its implementation is as follows:

import java.util.LinkedList;
import java.util.Queue;
import ca.uqac.lif.cep.*;

public class Repeater extends SingleProcessor {
  private final int numReps;

  public Repeater(int n) {
    super(1, 1);
    this.numReps = n;
  }

  public Queue<Object[]> compute(Object[] inputs) {
    Queue<Object[]> queue = new LinkedList<Object[]>();
    for (int i = 0; i < this.numReps; i++) {
      queue.add(inputs); // repeat the same input front numReps times
    }
    return queue;
  }
}

The first step is to decide what syntax one shall use to invoke the processor. A possibility could be: “REPEAT (p) n TIMES”. In this syntax, p refers to any other eSQL expression that builds a processor, and n is a number. The result of this expression is itself another object of type Processor.

The second step is to tell the BeepBeep interpreter to add to its grammar a new case for the parsing of the existing ⟨processor⟩ rule. This rule should correspond to the parsing of the newly-defined Repeater processor. This is done as follows:

Interpreter my_int = new Interpreter();
my_int.addCaseToRule("<processor>", "<repeater>");
my_int.addRule("<repeater>",
    "REPEAT ( <processor> ) <number> TIMES");
my_int.addAssociation("<repeater>", "my.package.Repeater");

The second instruction tells the interpreter that ⟨processor⟩ can be parsed as a ⟨repeater⟩. The parsing pattern for this non-terminal is then added with the call to addRule(). This allows the interpreter to know that a REPEAT … TIMES expression corresponds to a processor. The last call tells the interpreter that encountering the ⟨repeater⟩ rule will result in the instantiation of a Java object of the class Repeater.
This second argument should be the fully qualified name of the class. That is, if Repeater is located in package my.package, then one should write my.package.Repeater in the call to addAssociation(). Upon parsing the ⟨repeater⟩ rule, the interpreter will look for a method called build() in the corresponding class. The task of the build() method is to consume elements of the parse stack to build a new instance of the object to create, and to put that new object back on the stack so that other objects can consume it during their own construction. Creating a new instance of Repeater is therefore straightforward. One simply has to pop the stack to fetch the value of n and the Processor object to use as input, and discard all “useless” keywords. One can then instantiate a new Repeater, pipe the input into it (using Connector.connect()), and put the resulting object on the stack.

public static void build(Stack<Object> stack) {
  stack.pop(); // TIMES
  Number n = (Number) stack.pop();
  stack.pop(); // )
  Processor p = (Processor) stack.pop();
  stack.pop(); // (
  stack.pop(); // REPEAT
  Repeater r = new Repeater(n.intValue());
  Connector.connect(p, r);
  stack.push(r);
}

The possibility of extending eSQL's grammar in such a way is a feature unique to the BeepBeep event stream query engine. Adding new grammatical constructs is actually more powerful than simply allowing user-defined functions, as is done in some other ESP engines. It allows eSQL to be extended to become a Domain-Specific Language (DSL). As a matter of fact, even the grammar for the built-in processors is soft-coded: it can be completely rewritten at runtime. Therefore, eSQL, as described in this paper, is only a “suggestion” of syntax. Altering the basic grammar of any one of the other systems described in this paper is simply not offered to the user.

This feature required the development of a special parser called Bullwinkle.[<https://github.com/sylvainhalle/Bullwinkle>] Commonly used libraries, such as Yacc or Bison, are parser generators: given a grammar, they generate the code corresponding to a parser for that grammar, which can then be included within another application. However, changing this grammar requires re-generating the parser, and hence recompiling the application that uses it. It is clear that such libraries are ill-suited for use cases where new rules can be dynamically added during execution. In contrast, Bullwinkle reads a grammar and parses expressions at run time, making it possible for the grammar to be modified at will by a user.

§.§ Existing Palettes

We describe a few of the palettes that have already been developed for BeepBeep in the recent past. These palettes are available alongside BeepBeep from a companion software repository.[<https://github.com/liflab/beepbeep-3-palettes>]

§.§.§ Tuples and JDBC

Of particular interest to this paper is the palette manipulating events that are associative maps of scalar values —in other words, tuples in the relational sense of the term. In addition, the palette includes a few utility functions for manipulating tuples. One processor of this palette allows a tuple to be created by naming and combining the contents of multiple input events. Another transforms input events from multiple traces into an array (which can be used by the former), and a third internally duplicates an input trace and sends it into a function processor evaluating some function.
Combined together, these processors provide the same kind of functionality as the SQL-like SELECT statement of other CEP engines. To this end, the palette defines a new grammatical construct, called SELECT, that allows an output tuple to be created by picking and combining attributes of one or more input tuples. The grammar extension for the SELECT statement is given in Table <ref>. For the sake of simplicity, we only show a few arithmetical functions that manipulate numerical values; the actual syntax can easily be made to accommodate functions manipulating other types of scalar values.

One can see how this syntax precisely mirrors the basic form of SQL's command of same name. In contrast to the SELECT statement found in other ESP tools, eSQL's SELECT only manipulates tuples, and not traces. Operations such as filtering or windowing are obtained by composing this statement with other constructs from BeepBeep's grammar. For example, selecting tuples that match some condition is done by piping the output of SELECT into BeepBeep's Filter processor, which is invoked syntactically through the WHERE keyword, as the grammar of Table <ref> has already shown. This, as it turns out, results in an expression that reads exactly like SQL's SELECT … WHERE, ensuring the backward compatibility that was one of the design goals stated in Section <ref>.

This palette also allows BeepBeep to be used through Java's JDBC API, as shown in Figure <ref>. This makes it possible to access the BeepBeep interpreter like any other relational database engine. This is also in line with BeepBeep's design goal of relational transparency. Surprisingly, despite their obvious roots in database theory, few of the other CEP engines considered in this study (and none of the runtime monitors) provide the same functionality.

§.§.§ First-Order Linear Temporal Logic

This palette provides processors for evaluating all operators of Linear Temporal Logic (LTL), in addition to the first-order quantification defined in LTL-FO^+ (and present in previous versions of BeepBeep) <cit.>. Each of these operators comes in two flavours: Boolean and “Troolean”.

Boolean processors are provided for the LTL temporal operators, among them “Globally”, “Eventually”, “Next” and “Until”. If a_0 a_1 a_2 … is an input trace, the “Globally” processor produces an output trace b_0 b_1 b_2 … such that b_i = ⊥ if and only if there exists j ≥ i such that a_j = ⊥. In other words, the i-th output event is the two-valued verdict of evaluating the formula on the input trace, starting at the i-th event. A similar reasoning is applied to the other operators.

Troolean counterparts of these processors are also provided; each is associated to the Boolean processor with a similar name. If a_0 a_1 a_2 … is an input trace, the Troolean “Globally” processor produces an output trace b_0 b_1 b_2 … such that b_i = ⊥ if there exists j ≤ i such that a_j = ⊥, and “?” (the “inconclusive” value of LTL_3) otherwise. In other words, the i-th output event is the three-valued verdict of evaluating the formula on the input trace, after reading i events.

Note that these two semantics are distinct, and that both are necessary in the context of event stream processing. Consider the simple LTL property a → F b. In a monitoring context, one is interested in Troolean operators: the verdict of the monitor should be the partial result of evaluating an expression for the current prefix of the trace. Hence, in the case of the trace accb, the output trace should be ???⊤: the monitor comes with a definite verdict after reading the fourth event.

However, one may also be interested in using an LTL expression φ as a filter: from the input trace, output only events such that φ holds. In such a case, Boolean operators are appropriate.
Using the same property and the same trace as above, the expected behaviour is to retain the input events a, c, and c; when b arrives, all four events can be released at once, as the fate of a becomes defined (it has been followed by a b), and the expression is true right away on the remaining three events. This behaviour is similar to that of an enforcement automaton <cit.>.

First-order quantifiers are of the form ∀ x ∈ f(e) : φ and ∃ x ∈ f(e) : φ. Here, f is an arbitrary function that is evaluated over the current event; the only requirement is that it must return a collection (set, list or array) of values. An instance of the processor φ is created for each value c of that collection; for each instance, the processor's context is augmented with a new association x ↦ c. Moreover, φ can be any processor; this entails it is possible to perform quantification over virtually anything. The LTL palette provides its own extensions to eSQL, shown in Table <ref>.

§.§.§ Finite-State Machines

This palette allows one to define a Moore machine, a special case of finite-state machine where each state is associated to an output symbol. This Moore machine allows its transitions to be guarded by arbitrary functions; hence it can operate on traces of events of any type. Moreover, transitions can be associated to a list of context assignments, meaning that the machine can also query and modify its Context object. Depending on the context object being manipulated, the machine can work as a pushdown automaton, an extended finite-state machine <cit.>, and multiple variations thereof. Combined with the first-order quantifiers of the LTL-FO^+ package, a processing similar to Quantified Event Automata (QEA) <cit.> is also possible.

§.§.§ Other Palettes

Among other palettes, we mention:

Gnuplot This palette allows the conversion of events into input files for the Gnuplot application. For example, an event that is a set of (x,y) coordinates can be transformed into a text file producing a 2D scatterplot of these points. An additional processor can receive these strings of text, call Gnuplot in the background and retrieve its output. The events of the output trace, in this case, are binary strings containing image files.[An example of BeepBeep's plotting feature can be seen at: <https://www.youtube.com/watch?v=XyPweHGVI9Q>]

XML, JSON and CSV The XML palette provides a processor that converts text events into parsed XML documents. It also contains a Function object that can evaluate an XPath expression on an XML document. Another palette provides the same functionalities for events in the JSON and the CSV formats.

Network packets This palette allows events to be created from traces of network packets captured from a network interface, by making use of the JNetPcap library. It defines a number of functions to extract data from these captured packets, such as their header fields or payload content. Combined with the FSM and LTL palettes, it can be used to express complex sequential patterns over network packets, and form the basis of an Intrusion Detection System (IDS).
Web Sockets This palette provides a simple way of serializing event data and transmitting it through a web socket. By splitting a query graph across multiple machines and interposing a web socket at their interfaces, a basic form of distribution of computation can be achieved with virtually no configuration required.

§ USE CASES REVISITED

The previous sections have shown that BeepBeep's architecture is very generic: it allows arbitrary event types, free mixing of processors from various palettes, windowing over any processor, and an extensible query language. However, our experience with members of the industry has revealed that the advantages of such genericity may not be immediately obvious. It seems that some of them are somehow conditioned to think only of problems that can be fitted into the system they already use; the non-standard features available in BeepBeep have been frequently dismissed by consequence of this thinking “inside the box”. This is why we feel it necessary to demonstrate, using numerous and explicit examples, the range of different problems that can be tackled thanks to BeepBeep's generic architecture. In this section, we revisit every use case shown in Section <ref>, and show how each can be handled using the variety of processors and functions described earlier.

§.§ Stock Ticker

Our first example involves processing events from the Stock Ticker scenario. We show how the tumble window of Query <ref> can be written by combining BeepBeep processors. The result is shown in Figure <ref>. In this figure, events flow from the left to the right. First, we calculate the statistical moment of order n of a set of values, noted E^n(x). As Figure <ref> shows, the input trace is duplicated into two paths. Along the first (top) path, the sequence of numerical values is sent to a function processor computing the n-th power of each value; these values are then sent to a cumulative processor that calculates their sum. Along the second (bottom) path, values are sent to a Mutator processor that transforms them into the constant 1; these values are then summed into another cumulative processor. The corresponding values are divided by each other, which corresponds to the statistical moment of order n of all numerical values received so far. The average is the case where n=1.

Figure <ref> shows the chain that computes the average of stock symbol 1 over a window of 5 events. Incoming tuples are first filtered according to a function, which fetches the value of the stockSymbol attribute and compares it to the value 1. The processor that is responsible for this filtering is akin to SQL's WHERE clause. The tuples that get through this filtering are then converted into a stream of raw numbers by fetching the value of their closingPrice attribute. The statistical moment of order 1 is then computed over successive windows of width 5, and one out of every five such windows is then allowed to proceed through the last processor, producing the desired hopping window query.

This example introduces colour coding to represent event streams of various types. Orange pipes represent streams of tuples; turquoise pipes contain streams of raw numbers.

§.§ Healthcare System

We show how Query <ref> can be computed using chains of function processors. We can reuse the statistical moment processor E^n(x) defined above, and use it for the average (n=1) and standard deviation (n=2). Equipped with such processors, the desired property can be evaluated by the graph shown in Figure <ref>. The input trace is divided into four copies.
From the first copy, the statistical moment of order 1 of the second copy is subtracted, corresponding to the distance of a data point to the mean of all data points that have been read so far. This distance is then divided by the standard deviation (computed from the third copy of the trace). A function processor then evaluates whether this value is greater than the constant trace with value 1. The result is a trace of Boolean values. This trace is itself forked into two copies. One of these copies is sent into a Trim processor, which removes the first event of the input trace; both paths are sent to a processor computing their logical conjunction. Hence, an output event will have the value ⊤ whenever an input value and the next one are both more than two standard deviations from the mean. Note how this chain of processors involves events of two different types: turquoise pipes carry events consisting of a single numerical value, while grey pipes contain Boolean events.

§.§ Signal Processing

Figure <ref> describes the chain of basic event processors that are used to discover the peaks on the electrical signal. The signal from the electrical box is sent to a first processor, which transforms raw readings into name-value tuples, one for each time point. Each tuple contains numerical values for various components of the electrical signal; for example, one parameter measures the current active power of Phase 1.

The second processor picks one such parameter from the tuple, extracts its value, and discards the rest. The output trace from this processor is therefore a sequence of numbers. This sequence is then fed to the third processor, which detects sudden increases or decreases in a numerical signal. For each input event, the processor outputs the height of the peak, or the value 0 if this event is not a peak. Since an event needs to be out of the window to determine that it is a peak, the emission of output events is delayed with respect to the consumption of input events.

The next step in the processing takes care of removing some of the noise in the signal. Typical appliances consume at least 100 W and generate a starting peak much higher than that. Therefore, to avoid false positives due to noise, any peak lower than 100 W should be flattened to zero. In order to do so, the output from the peak detector is replicated in two traces. The first one (top) is sent to a simple comparator, which compares the input value with the constant trace 100, and returns either true or false. This result is the first input of the dispatcher processor, represented in Figure <ref> by traffic lights. The second input of the dispatcher is the output of the peak detector itself, while its third input, in this case, is the constant trace 0. The dispatcher's task is simple: given a triplet of events (e_1, e_2, e_3) (one from each of its inputs), output e_2 if e_1 is true, and output e_3 otherwise. In the present case, this has indeed for effect of replacing all events of the peak detector lower than 100 W with 0.

The resulting trace requires one further cleanup task. Again due to the nature of the electrical signal, two successive peak events may sometimes be reported for the same sudden increase. The last processor takes care of keeping only the first one. This processor behaves like the dispatcher, but with the additional guarantee that the second input will be selected at most once in every n successive events.
In the present context, this has for effect of eliminating “ghost” peaks in the signal. Given a feed from an electrical signal, this complete chain of processors produces an output trace of numerical events; most of them should be null, and a few others should indicate the occurrence of an abrupt increase or decrease in the values of the input signal, along with the magnitude of that change. Moreover, the position of these events, relative to the original signal, also indicates the exact moment this change was detected. As an example, Figure <ref> shows the real-time value of three components of the electrical signal, on which the output of the peak detector was superimposed. One can see that the detector behaves as intended, reporting exactly two changes of the appropriate magnitude at the right time.

The second step is to lift peak and drop events to a yet higher level of abstraction, and to report actual appliances being turned on and off. This is best formalized through the use of a Moore machine, shown in Figure <ref>. From the initial state, the event “appliance on” is output only if a peak and a plateau event of the appropriate magnitude are received in immediate succession. At this point, the event “appliance off” is emitted only if a drop of the appropriate magnitude is received. All other input events processed by the machine result in no output event being produced. Apart from the actual numerical values, this Moore machine is identical for all appliances. Notice how the abstraction performed in Step 1 simplifies the problem in Step 2 to the definition of a simple, five-state automaton.

§.§ Online Auction System

Our next example is a modified version of the auction system. Rather than simply checking that the sequencing of events for each item is followed, we will take advantage of BeepBeep's flexibility to compute a non-Boolean query: the average number of days since the start of the auction, for all items whose auction is still open and in a valid state.

The processor graph is shown in Figure <ref>. It starts at the bottom left, with a Slicer processor that takes as input tuples of values. The slicing function is defined in the oval: if the event is endOfDay, it must be sent to all slices; otherwise, the slice is identified by the element at position 1 in the tuple (this corresponds to the name of the item in all other events). For each slice, an instance of a Moore machine will be created, as shown in the top part of the graph.

Each transition in this Moore machine contains two parts: the top part is a function to evaluate on the input event, to decide whether the transition should fire. The bottom part contains instructions on how to modify the Context object of the processor. For example, the top left transition fires if the first element of the event is the string “Create Auction”. If so, the transition is taken, and the processor's context is updated with the associations Last Price ↦ 0, Days ↦ 0. The values of Min. Price and Max. Days are set with the content of the third and fourth element of the tuple, respectively. The remaining transitions take care of updating the minimum price and the number of days elapsed according to the events received.

Each state of the Moore machine is associated with an output value. For three of these states, the value to output is the empty event, meaning that no output should be produced.
For the remaining two states, the value to output is the current content of Days, as defined in the processor's context.

According to the semantics of the Slicer, each output event will consist of a set, formed by the last output of every instance of the Moore machine. Thus, this set will contain the number of elapsed days of all items whose auction is currently open (the Moore machine for the other items outputs no number). This set is then passed to a function processor, which computes the average of its values (sum divided by cardinality).

As a bonus, we show how to plot a graph of the evolution of this average over time. We fork the previous output; one branch of this fork goes into a Mutator, which turns the set into the value 1; this stream of 1s is then sent to a cumulative function processor Σ_+^0 that computes their sum. Both this and the second branch of the fork are fed into a function processor, which creates a named tuple where x is set to the value of the first input, and y is set to the value of the second input. The result is a tuple where x is the number of input events, and y is the average computed earlier. These tuples are then accumulated into a set by means of another cumulative function processor, this time performing the set addition operation. The end result is a stream of sets of (x,y) pairs, which could then be sent to a Gnuplot processor to be plotted with the help of Gnuplot. One can see again that processors of multiple palettes are involved, and events of various types are mixed: predicates (pink), sets of numbers (grey), numbers (turquoise), and named tuples (yellow).

§.§ Runtime Verification

The next example is taken from our previous work on the monitoring of video games <cit.>. The property we wish to check is that every time a Walker encounters a Blocker, it must turn around and start walking in the opposite direction. An encounter occurs whenever the (x,y) coordinates of the Walker come within 6 pixels horizontally, and 10 pixels vertically, of some Blocker. When this happens, the Walker may continue walking towards the Blocker for a few more events, but eventually turns around and starts walking away.

Figure <ref> shows the processor graph that verifies this. Here, blue pipes carry XML events, turquoise pipes carry events that are scalar numbers, and grey pipes contain Boolean events. The XML trace is first sent into a universal quantifier. The domain function, represented by the oval at the top, is the evaluation of an XPath expression on the current event; this fetches the value of the id attribute of all characters whose status is “Walker”. For every such value c, a new instance of the underlying processor will be created, and the context of this processor will be augmented with the association p_1 ↦ c. The underlying processor, in this case, is yet another quantifier. This one fetches the ID of every Blocker, and for each such value c', creates one instance of the underlying processor and adds to its context the association p_2 ↦ c'.

The underlying processor is the graph enclosed in a large box at the bottom. It creates two copies of the input trace. The first goes to the input of a function processor evaluating function f_1 (not shown) on each event. This function evaluates |x_1 - x_2| < 6 ∧ |y_1 - y_2| < 10, where x_i and y_i are the coordinates of the Pingu with ID p_i. Function f_1 is the function described in Figure <ref>. Its left branch fetches the x position of characters with ID p_1 and p_2, and checks whether their absolute difference is greater than 6.
§.§ Runtime Verification

The next example is taken from our previous work on the monitoring of video games <cit.>. The property we wish to check is that every time a Walker encounters a Blocker, it must turn around and start walking in the opposite direction. An encounter occurs whenever the (x,y) coordinates of the Walker come within 6 pixels horizontally, and 10 pixels vertically, of some Blocker. When this happens, the Walker may continue walking towards the Blocker for a few more events, but eventually turns around and starts walking away.

Figure <ref> shows the processor graph that verifies this. Here, blue pipes carry XML events, turquoise pipes carry events that are scalar numbers, and grey pipes contain Boolean events. The XML trace is first sent into a universal quantifier. The domain function, represented by the oval at the top, is the evaluation of an XPath expression on the current event; this fetches the value of the ID attribute of all characters whose status is Walker. For every such value c, a new instance of the underlying processor will be created, and the context of this processor will be augmented with the association p_1 ↦ c. The underlying processor, in this case, is yet another quantifier. This one fetches the ID of every Blocker, and for each such value c', creates one instance of the underlying processor and adds to its context the association p_2 ↦ c'.

The underlying processor is the graph enclosed in a large box at the bottom. It creates two copies of the input trace. The first goes to the input of a function processor evaluating function f_1 (not shown) on each event. This function evaluates |x_1 - x_2| < 6 ∧ |y_1 - y_2| < 10, where x_i and y_i are the coordinates of the Pingu with ID p_i. Function f_1 is the one described in Figure <ref>. Its left branch fetches the x position of characters with ID p_1 and p_2, and checks whether their absolute difference is less than 6. Its right branch (not shown) does a similar comparison with the y position of both characters. Note in this case how the XPath expression to evaluate refers to elements of the processor's context (p_1 and p_2). The resulting function returns a Boolean value, which is true whenever character p_1 collides with p_2.

The second copy of the input trace is duplicated one more time. The first is sent to a function processor evaluating f_2, which computes the horizontal distance between p_1 and p_2. The second is sent to a processor that removes the first three events it receives and lets the others through. The resulting trace is also sent into a function processor evaluating f_2. Finally, the two traces are sent as the input of a function processor evaluating the condition “greater than”. Therefore, this processor checks whether the horizontal distance between p_1 and p_2 in the current event is smaller than the same distance three events later. If this is true, then p_1 moved away from p_2 during that interval.

The last step is to evaluate the overall expression. The “collides” Boolean trace is combined with the “moves away” Boolean trace in a processor computing their implication. For a given event e, the output of this processor will be ⊤ when, if p_1 and p_2 collide in e, then p_1 will have moved away from p_2 three events later.
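Put together, the property evaluated for each pair (p_1, p_2) amounts to the following check. This Python sketch is only illustrative; the event representation and coordinate accessors are hypothetical stand-ins for the XPath lookups performed by the actual processor graph.

[language=python]
def collides(e, p1, p2):
    # f_1: within 6 pixels horizontally and 10 pixels vertically
    (x1, y1), (x2, y2) = e[p1], e[p2]
    return abs(x1 - x2) < 6 and abs(y1 - y2) < 10

def moves_away(trace, i, p1, p2):
    # f_2 compared at events i and i + 3: horizontal distance increases
    def hdist(e):
        return abs(e[p1][0] - e[p2][0])
    return hdist(trace[i]) < hdist(trace[i + 3])

def property_holds(trace, i, p1, p2):
    # "collides" implies "moves away three events later";
    # assumes event i + 3 exists in the trace
    return (not collides(trace[i], p1, p2)) or moves_away(trace, i, p1, p2)

# Each event maps a character ID to its (x, y) coordinates
trace = [{"w1": (0, 0), "b1": (4, 2)}, {}, {}, {"w1": (0, 0), "b1": (12, 2)}]
print(property_holds(trace, 0, "w1", "b1"))  # True: collision, then moving apart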
§ EXPERIMENTAL EVALUATION

As discussed earlier, BeepBeep was partly designed in reaction to the complexity and heaviness of existing event processing systems; to this end, versatility and simplicity were the primary goals informing all of our design decisions. Therefore, benchmarking BeepBeep against competing CEP tools somehow misses the point: performance, although desirable, was never sought at the price of readable queries or extensibility. Moreover, research papers reporting the use of BeepBeep in various real-world situations (web service testing <cit.>, electric load monitoring <cit.>, video game debugging <cit.>) have already shown it is “fast enough” for these use cases.

Nevertheless, we felt it fitting to conduct an experimental comparison, for two reasons. First, few works provide an experimental comparison of CEP tools on the same queries and input data. The most recent and thorough effort of that sort is the RIoTBench platform <cit.>, which measured the throughput and resource consumption of Apache Storm on the Microsoft Azure public cloud. However, that benchmark focuses on distributed event stream processing and includes a single system in its analysis. The older BiCEP system seemed to share a similar goal <cit.>; unfortunately, the link provided in BiCEP's paper points to an empty web site, so its implementation does not appear to be extant at the time of this writing. One of the papers describing Siddhi does compare it to Esper on three queries <cit.> (filter, sliding window, pattern), and another paper compares Esper's throughput with T-REX <cit.> on four. This section is by no means a comprehensive study, but it does provide some empirical substance for assessing the relative merits of each evaluated tool. To the best of our knowledge, the modest empirical review presented in this section is the first published account of a comparison of more than two CEP engines on the same queries.

Second, based on actual discussions and presentations we had with members of both industry and academia, BeepBeep's features have frequently been dismissed on the grounds that “surely, this can also be done with software X”. We shall go to some lengths to provide detailed evidence to the contrary, in some of the use cases presented earlier.

§.§ Experimental Setup

Our benchmark focuses on single-machine event stream processing systems similar to BeepBeep. The query engines included in our benchmark are:

* SASE (cf. Section <ref>). Our benchmark includes version 1.0 of the software. Its documentation states that some advanced features, such as processing streams with imprecise timestamps, are not included in this release. However, none of our use cases require these features.
* Siddhi (cf. Section <ref>). Our benchmark includes version 3.0.3 of the software.
* Esper (cf. Section <ref>). Our benchmark includes version 5.3.0 of the software.
* MySQL [<http://mysql.com>]. Our benchmark includes version 5.5 of the software. Although MySQL is not an ESP system, the “this could be done with a database” argument was raised often enough to warrant its inclusion in our study.

All these tools were used with their default settings. Although the latest version of Cayuga (dating from 2009) is publicly available, some libraries required to build it are unavailable as of 2017. Our attempts to obtain help from the authors have unfortunately remained unanswered, which forced us to exclude it from the benchmark. We also purposefully excluded cloud platforms such as Microsoft Azure, Apache Spark and VoltDB. Their use of multiple machines, and the heavy setup they require before being functional,[SQLstream alone requires a whopping 1 gigabyte of disk space for its basic installation. This should be contrasted with Esper, Siddhi and BeepBeep, which are stand-alone bundles of at most a few megabytes.] do not place them on an equal footing with the other systems we consider.

We also stress that our goal is not to claim that BeepBeep is the fastest CEP software around, but that reasonable performance can be expected given the ease of use it offers. Table <ref> shows the relative footprint of each tool, expressed as the cumulative size of the program and all its library dependencies.

The experiments were implemented using the LabPal testing framework[<https://liflab.github.io/labpal>]. The principle behind LabPal is that all the necessary code, libraries and input data should be bundled within a single self-contained executable file, such that anyone can download and independently reproduce the experiments. The detailed list of all the queries and input streams included in our benchmark cannot be shown in this paper for lack of space; however, all input files are available from our downloadable lab instance[<https://datahub.io/dataset/beepbeep-3-benchmark>]. All the experiments were run on the same machine, inside a Java 8 VM with a fixed amount of memory, and all were given the same timeout.

§.§ Relative Expressiveness

Our original intent was to take each of the twelve queries described in Section <ref>, and to compare the behaviour of each tool on these queries. Except for very simple queries, attempting to write the same computation in languages with different and sometimes incompatible syntax and semantics is a non-trivial and generally imperfect exercise <cit.>. Our plan was cut short by the limitations imposed by other tools, either on the allowed event types or on the query language they offer. Our experiments could humorously be summarized as attempts at fitting a square peg into a round hole.

Table <ref> gives a summary of the support for each query by the tools included in our study.
The checkmark symbol indicates that the system can compute the exact result of the query. A second symbol indicates that limitations in the tool would force us to evaluate a simplified version of the query. This is the case, for example, when XML events have to be flattened into fixed-size tuples, or when a query language imposes that the distance between two events in a pattern be bounded by a finite value. Finally, the cross symbol indicates that there is no reasonable way to handle the query with the tool. We had to come to this verdict in cases where the problem would only be solvable in extremely convoluted ways: for example, computing the two-dimensional heat map of Query <ref> (whose size is unknown in advance) using only tuples with a fixed schema.

In the following, we give further details on the way each query was handled (or not) by each tool.

§.§.§ Stock Ticker

This use case is closest to “traditional” CEP problems and presents the fewest issues in terms of tool support. However, SASE cannot handle some of the ticker queries, because its implementation lacks support for aggregate functions.

§.§.§ Healthcare Records

This scenario presents more problems. Since some of the tools impose that events be tuples, HL7 events must be replaced by tuples with dummy attribute names a_1, …, a_n. In each event, attribute a_i takes as its value the i-th field of the corresponding HL7 message. However, this brings an additional problem, as the i-th field of each message may not be of the same type. Moreover, even with such manual doctoring of the inputs, the expression of these properties is still problematic. Aggregation functions in Siddhi and Esper are computed over fixed windows. In Siddhi, one can easily compute the standard deviation of a field over multiple events, as well as its mean. However, what is expected in Query <ref> is the ratio of these two quantities; alas, an expression computing this ratio directly (and variations thereof) is rejected as a syntax error. A workaround would be to generate one stream computing the standard deviation and another computing the mean; however, matching events from these two streams cannot be merged, apart from computing their join; this, in turn, requires a fake counter to be added to each event to be used as the join attribute.
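For reference, the quantity requested by this query is easy to state outside of a query language. The following Python sketch (our own names, not SiddhiQL or EQL syntax) computes the ratio of the standard deviation to the mean over a window of values:

[language=python]
import math

def stddev_over_mean(window):
    # Ratio of the (population) standard deviation of a window of
    # numbers to its mean: the single expression that the SiddhiQL
    # and EQL aggregate syntax rejects
    n = float(len(window))
    mean = sum(window) / n
    variance = sum((v - mean) ** 2 for v in window) / n
    return math.sqrt(variance) / mean

print(stddev_over_mean([98.0, 99.5, 101.2, 100.3]))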
§.§.§ Online Auction System

This use case presents the same problem as the previous one, since events in the trace do not have the same attributes and values. Events should therefore be simplified so that each has a name attribute, and three other “dummy” attributes whose meaning differs according to name. Item names have also been turned into numbers so that all these fields can be of the Integer type. After these simplifications, Query <ref> can be accommodated in Siddhi, Esper and SASE using their pattern syntax. However, Query <ref> correlates a value inside an event (the Duration of an auction) to the number of endOfDay events that may be seen before bids on an item become forbidden. Unfortunately, we found no sensible way of expressing this fact using the query language of any other tool. We did manage, however, to write an SQL query achieving this result.

§.§.§ Electric Load Monitoring

All systems but BeepBeep are discarded, as they lack the peak detection algorithms necessary to perform the first level of abstraction of the original input trace. It goes without saying that tuple query languages are very ill-suited for this task; the best one can do using SiddhiQL or EQL queries would be to define a pattern of n successive events, and detect large differences between the first and the n-th, which is a very imprecise characterization of a peak. Note also that it does not suffice to watch the min/max difference over a sliding window, as the same peak may be detected more than once (or not at all in the case of a hopping window query, if the peak occurs across the boundary of two successive windows).

However, even assuming a correct peak input stream, CEP query languages still have trouble expressing the Moore machine required to produce the final output trace from the trace of peaks. The best one can do is write two queries, one producing a “Toaster On” event when the peak's magnitude is greater than 800, and a “Toaster Off” event when it is smaller than -800, and again multiplex these two streams to produce the desired output. At this point, the original problem has gone through a handful of simplifications and approximations, and some features (such as multiplexing) are still missing to actually run it using a CEP engine. We hope the reader agrees with our conclusion that the load monitoring problem cannot be reasonably solved using other tools.

§.§.§ Runtime Verification

At the risk of being tedious, we also show the issues faced when attempting to write the video game queries in SiddhiQL or EQL. First, nested XML events must be converted into a sequence of tuples, one for each Pingu inside the event. The same artificial timestamp is appended to these events so that they can remain grouped. Even though Esper supports nested events, its query language lacks a “for all” construct, so the tuple conversion is also necessary. Then, matching a Blocker and a Walker within the same event becomes problematic, as the unrolling of an XML event may sometimes put the Blocker before the Walker, or after; the pattern query has to consider the two possible orderings.

§.§.§ Synthetic Traces

Given that many of the use cases on which BeepBeep was showcased are handled with difficulty (if at all) by other tools, we reversed our experimental evaluation, and instead took BeepBeep to “their” field. We focused our experimental evaluation on simple synthetic traces of tuples made of random strings and numerical values; the traces considered contain a fixed number of events. We devised a number of “generic” queries on these traces, intended to probe the basic query types described in Section <ref>. The queries we included are shown in Table <ref>.

Even then, the last query is problematic, as no query language (except BeepBeep's) provides an easy way to count events in a sliding window that satisfy a condition. One can select events that satisfy a condition and then create a window on the resulting stream, but this does not yield the desired result (no event is output if the condition is not satisfied). In the present case, since our condition is simply x > y, a workaround is to evaluate the expression max(0, (x-y)/|x-y|), which returns 1 if x > y and 0 otherwise, and then to sum these values over a window. However, this trick hardly generalizes to more complex conditions.
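As a point of comparison, the computation that this workaround approximates reads naturally in ordinary code. The following Python sketch (ours, not BeepBeep's API) counts, in each sliding window of width n, the events whose field x is greater than their field y:

[language=python]
from collections import deque

def count_in_windows(events, n):
    # For every complete sliding window of width n, count the
    # events in it that satisfy the condition x > y
    window = deque(maxlen=n)
    for x, y in events:
        window.append(x > y)
        if len(window) == n:
            yield sum(window)  # True counts as 1, False as 0

print(list(count_in_windows([(5, 1), (2, 3), (7, 0), (1, 1)], 3)))  # [2, 1]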
§.§ Measured Throughput

Each tool was run on its own version of each query, on randomly generated traces as described above. Since MySQL is not an event processing engine, it cannot operate in a streaming fashion. We converted the input trace into one large INSERT statement, and then ran an SQL query on that table; we interpret as an output “stream” the table that results from that query (with each row of the table being assimilated to an event). Note that in the results given below, running time includes the execution of this INSERT statement. This is to establish a fair comparison, as all other systems start their processing from a stream that is completely unknown in advance.

We measured the elapsed time taken by each system to process the queries, and deduced from this time the throughput, measured in Hz (number of input events consumed per second). The results are plotted in the histogram of Figure <ref>.

Two of the contenders had to be disqualified from the outset. The first is SASE, which, for traces of that size, crashes by running out of heap space. The second is MySQL, which exceeded the timeout on all queries. Figure <ref> gives part of the explanation: it shows one of the event stream queries (S6) written as an SQL expression. To simplify the query, a view on the original trace (i.e. table) is first created; otherwise, the corresponding expression would have to be repeated three times in the following SELECT statement. Note that this view already involves a self-join. The query itself is far from a one-liner: it must create an intricate condition on timestamps for three copies of the original trace in order to correctly express the sequential pattern to be observed. While the query does compute the correct result, the absence of support for even simple sequential patterns makes it so complex that its evaluation is not practically feasible. A similar argument had already been made in <cit.>. For this reason, our plots do not include MySQL in the experimental results.

[language=sql]
DROP VIEW IF EXISTS ThePrices;
CREATE VIEW ThePrices AS
  SELECT T1.closingPrice AS p1, T2.closingPrice AS p2,
         T1.timestamp AS timestamp
  FROM stocks AS T1, stocks AS T2
  WHERE T1.timestamp = T2.timestamp;

SELECT COUNT(*) FROM
  (SELECT timestamp FROM ThePrices WHERE p2 < 2) AS T0,
  (SELECT T1.timestamp - 1 AS timestamp
   FROM ThePrices AS T1,
     (SELECT MAX(TA.timestamp) AS n1, TB.timestamp AS n2
      FROM (SELECT timestamp FROM ThePrices WHERE p1 <= p2) AS TA
      JOIN (SELECT timestamp FROM ThePrices WHERE p2 < 2) AS TB
      WHERE TA.timestamp < TB.timestamp
      GROUP BY TB.timestamp) AS T2
   WHERE T1.timestamp > T2.n1 AND T1.timestamp <= T2.n2) AS T3
WHERE T0.timestamp = T3.timestamp

A second observation is that, for most of the queries, Siddhi and Esper provide comparable throughput, with BeepBeep having on average half their throughput on our sample of queries. We are actually pleasantly surprised by these results, as we expected a much larger difference between commercial-grade CEP systems and our proposed implementation. For example, in BeepBeep, computing the average of a sequence of values is not done by a built-in primitive function, as is the case with Siddhi and Esper; rather, Figure <ref> shows that it is a user-defined combination of basic processors, involving a fork, two cumulative functions and a division processor.
This clearly impacts performance but, as discussed above, improves genericity: the statistical moment of order 3 can be computed by simply changing the value of n (which has no impact on running time), whereas other tools provide no efficient built-in primitive for it. Similarly, the computation of a window in BeepBeep is done in a very naive way: one instance of the processor given as an argument is created for each window, and the contents of the window are “replayed” to that processor. This is clearly sub-optimal when the function to compute over the window is simple and known in advance. For example, an average can easily be updated in constant time by subtracting the leftmost value leaving the window and adding the rightmost value entering it. However, as we have already discussed, BeepBeep's windows are completely independent from the processor to evaluate, which can be much more complex than the built-in, stateless arithmetical functions provided by other systems.
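The constant-time update just described could look like the following Python sketch; it illustrates the optimization that BeepBeep's generic windows forgo:

[language=python]
from collections import deque

def running_averages(values, n):
    # Incremental sliding-window average: add the value entering the
    # window and subtract the value leaving it, in constant time per event
    window, total = deque(), 0.0
    for v in values:
        window.append(v)
        total += v
        if len(window) > n:
            total -= window.popleft()
        if len(window) == n:
            yield total / n

print(list(running_averages([1, 2, 3, 4], 2)))  # [1.5, 2.5, 3.5]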
As we said earlier, these empirical results are not intended to be a thorough benchmark of multiple CEP systems. The observations made in this section, however, are sufficient to support two claims:

* Some use cases exposed in Section <ref> are difficult (if not impossible) to model using the query language of some commercial-grade CEP tools or RDBMS.
* For typical window and pattern queries supported by CEP tools, BeepBeep has a lower, but still reasonable, throughput.

§ CONCLUSION

In this paper, we have presented a short introduction to the field of Complex Event Processing, and highlighted the differences between classical CEP problems and properties typically considered in Runtime Verification. In particular, we have seen how CEP problems involve intricate computations and transformations over data fields inside events, while runtime monitors are generally more powerful for evaluating properties that relate to the sequencing of events. Moreover, we have presented various use cases taken from existing literature, in which the traditional conception of CEP is extended by new types of events and queries.

A review of existing solutions has highlighted many of their useful features, but also numerous shortcomings: complex usage, rigid event structure, limited expressiveness, lack of support for user-defined extensions. These observations motivated the development of BeepBeep, an event stream processing engine that attempts to reconcile CEP and RV by providing a general environment that can accommodate queries and specifications from both. In BeepBeep's generic architecture, basic units of computation called processors can be freely composed to evaluate a wide range of expressions. Given an appropriate toolbox of processors, properties involving extended finite-state machines, temporal logic, aggregation and various other concepts can be evaluated. Moreover, through the modular mechanism of palettes, end users can easily create their own processors, thereby extending the expressiveness of the tool. BeepBeep also proposes its own declarative input language, which provides an alternative to creating processor chains through “glue” code.

Despite our efforts to design a simple and extensible query language, our experiments revealed that very often, a simple manipulation of processors through a GUI would be a much easier way to write processing chains than large blocks of SQL-like text, irrespective of the actual language used. Consequently, work is planned on developing a simple, Aurora-like box interface for creating and modifying queries.

BeepBeep's goal is to occupy a currently vacant niche among event stream processing engines: it lies somewhere in between low-level command line scripts for small trace-crunching tasks, on one end, and heavy distributed event processing platforms on the other. The variety of proposed palettes, combined with a simple computational model, makes it suitable for the definition of clean and readable processing chains at an appropriate level of abstraction. While top-notch performance was not the first design goal, an experimental evaluation has shown that reasonable throughput can be achieved for a variety of queries. Rather than try to compete with commercial-grade platforms like Storm or Kinesis, BeepBeep could best be viewed as a toolbox for creating expressive computations within these environments. As a matter of fact, the development of (straightforward) adapters from BeepBeep to these environments is currently under way.

Several research problems around BeepBeep's concepts of processors and event streams are also left unexplored. For example, BeepBeep currently does not support lazy evaluation; if the output of an n-ary processor can be determined by looking at fewer than n inputs, all inputs must still be computed and consumed. Implementing lazy evaluation in a stream processing environment could provide some performance benefits, but is at the moment considered a non-trivial task. In addition, since each processor represents an independent unit of computation communicating through message passing, chains of processors should be easily amenable to parallelization; whether this would bring tangible improvements in terms of throughput is currently unknown. Other straightforward technical improvements, such as the use of the Disruptor data structure in place of queues to improve performance <cit.>, will also be considered.

In time, it is hoped that BeepBeep will be adopted as a modular framework under which multiple event processing techniques can be developed and coexist, and that their potential for composition will make the sum greater than its parts.

§ REFERENCES

Wang2013 F. Wang, C. Zhou, Y. Nie, Event Processing in Sensor Streams, Springer US, Boston, MA, 2013, pp. 77–102. doi:10.1007/978-1-4614-6309-2_4.
Jia2009 X. Jia, Y. Wenming, W. Dong, Complex event processing model for distributed RFID network, in: Proceedings of the 2nd International Conference on Interaction Sciences (ICIS '09), ACM, 2009, pp. 1219–1222. doi:10.1145/1655925.1656147.
DBLP:conf/aaai/HalleGB16 S. Hallé, S. Gaboury, B. Bouchard, Activity recognition through complex event processing: First findings, in: Artificial Intelligence Applied to Assistive Technologies and Smart Environments, 2016 AAAI Workshop, Vol. WS-16-01, AAAI Press, 2016.
process-mining W. M. P. van der Aalst, Process Mining: Data Science in Action, Springer, 2016.
DBLP:conf/icst/CalvarTH12 J. Calvar, R. Tremblay-Lessard, S. Hallé, A runtime monitoring framework for event streams with non-primitive arguments, in: ICST, IEEE, 2012, pp. 499–508.
DBLP:journals/jlp/LeuckerS09 M. Leucker, C. Schallhart, A brief account of runtime verification, J. Log. Algebr. Program. 78 (5) (2009) 293–303.
DBLP:conf/icse/JinMLR12 D. Jin, P. O. Meredith, C. Lee, G. Rosu, JavaMOP: Efficient parametric runtime monitoring framework, in: ICSE, IEEE, 2012, pp. 1427–1430.
Adi2006 A. Adi, D. Botzer, G. Nechushtai, G. Sharon, Complex event processing for financial services, in: IEEE Services Computing Workshops (SCW '06), IEEE Computer Society, 2006, pp. 7–12. doi:10.1109/SCW.2006.7.
DBLP:conf/edoc/BerryM13 A. Berry, Z. Milosevic, Real-time analytics for legacy data streams in health: Monitoring health data quality, in: EDOC, IEEE, 2013, pp. 91–100.
DBLP:conf/aina/LaFC16 V. H. La, R. A. Fuentes-Samaniego, A. R. Cavalli, Network monitoring using MMT: an application based on the user-agent field in HTTP headers, in: AINA 2016, IEEE Computer Society, 2016, pp. 147–154. doi:10.1109/AINA.2016.41.
DBLP:books/daglib/0017658 D. C. Luckham, The Power of Events: An Introduction to Complex Event Processing in Distributed Enterprise Systems, ACM, 2005.
DBLP:conf/cidr/ChandrasekaranDFHHKMRRS03 S. Chandrasekaran, O. Cooper, A. Deshpande, M. J. Franklin, J. M. Hellerstein, W. Hong, S. Krishnamurthy, S. Madden, V. Raman, F. Reiss, M. A. Shah, TelegraphCQ: Continuous dataflow processing for an uncertain world, in: CIDR, 2003.
linq R. Krishnan, J. Goldstein, A. Raizman, A hitchhiker's guide to StreamInsight queries, version 2.1, 2012.
DBLP:conf/vldb/CarneyCCCLSSTZ02 D. Carney, U. Çetintemel, M. Cherniack, C. Convey, S. Lee, G. Seidman, M. Stonebraker, N. Tatbul, S. B. Zdonik, Monitoring streams: A new class of data management applications, in: VLDB 2002, Morgan Kaufmann, 2002, pp. 215–226.
hl7 J. Rodrigues, Health Information Systems: Concepts, Methodologies, Tools, and Applications, Volume 1, IGI Global, 2010.
DBLP:journals/pvldb/WangREW10 D. Wang, E. A. Rundensteiner, R. T. Ellison, H. Wang, Active complex event processing: Applications in real-time health care, PVLDB 3 (2) (2010) 1545–1548.
DBLP:conf/fm/BarringerFHRR12 H. Barringer, Y. Falcone, K. Havelund, G. Reger, D. E. Rydeheard, Quantified event automata: Towards expressive and efficient runtime monitors, in: FM 2012, Vol. 7436 of Lecture Notes in Computer Science, Springer, 2012, pp. 68–84.
ZE11 M. Zeifman, K. Roth, Nonintrusive appliance load monitoring: Review and outlook, IEEE Trans. Consumer Electronics 57 (1) (2011) 76–84.
nous-acm-cie S. Varvaressos, K. Lavoie, S. Gaboury, S. Hallé, Automated bug finding in video games: A case study for runtime monitoring, ACM Computers in Entertainment 15 (1) (2017) 1. doi:10.1145/2700529.
DBLP:journals/sigmod/MarcusBBKMM11 A. Marcus, M. S. Bernstein, O. Badar, D. R. Karger, S. Madden, R. C. Miller, Processing and visualizing the data in tweets, SIGMOD Record 40 (4) (2011) 21–27.
DBLP:conf/debs/Kumaran13 V. Kumaran, Event stream database based architecture to detect network intrusion (industry article), in: DEBS '13, ACM, 2013, pp. 241–248.
DBLP:conf/nfm/PielBLBK16 A. Piel, J. Bourrely, S. Lala, S. Bertrand, R. Kervarc, Temporal logic framework for performance analysis of architectures of systems, in: NFM 2016, Vol. 9690 of Lecture Notes in Computer Science, Springer, 2016, pp. 3–18.
Balis:2011:RGM:2009738.2009875 B. Balis, B. Kowalewski, M. Bubak, Real-time grid monitoring based on complex event processing, Future Gener. Comput. Syst. 27 (8) (2011) 1103–1112.
DBLP:journals/fmsd/KimVKLS04 M. Kim, M. Viswanathan, S. Kannan, I. Lee, O. Sokolsky, Java-MaC: A run-time assurance approach for Java programs, Formal Methods in System Design 24 (2) (2004) 129–155.
DBLP:conf/ftscs/HavelundJ14 K. Havelund, R. Joshi, Experience with rule-based analysis of spacecraft logs, in: FTSCS 2014, Vol. 476 of Communications in Computer and Information Science, Springer, 2014, pp. 1–16.
DBLP:journals/tsc/HalleV12 S. Hallé, R. Villemaire, Runtime enforcement of web service message contracts with data, IEEE T. Services Computing 5 (2) (2012) 192–206.
DBLP:journals/fmsd/BasinKMZ15 D. A. Basin, F. Klaedtke, S. Marinovic, E. Zalinescu, Monitoring of temporal first-order properties with aggregations, Formal Methods in System Design 46 (3) (2015) 262–285.
DBLP:conf/isola/KhouryHW16 R. Khoury, S. Hallé, O. Waldmann, Execution trace analysis using LTL-FO+, in: ISoLA 2016, Part II, Vol. 9953 of Lecture Notes in Computer Science, 2016, pp. 356–362.
DBLP:journals/csur/CugolaM12 G. Cugola, A. Margara, Processing flows of information: From data stream to complex event processing, ACM Comput. Surv. 44 (3) (2012) 15:1–15:62.
DBLP:conf/cidr/AbadiABCCHLMRRTXZ05 D. J. Abadi, Y. Ahmad, M. Balazinska, U. Çetintemel, M. Cherniack, J.-H. Hwang, W. Lindner, A. Maskey, A. Rasin, E. Ryvkina, N. Tatbul, Y. Xing, S. B. Zdonik, The design of the Borealis stream processing engine, in: CIDR, 2005, pp. 277–289.
DBLP:conf/sigmod/WuDR06 E. Wu, Y. Diao, S. Rizvi, High-performance complex event processing over streams, in: SIGMOD Conference, ACM, 2006, pp. 407–418.
DBLP:conf/debs/BrennaGHJ09 L. Brenna, J. Gehrke, M. Hong, D. Johansen, Distributed event stream processing with non-deterministic finite automata, in: DEBS, ACM, 2009.
DBLP:conf/cidr/DemersGPRSW07 A. J. Demers, J. Gehrke, B. Panda, M. Riedewald, V. Sharma, W. M. White, Cayuga: A general purpose event monitoring system, in: CIDR 2007, 2007, pp. 412–422.
DBLP:conf/sc/SuhothayanGNCPN11 S. Suhothayan, K. Gajasinghe, I. L. Narangoda, S. Chaturanga, S. Perera, V. Nanayakkara, Siddhi: a second look at complex event processing architectures, in: GCE 2011, ACM, 2011, pp. 43–50.
samza Apache Foundation, Apache Samza, http://samza.apache.org, retrieved February 14th, 2017.
DBLP:conf/icdm/NeumeyerRNK10 L. Neumeyer, B. Robbins, A. Nair, A. Kesari, S4: Distributed stream computing platform, in: ICDMW 2010, IEEE Computer Society, 2010, pp. 170–177.
DBLP:journals/cacm/ZahariaXWDADMRV16 M. Zaharia, R. S. Xin, P. Wendell, T. Das, M. Armbrust, A. Dave, X. Meng, J. Rosen, S. Venkataraman, M. J. Franklin, A. Ghodsi, J. Gonzalez, S. Shenker, I. Stoica, Apache Spark: a unified engine for big data processing, Commun. ACM 59 (11) (2016) 56–65.
storm Apache Foundation, Apache Storm, http://storm.apache.org, retrieved February 14th, 2017.
trident-api Apache Foundation, Trident API overview, https://storm.apache.org/releases/current/Trident-API-Overview.html, retrieved February 14th, 2017.
DBLP:conf/debs/KochKR10 G. G. Koch, B. Koldehofe, K. Rothermel, Cordies: expressive event correlation in distributed systems, in: DEBS 2010, ACM, 2010, pp. 26–37.
DBLP:journals/vldb/AlexandrovBEFHHKLLMNPRSSHTW14 A. Alexandrov, R. Bergmann, S. Ewen, J. Freytag, F. Hueske, A. Heise, O. Kao, M. Leich, U. Leser, V. Markl, F. Naumann, M. Peters, A. Rheinländer, M. J. Sax, S. Schelter, M. Höger, K. Tzoumas, D. Warneke, The Stratosphere platform for big data analytics, VLDB J. 23 (6) (2014) 939–964.
logcep J. Cao, X. Wei, Y. Liu, D. Mao, Q. Cai, LogCEP: complex event processing based on pushdown automaton, Int. Journal of Hybrid Information Technology 7 (6) (2014) 71–82. doi:10.14257/ijhit.2014.7.6.06.
DBLP:conf/edoc/DijkmanPH16 R. M. Dijkman, S. P. Peters, A. M. ter Hofstede, A toolkit for streaming process data analysis, in: EDOC, IEEE, 2016, pp. 304–312.
DBLP:conf/cidr/MotwaniWABBDMORV03 R. Motwani, J. Widom, A. Arasu, B. Babcock, S. Babu, M. Datar, G. S. Manku, C. Olston, J. Rosenstein, R. Varma, Query processing, approximation, and resource management in a data stream management system, in: CIDR 2003, 2003.
DBLP:conf/sigmod/SeshadriLR94 P. Seshadri, M. Livny, R. Ramakrishnan, Sequence query processing, in: SIGMOD 1994, ACM Press, 1994, pp. 430–441.
Arasu:ilprints641 A. Arasu, B. Babcock, S. Babu, J. Cieslewicz, M. Datar, K. Ito, R. Motwani, U. Srivastava, J. Widom, STREAM: The Stanford data stream management system, Technical Report 2004-20, Stanford InfoLab, 2004.
streambase StreamBase SQL, http://streambase.com, 2014.
DBLP:conf/rv/Halle16 S. Hallé, When RV meets CEP, in: Falcone and Sánchez <cit.>, pp. 68–91. doi:10.1007/978-3-319-46982-9_6.
DBLP:conf/time/DAngeloSSRFSMM05 B. D'Angelo, S. Sankaranarayanan, C. Sánchez, W. Robinson, B. Finkbeiner, H. B. Sipma, S. Mehrotra, Z. Manna, LOLA: runtime monitoring of synchronous systems, in: TIME 2005, IEEE Computer Society, 2005, pp. 166–174.
DBLP:journals/tse/HalbwachsLR92 N. Halbwachs, F. Lagnier, C. Ratel, Programming and verifying real-time systems by means of the synchronous data-flow language LUSTRE, IEEE Trans. Software Eng. 18 (9) (1992) 785–793.
chen-jin-meredith-rosu-2009-icicis F. Chen, D. Jin, P. Meredith, G. Roşu, Monitoring oriented programming: a project overview, in: ICICIS '09, ACM, 2009, pp. 72–77.
DBLP:journals/csur/Kiczales96 G. Kiczales, Aspect-oriented programming, ACM Comput. Surv. 28 (4es) (1996) 154.
CPS09larva C. Colombo, G. J. Pace, G. Schneider, LARVA: safer monitoring of real-time Java programs (tool paper), in: SEFM 2009, IEEE Computer Society, 2009, pp. 33–37.
DBLP:conf/hybrid/BengtssonLLPY95 J. Bengtsson, K. G. Larsen, F. Larsson, P. Pettersson, W. Yi, UPPAAL: a tool suite for automatic verification of real-time systems, in: Hybrid Systems III, Vol. 1066 of Lecture Notes in Computer Science, Springer, 1995, pp. 232–243.
DBLP:conf/rv/ColomboGP10 C. Colombo, A. Gauci, G. J. Pace, Larvastat: Monitoring of statistical properties, in: RV 2010, Vol. 6418 of Lecture Notes in Computer Science, Springer, 2010, pp. 480–484.
DBLP:conf/tacas/RegerCR15 G. Reger, H. C. Cruz, D. E. Rydeheard, MarQ: Monitoring at runtime with QEA, in: TACAS 2015, Vol. 9035 of Lecture Notes in Computer Science, Springer, 2015, pp. 596–610.
DBLP:conf/tacas/ChenR09 F. Chen, G. Rosu, Parametric trace slicing and monitoring, in: TACAS 2009, Vol. 5505 of Lecture Notes in Computer Science, Springer, 2009, pp. 246–261.
DBLP:conf/rv/RegerHF16 G. Reger, S. Hallé, Y. Falcone, Third international competition on runtime verification: CRV 2016, in: Falcone and Sánchez <cit.>, pp. 21–37.
DBLP:conf/rv/Havelund13 K. Havelund, A Scala DSL for Rete-based runtime verification, in: RV, Vol. 8174 of Lecture Notes in Computer Science, Springer, 2013, pp. 322–327.
DBLP:journals/ai/Forgy82 C. Forgy, Rete: A fast algorithm for the many patterns/many objects match problem, Artif. Intell. 19 (1) (1982) 17–37.
DBLP:journals/jss/CugolaM12 G. Cugola, A. Margara, Complex event processing with T-REX, Journal of Systems and Software 85 (8) (2012) 1709–1728.
DBLP:journals/entcs/StolzB06 V. Stolz, E. Bodden, Temporal assertions using AspectJ, Electr. Notes Theor. Comput. Sci. 144 (4) (2006) 109–124.
DBLP:conf/tacas/DeckerHS0T16 N. Decker, J. Harder, T. Scheffel, M. Schmitz, D. Thoma, Runtime monitoring with union-find structures, in: TACAS 2016, Vol. 9636 of Lecture Notes in Computer Science, Springer, 2016, pp. 868–884.
DBLP:conf/sp/ErlingssonS00 Ú. Erlingsson, F. B. Schneider, IRM enforcement of Java stack inspection, in: IEEE Symposium on Security and Privacy, 2000, pp. 246–255.
DBLP:conf/oopsla/MartinLL05 M. C. Martin, V. B. Livshits, M. S. Lam, Finding application errors and security flaws using PQL: a program query language, in: OOPSLA, 2005, pp. 365–383.
DBLP:conf/oopsla/GoldsmithOA05 S. Goldsmith, R. O'Callahan, A. Aiken, Relational queries over program traces, in: OOPSLA, 2005, pp. 385–402.
DBLP:journals/logcom/BarringerRH10 H. Barringer, D. E. Rydeheard, K. Havelund, Rule systems for run-time monitoring: from Eagle to RuleR, J. Log. Comput. 20 (3) (2010) 675–706.
DBLP:conf/spin/GaravelM04 H. Garavel, R. Mateescu, SEQ.OPEN: A tool for efficient trace-based verification, in: SPIN, Vol. 2989 of Lecture Notes in Computer Science, Springer, 2004, pp. 151–157.
DBLP:conf/pldi/HamlenJ08 K. W. Hamlen, M. Jones, Aspect-oriented in-lined reference monitors, in: PLAS, ACM, 2008, pp. 11–20.
DBLP:journals/logcom/BoddenHLLN10 E. Bodden, L. J. Hendren, P. Lam, O. Lhoták, N. A. Naeem, Collaborative runtime verification with Tracematches, J. Log. Comput. 20 (3) (2010) 707–723.
DBLP:journals/sigmod/DarwenD95 H. Darwen, C. J. Date, The third manifesto, SIGMOD Record 24 (1) (1995) 39–49.
DBLP:conf/sigmod/BrennaDGHOPRTW07 L. Brenna, A. J. Demers, J. Gehrke, M. Hong, J. Ossher, B. Panda, M. Riedewald, M. Thatte, W. M. White, Cayuga: a high-performance event processing engine, in: SIGMOD 2007, ACM, 2007, pp. 1100–1102.
DBLP:journals/fmsd/FinkbeinerSS05 B. Finkbeiner, S. Sankaranarayanan, H. Sipma, Collecting statistics over runtime executions, Formal Methods in System Design 27 (3) (2005) 253–274.
hw-s4 Apache Foundation, S4: Walkthrough, https://incubator.apache.org/s4/doc/0.6.0/walkthrough/, retrieved February 14th, 2017.
zuho T. Zuho, 'Big data' is no longer enough: It's now all about 'fast data', https://www.entrepreneur.com/article/273561, retrieved February 11th, 2017.
Quine45 W. V. O. Quine, On the logic of quantification, The Journal of Symbolic Logic 10 (1) (1945) 1–12.
sax Simple API for XML, http://docs.oracle.com/javaee/1.4/tutorial/doc/JAXPSAX.html, retrieved December 13th, 2013.
stax C. Fry, D. Sagar, Streaming API for XML, JSR 173 specification, 2003.
DBLP:conf/ximep/FegarasDW06 L. Fegaras, R. K. Dash, Y. Wang, A fully pipelined XQuery processor, in: XIME-P, 2006.
DBLP:conf/sac/HalleV09 S. Hallé, R. Villemaire, Runtime monitoring of web service choreographies using streaming XML, in: SAC, ACM, 2009, pp. 2118–2125.
DBLP:journals/fmsd/FalconeMFR11 Y. Falcone, L. Mounier, J. Fernandez, J. Richier, Runtime enforcement monitors: composition, synthesis, and enforcement abilities, Formal Methods in System Design 38 (3) (2011) 223–262.
DBLP:conf/dac/ChengK93 K. Cheng, A. S. Krishnakumar, Automatic functional test generation using the extended finite state machine model, in: DAC, 1993, pp. 86–91.
riot A. Shukla, S. Chaturvedi, Y. Simmhan, RIoTBench: A real-time IoT benchmark for distributed stream processing platforms, Tech. Rep. arXiv:1606.07621, 2016.
DBLP:conf/dagstuhl/Bizarro07 P. Bizarro, BiCEP: benchmarking complex event processing systems, in: Event Processing, Vol. 07191 of Dagstuhl Seminar Proceedings, 2007.
DBLP:conf/rv/MradAHB12 A. Mrad, S. Ahmed, S. Hallé, É. Beaudet, Babeltrace: A collection of transducers for trace validation, in: RV 2012, Vol. 7687 of Lecture Notes in Computer Science, Springer, 2012, pp. 126–130.
disruptor M. Thompson, D. Farley, M. Barker, P. Gee, A. Stewart, Disruptor: High performance alternative to bounded queues for exchanging data between concurrent threads, Tech. rep., May 2011.
DBLP:conf/rv/2016 Y. Falcone, C. Sánchez (Eds.), Runtime Verification: 16th International Conference, RV 2016, Vol. 10012 of Lecture Notes in Computer Science, Springer, 2016.
eXpose: A Character-Level Convolutional Neural Network with Embeddings For Detecting Malicious URLs, File Paths and Registry Keys

Joshua Saxe (Invincea Inc., josh.saxe@invincea.com) and Konstantin Berlin (Invincea Inc., kberlin@invincea.com)

For years security machine learning research has promised to obviate the need for signature-based detection by automatically learning to detect indicators of attack. Unfortunately, this vision hasn't come to fruition: in fact, developing and maintaining today's security machine learning systems can require engineering resources comparable to those of signature-based detection systems, due in part to the need to develop and continuously tune the “features” these machine learning systems look at as attacks evolve. Deep learning, a subfield of machine learning, promises to change this by operating on raw input signals and automating the process of feature design and extraction. In this paper we propose the eXpose neural network, which uses a deep learning approach we have developed to take generic, raw short character strings as input (a common case for security inputs, which include artifacts like potentially malicious URLs, file paths, named pipes, named mutexes, and registry keys), and learns to simultaneously extract features and classify using character-level embeddings and a convolutional neural network. In addition to completely automating the feature design and extraction process, eXpose outperforms manual feature extraction based baselines on all of the intrusion detection problems we tested it on, yielding a 5%-10% detection rate gain at a 0.1% false positive rate compared to these baselines.

§ INTRODUCTION

While for over a decade researchers have proposed systems that apply machine learning methods to computer security detection problems, this research has gained only limited prevalence in real-world security systems, in part, we believe, because machine learning systems require significant expert effort to develop and maintain. For example, development of machine learning based security detection systems requires an in-depth exploration of the feature representation of a given security artifact type (e.g. Windows PE binaries, URLs, or behavioral traces), and an exploration of which machine learning detection approaches yield the best accuracy given those representations. As cyber-attacks evolve, machine learning feature representations must be updated to keep pace with the latest cyber threats. Many computer security product companies therefore often calculate that signature-based systems are a less risky investment. While many technical problems stand in the way of effective deployment of machine learning systems (e.g.
the collection of large volumes of labeled training data, the problem of evaluating these systems when attacker behavior is constantly changing, and the problem of deploying complex models on low-resource endpoints), one way to reduce the cost of creating and maintaining machine learning approaches is to move beyond manual feature engineering, given that feature engineering is often recognized as the most time-consuming aspect of machine learning system development. Deep learning, a subfield of machine learning that utilizes neural networks operating directly on raw inputs, promises to allow us to do this.

In line with this vision, we present eXpose, a deep learning approach to a number of security detection problems that works directly on raw inputs to detect maliciousness. Specifically, eXpose takes generic short character strings as its input and learns to detect whether they are indicators of malicious behavior based on their lexical semantics. In this paper, we demonstrate eXpose's ability to detect malicious URLs, malicious file paths, and malicious registry keys. To make our research objectives clear, below are examples of these data, starting with malicious URLs (we've substituted URL forward slashes for backslashes to avoid accidental clicks):

Next, a few examples of malicious file paths:

Finally, a few examples of malicious registry keys:

All of these examples appear malicious, or at least suspicious, to the expert eye, leading us to hypothesize that a machine learning system could also infer their maliciousness. It might even be possible to exceed a human expert's ability to guess whether these artifacts are malicious, by learning to recognize generalized deceptive patterns observed over tens of millions of malicious artifacts. And indeed, on all of the intrusion detection problems we tested, eXpose outperformed manual feature extraction based machine learning baselines, yielding a 5%-10% higher detection rate at deployment-relevant false positive rates. Our research demonstrates the potential deep learning methods hold for solving hard security detection problems.

The rest of this paper is structured as follows. In Section <ref> we describe related work. In Section <ref> we motivate and describe our eXpose approach, including an exact and reproducible description of our system. Section <ref> describes our evaluation methodology and results. Finally, in Section <ref> we sum up the paper and discuss directions for future work.

§ PREVIOUS WORK

§.§ Related work in computer security

We designed eXpose as a fairly generic detection tool that simultaneously addresses a number of cybersecurity problems.
This is somewhat different from previous work, which tends to focus on individual security detection problems, such as identifying malicious URLs or malicious host-based behavior individually. In this section we place our work in conversation with current cybersecurity literature and describe its relationship to the broader deep learning literature.

A number of previous works on machine learning based behavioral detection of malware are related to automatic classification of individual file paths or registry keys. In general, previous behavioral malware detection methods have focused on making detections on the basis of sequences of observed process or operating system-level events. For example, <cit.> proposes a logistic regression-based method for detecting malware infections based on n-grams of audit log event observations. Relatedly, <cit.> proposes to use an anomaly detection approach on sequences of registry accesses to infer whether a host has been compromised. <cit.> surveys a wide variety of behavioral malware detection techniques, all of which perform manual feature engineering on collections of events to infer whether dynamically executed binaries are malicious or benign.

Unlike the work summarized above, which operates on groups of dynamic host-based observations to detect malware, eXpose operates on individual events; rather than modeling individual host-based events using manually defined feature representations, eXpose learns representations of input strings (e.g. file paths and registry keys) as part of its overall process of learning to make accurate detections of malicious behavior. We thus think of eXpose as providing a complementary and orthogonal detection capability relative to these research efforts.

Unlike individual file and registry writes, identifying malicious URLs is a more studied problem in the security detection literature. Proposed malicious URL detection approaches have tended either to use URL strings exclusively as their input, or to utilize both URL strings and supplementary information like website registration services, website content, and network reputation <cit.>. In contrast to work that uses both input URLs and auxiliary information to detect malicious URLs, our work relies solely on URL input strings, making it easier to deploy.

With respect to the detection mechanisms used in previous URL detection work, the simplest proposed approaches have involved blacklists, which can be collected using manual labeling, user feedback, client honeypots, and other heuristics <cit.>. While blacklists have a very low false positive rate, they are also very brittle and thus cannot generalize to previously unseen URL strings <cit.>. To address these limitations, statistical approaches, such as machine learning or similarity based URL detection, have been proposed <cit.>. Unfortunately, manually discovering potentially useful features is time consuming and requires constant adaptation to evolving obfuscation techniques, which limits the achievable accuracy of the detectors. In contrast to work that requires manual feature extraction from URLs to make detections, our work automates this feature extraction process.
§.§ Machine Learning

§.§.§ Convolutional Neural Networks

eXpose uses neural network convolutional kernels as part of its approach to automating feature engineering and extraction, and so, in addition to computer security literature focused on detecting cyber attacks, our work is also related to the convolutional neural network and recurrent neural network literature in machine learning. Convolutional Neural Networks (CNNs) <cit.> have been applied to image recognition problems for a long time, but only fairly recently have they demonstrated breakthrough results in image recognition <cit.>. The advantage of CNNs over previous approaches is that they work directly on the raw pixel data, thus eliminating the tedious and fairly limited hand-designing of features. What makes CNNs particularly powerful for images is that they are able to efficiently exploit information locality by applying convolutional operations on raw data using a set of different kernels. These kernels are learned jointly with the entire network, and thus are better able to adapt to learning objectives than hand-designed kernels. Since the same kernel is applied to every pixel of the image, there is a tremendous reduction in the number of parameters that need to be learned, as compared to a fully dense neural network.

In addition to targeting image inputs, CNNs have also found rich applications within natural language processing, where they are typically applied to subsequences of words or patterns such that they perform pattern matching on sequential patterns within input texts. For example, <cit.> proposes to combine a word embedding approach with a convolutional neural network to perform sentiment analysis on Twitter data. Other authors propose machine learning approaches that operate on character-level embeddings <cit.>. The advantage of such approaches is that they do not require syntactical understanding of the language, such as word boundaries or punctuation. This work is closely related to our own, since we also model text strings at the character level, embedding them in an embedding space and then extracting features using convolutions. To our knowledge, our approach is the first computer security detector to take this approach, or indeed any approach that extracts features from raw inputs.

§.§.§ Recurrent Neural Networks

RNNs are commonly used to process sequential information, with LSTM-based approaches being among the more popular <cit.>. While theoretically they are able to learn long-term dependencies in sequences, RNNs are problematic to train due to the vanishing gradient problem, as well as their large computational cost, which stems from the need to sequentially forward- and back-propagate information during the training and prediction phases <cit.>. Recently, CNNs have been shown to be just as (if not more) effective on sequential modeling problems, while allowing significantly faster model training and prediction evaluation <cit.>.

§.§.§ Multi-task Learning

Multi-task learning using neural networks is an idea somewhat related to our generalized CNN model. There, a set of network layers is shared between various related learning tasks, and an individualized set of final layers is used to make the final prediction on the specific task <cit.>. Sharing internal weights potentially allows the network to learn a richer representation of the data, which is especially useful when individual datasets are very small <cit.>.
Another advantage of multi-task learning is that if a set of related classifiers needs to be deployed to an endpoint, weight-sharing provides a smaller deployment footprint. Multi-task learning has also been suggested for malware detection, by simultaneously trying to predict a binary classification as well as the malware family <cit.>. However, multi-task learning does not directly map to our set of detection problems, where the semantic meaning of the characters and substrings significantly changes between problems. Furthermore, our labeled dataset is of virtually unlimited size, making training on combined datasets less useful.

§ METHOD

eXpose is built on the premise that applying a neural network directly to the raw input of short character strings provides better classification accuracy than previous approaches that rely on hand-designed features. In this section we describe how we architected our neural network to operate directly on raw character input, and the intuition behind our decisions. Our network was implemented in Python 2.7 using Keras v1.1 <cit.>.

§.§ Architecture

Fig. <ref> gives an intuitive overview of our approach, showing that our neural network is divided into three notional components [What we mean here is that our overall model is most easily understood as containing three separate components, each focused on a somewhat different task. We used this notional hierarchy when developing our network's architecture. It is important to note, however, that the entire model is simultaneously optimized, end-to-end, and thus all components are optimized for the singular classification task. We can think of the entire model as some complex classifier, or alternatively, a deep feature extractor followed by a logistic regression.]: character embedding, feature detection, and a classifier. The character embedding component embeds the alphabet of printable English-language characters into a multi-dimensional feature space, thus encoding an input string's sequence of raw characters as a two-dimensional tensor. Using this tensor, the feature detection component detects important local sequence patterns within the full character sequence, and then aggregates this information into a fixed-length feature vector. Finally, the classification component classifies the detected features using a dense neural network. All of these components are optimized jointly using stochastic gradient descent. Fig. <ref> gives a formal diagram of our neural network architecture, which we also describe step by step in the text below.

§.§.§ Character Embedding

Our model starts its computational flow with the raw length-s sequence of characters and embeds them into an s× m floating point matrix. This operation is a simple dictionary lookup, where each character, irrespective of the characters that come before or after it, is mapped to its corresponding vector, and then these vectors are concatenated into this matrix. The matrix's rows represent the sequence of characters in the original string, and the matrix's columns represent the dimensions of the embedding space. The embedding layer is optimized jointly with the rest of the model through back-propagation, optimizing the individual characters' embedding vectors to be more reflective of their semantic meaning, resulting in pairs of semantically similar characters being embedded closer to each other if they have similar attributes (e.g. they are both uppercase, both control characters, etc.) <cit.>.
This clustering of semantically similar characters makes it simpler for the lower layers to identify semantically similar patterns in the string. In our implementation we set s=200 and m=32. For our URL-based experiments we use an input vocabulary of the 87 URL-valid characters, and for our file path and registry key experiments we use an input vocabulary of the 100 valid printable characters. Any unicode we encounter in our experiments we wildcard with a lower-case `x', reserving more sophisticated handling of international characters for future work. We set the maximum string length on all artifacts to 200, since this is around the 95th percentile or greater of all strings in our experimental URL, file path, and registry datasets. If the string is shorter than 200, we pad it with a special null symbol in the front. If the string is longer than 200, we cut off the beginning of the string. We empirically determined that m=32 provides a good tradeoff between accuracy and computational complexity. Note that 32 is much smaller than the potential 87-sized or 100-sized one-hot encoding of these same characters, which is a common way to represent categorical input in machine learning models.

We visually demonstrate that our trained model does indeed learn semantically related embeddings in Fig. <ref>, where we show a two-dimensional MDS projection of our learned embeddings. As Fig. <ref> shows, letters with similar semantics tend to cluster together, with upper case letters appearing near other upper case letters, lowercase letters near other lowercase letters, the tilde, a highly important character in the URL case, falling into its own cluster, etc. The fact that our character vectors cluster in this way suggests that our embedding representation is capturing their semantic meaning.

§.§.§ Feature Detection

Once we embed our input into an s × m matrix, the next step is extracting and aggregating locally detected features. This is done in two stages: in the first stage we detect local features by applying multiple kernel convolutions, Conv(t, k, n), and in the second stage we aggregate the results across the entire sequence by summing the kernels' activations using SumPool. We define both Conv and SumPool formally in our appendix. These steps are done separately for each k ∈{2,3,4,5}, and we empirically set t=256. The four results, one for each k tower, are then concatenated together into a 1024-length vector. The t filters in our CNN span the entire length of the character embedding m, and can be thought of as the “sliding” of convolution kernels (or masks) over the sequence of character embeddings. The motivation for using convolutions as our feature extraction component flows from similar approaches in natural language processing (NLP) <cit.>. Computing convolutions on the raw character embedding matrix is conceptually similar to traditional bag-of-words approaches.
The main conceptual difference is that rather than directly detecting n-grams, we allow for “approximate” matches on semantically similar substrings. In this manner of thinking, each convolutional filter is responsible for detecting a distinct set of similar sequential patterns, and by summing up its activations over a text string, we obtain the degree to which these patterns occur in the full string, similar to a bag-of-words aggregating all the n-grams. Just like the n-gram approach, our approach is robust to insertions and deletions within the character string, as subsequences can occur anywhere in the string and still be detected by the convolution.

§.§.§ Normalization and Regularization

To speed up model training and prevent overfitting, we use layer-wise BatchNorm and Dropout(0.5) (0.2 for registry keys) between layers (see Fig. <ref> for details). We define both BatchNorm and Dropout in our appendix below. We found that layer-wise normalization gave better results than the more popular batch normalization <cit.>, and that putting the normalization after the activation gave equivalent results to putting it before each unit's activation function. We also found that without regularization our model can easily overfit our training data, even when training on millions of samples.

§.§.§ Classification

Once we extract the features, we use a standard dense neural network to classify the string as malicious or benign. The dense neural network has two layers: a Dense(l) layer with l=1024 units, followed by a DenseSigmoid layer. We define both Dense and DenseSigmoid in our appendix below. The dense layer learns a non-linear kernel given the convolution-based features, and the sigmoid layer output provides the probability that the input string is malicious given the output of the final dense layer. We measure our detector's prediction loss using binary cross-entropy,

ℒ(ŷ, y) = -1/N∑_i^N [ y_i logŷ_i + (1-y_i) log (1-ŷ_i) ]

where ŷ is our model's prediction probability vector for all the URL samples and y is the vector of true labels (0 for benign, and 1 for malicious). We use the Adam <cit.> method to minimize eq. (<ref>).

Typically, it is much easier to collect benign than malicious data, resulting in a highly imbalanced dataset. Rather than simply reweighting individual samples to equalize the overall contribution of the benign and malicious classes in eq. (<ref>), we adjusted the benign-to-malware ratio directly during batch streaming. We generate each 256-sized batch by first randomly selecting 128 samples from the full dataset of benign samples, and then repeating the same approach for 128 malware samples. This effectively creates an even class balance between malicious and benign data in our training batches, with a more diverse representation of each class than simple reweighting of individual samples would give. We count one epoch as having processed 4096 batches, and train for 100 epochs. For our final solution we select the best overall model, determined by the largest area under the ROC curve (AUC) on the time-split validation.
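To make the pipeline above concrete, here is a minimal end-to-end sketch of the architecture and the balanced batch streaming (our own illustrative code, not the authors' released implementation; it assumes the Keras 1.x functional API, integer-coded input strings held in NumPy arrays, and the layer shorthand defined in the appendix; the BatchNorm layers are omitted for brevity):

```python
import numpy as np
from keras import backend as K
from keras.models import Model
from keras.layers import (Input, Embedding, Convolution1D, Activation,
                          Lambda, Dense, Dropout, merge)

s, m, t, l, vocab = 200, 32, 256, 1024, 87      # sizes from the text (URL case)

chars = Input(shape=(s,), dtype='int32')        # integer-coded character sequence
emb = Embedding(vocab + 1, m, input_length=s)(chars)   # index 0 reserved for padding

towers = []
for k in (2, 3, 4, 5):                          # one convolutional tower per kernel width
    conv = Activation('relu')(Convolution1D(t, k)(emb))          # Conv(t, k)
    towers.append(Lambda(lambda X: K.sum(X, axis=1),             # SumPool
                         output_shape=(t,))(conv))

feats = merge(towers, mode='concat')            # Merge: 4 * 256 = 1024-dim features
h = Dropout(0.5)(feats)
h = Activation('relu')(Dense(l)(h))             # Dense(l)
out = Activation('sigmoid')(Dense(1)(h))        # DenseSigmoid

model = Model(input=chars, output=out)
model.compile(optimizer='adam', loss='binary_crossentropy')

def balanced_batches(benign_x, malicious_x, half=128):
    """Stream batches with an even benign/malicious split, as in the text."""
    while True:
        b = benign_x[np.random.randint(len(benign_x), size=half)]
        mal = malicious_x[np.random.randint(len(malicious_x), size=half)]
        x = np.concatenate([b, mal])
        y = np.concatenate([np.zeros(half), np.ones(half)])
        perm = np.random.permutation(2 * half)  # shuffle within the batch
        yield x[perm], y[perm]
```

A generator of this kind can be passed to Keras 1.x's model.fit_generator, counting 4096 batches as one epoch as described above.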
§.§ Alternatives Attempted

During the development of our current architecture we also tried several alternative architectures that did not yield better performance, or were computationally too expensive:

* Replacing regular convolutions with a series of dilated convolutions <cit.>.
* Stacking embeddings, convolutions, and a long short-term memory recurrent layer <cit.>.
* Stacked convolutions to learn non-linear convolutional activations <cit.>.
* ResNet-type stacking of convolutions <cit.>.

§ RESULTS

We evaluate our model against two baseline approaches, described below, on three different problems. The first problem involves identifying malicious URLs directly from the URL string. For this problem we downloaded 19,067,879 unique URLs, randomly sampled over a roughly two-month period, from VirusTotal. For the second and third problems, identifying malicious registry keys and file paths, we extracted over 18 million Cuckoo sandbox runs, as recorded on VirusTotal, and utilized all the observed file and registry writes and creations. This gave us 5,590,614 unique file paths and 1,661,716 unique registry key paths. We give a detailed breakdown of these data in Table <ref>.

§.§ Labeling

Training and evaluating eXpose's performance requires assigning a binary label to every artifact in our experimental datasets indicating whether it is malicious or benign. To label URL artifacts we used a voting approach, in which we assigned a label to each URL based on the scores given by 59 anti-virus engines. If 5 or more of these engines assigned a “malicious” label to a URL we considered it malicious, and if no engines assigned a “malicious” label to a URL we labeled it benign. We discarded URLs with 1 to 4 anti-virus engine detections. Our motivation is that such URLs may or may not be malicious, and this uncertainty would introduce “label noise” into both the training of our model and our validation of its accuracy.

Since registry keys and file paths can be ambiguous (e.g., both malware and benignware can write to the same path), we took a different approach for labeling file and registry key paths. First, we labeled our corpus of binaries as either malicious or benign using a voting technique over an ensemble of 60 anti-virus engines, where binaries that had 5 or more anti-virus-based detections were labeled as malicious and binaries that had 0 detections were labeled as benign. We then discarded binaries with between 1 and 4 detections from our dataset, as we regarded the question of whether these binaries were malicious or benign as ambiguous. Next we inspected the behavioral traces of the resulting 18 million binaries, counting how often each unique file path or registry key was created or written to in both benign and malicious sandbox runs. Finally, we labeled any file paths or registry keys that only occurred in malicious contexts as malicious, and labeled the other artifacts (benign or ambiguous cases) as benign.

§.§ Baseline Models

We implemented two baseline models. One is a standard general n-gram feature extractor, where we extract the set of all possible n-grams of sizes 1-5. The second model, used only for URLs, is based on manually extracted features described in <cit.>. These features include common-sense statistics like: URL length, the number of `.'
separators in a URL, and categorical lexical features like domain name and URL suffix tokens. Combined together, they form a very large but sparse feature vector. To make training tractable at the scale of millions of examples, we use the feature hashing trick to randomly hash these features into 1024-dimensional vectors. Our motivation for picking a dimensionality of 1024 was two-fold. First, in order to tractably train our baseline models on millions of URL examples, a small feature vector is optimal. Second, given that the output of our novel model's feature extraction is 1024-dimensional, we were interested in comparing a conventional 1024-dimensional representation of URLs with our deep learning representation, thereby answering the question of which representation is the richer one for performing malicious URL detection. The above hashed features are fed directly into a deep MLP model. This MLP model is identical to our novel neural network model, except that we've stripped off the deep learning feature extraction layers, and replaced the input they provide with our manually constructed 1024-dimensional feature vector. This design is intended to highlight the potential contribution of our convolutional feature extractor in improving detection accuracy.

§.§ Evaluation

We present our results using ROC curves between true positive and false positive rates. This measure is independent of the ratio of benign to malicious samples in our dataset, and so is simplest to interpret. We focus on the low false positive rates of 10^-4 and 10^-3, which from our experience represent reasonable deployment thresholds. The ROC curves for all three problems are shown in Fig. <ref>, and the specific values are given in Table <ref>. In addition to the ROC curves, we also present the two-dimensional PCA projection of the normalized embedding vectors for all the individual characters in Fig. <ref>. The capital letters, lowercase letters, and numbers tend to cluster together, while important special symbols like “/” and “?” are fairly separated from the rest of the characters. The results support our intuition behind inserting the embedding layer to provide a richer n-gram-like detection, by clustering semantically similar characters together.

Our results show that across the board, convolutional feature extraction outperforms the other approaches. For example, at an FPR of 10^-3, eXpose has a 6% higher detection rate than n-gram or expert-derived features, with even larger improvements on the file path and registry key problems. The fact that our tuned expert features are not able to outperform n-grams is potentially explained by our large dataset size. This is consistent with the well-observed fact in the fields of NLP <cit.> and bioinformatics <cit.> that a bag of n-grams is a highly effective representation in itself. The fact that the convolutional network is able to exceed n-gram performance in this large dataset setting suggests that embeddings with convolutional networks can be used as powerful automatic feature extractors.

The overall results for the file path and registry key problems are worse than for URLs. This is not surprising because of the difficulty of properly labeling samples. Recall that our approach was to label any sample that had 0 occurrences in malware data as benign, while everything else was labeled malware. However, estimating whether the probability of a sample occurring in malware is truly 0 is difficult, because typically there is only one observation of each string in the datasets.
Furthermore, the file path and registry key problems have significantly less training data, which in our experience can significantly decrease our model's generalizability. While our CNN model outperforms the n-gram model, one potential explanation for the n-gram model's weaker performance is that there are too many collisions in the 1024-sized vector caused by feature hashing. Conversely, the vector may be too large for the neural network to capture good relationships between the features. Therefore, we have also done the n-gram experiment with 512- and 2048-sized feature vectors. These n-gram experiments yielded worse results than the 1024-sized result, and so are not shown.

We note that, potentially, extensive re-architecting of the n-gram model's neural network or switching to a different ML approach could yield better results than we presented. Furthermore, feature importance values, as computed using mutual information between the label and individual features, L1 logistic regression, random forests, or related methods, can be used to significantly reduce the number of n-gram features that are hashed into the vector, thus also potentially improving results. The downside is that these methods themselves require a significant amount of tuning and multiple passes through a very large dataset. The advantage of our end-to-end learning is that we work directly with raw data, and so can simply reuse the same loading of samples in small batches no matter the architecture. With this approach there is no loss of information of the kind that is inherent in feature engineering, enabling very rapid prototyping.

§ CONCLUSIONS

We developed and demonstrated the first, to our knowledge, convolutional neural network for automatically extracting features from short strings in the context of cybersecurity problems. Using embeddings with convolutions as top layers in our neural network, coupled with supervised training, allows us to implicitly extract a feature set that is directly optimized for classification. While similar approaches have been suggested for NLP, eXpose is the first approach that demonstrates how a top-to-bottom deep-learning method can be adapted to several important cybersecurity problems in an adversarial environment, where strings are purposely obfuscated to prevent obvious feature extraction.

One of the major issues during our experimentation was the computational cost of training on longer strings, which prevented us from trying more complex architectures. With current advances in hardware and the distributed training modules added to modern frameworks, our results can potentially be further improved with some of the more computationally expensive architectures that we were unable to try. Looking forward, we hope that the ideas integrated into eXpose will help guide the security industry in moving away from expensive feature engineering toward directly utilizing already existing labeled datasets for end-to-end learning. As hardware and available datasets improve, the difference between automatically extracted features and traditional feature extraction approaches will only get starker.

§ APPENDIX

§.§ Components

Our convolutional neural network is implemented in Python 2.7 using Keras v1.1 <cit.>.
Below we describe our model's pre-defined set of components (or layers), which are described in terms of Keras built-in layers documented online:

§.§.§ Embedding(s,m)

An embedding layer that takes in a list of s integers representing the URL character list (each unique character is mapped to an associated unique integer in the range [1,Σ]), and outputs a matrix of floating point values, where each original scalar integer value is represented by an m-dimensional embedding vector. Σ is the size of the alphabet used to express the URLs. This operation is defined in Keras as Embedding(Σ+1, m, input_length=s).

§.§.§ Conv(t, k, n)

A filter bank of t k-length one-dimensional convolution kernels that convolve n adjacent m-dimensional vectors, and are immediately followed by a non-linear ReLU activation. Defined in Keras as Convolution1D(t, k, input_shape=(s,m)), followed by Activation('relu'). Note that we drop m in our notation, since it can be inferred from the previous layer, or is otherwise defined in the text.

§.§.§ BatchNorm

Layer-wise batch normalization. Defined in Keras as BatchNormalization(mode=1).

§.§.§ SumPool

Sum of the input along the input length s, such that the output size is k, given the input size (s,k). The operation is defined in Keras as Lambda(f, output_shape=(k,)), where f(X)=K.sum(X, axis=1).

§.§.§ Dropout(p)

Dropout with probability p <cit.>. Defined in Keras as Dropout(p).

§.§.§ Merge

A merge operation that takes the output from a previous set of layers, {k_1, k_2, k_3, k_4}, and concatenates them into a single matrix, [ k_1, k_2, k_3, k_4 ]^T. Defined in Keras as Merge(..., mode="concat").

§.§.§ Dense(l)

A fully connected linear unit with output size l, followed by a ReLU non-linear activation. Defined in Keras as Dense(l), followed by Activation('relu').

§.§.§ DenseSigmoid

The last layer, used to generate a binary decision. The same as Dense(1), but followed by a sigmoid (instead of ReLU) activation, defined in Keras as Activation('sigmoid').

§ ACKNOWLEDGMENT

We would like to thank Richard Harang and Joe Levy for their valuable feedback on early drafts of the manuscript and Hillary Sanders for in-depth discussion of our URL results.
http://arxiv.org/abs/1702.08568v1
{ "authors": [ "Joshua Saxe", "Konstantin Berlin" ], "categories": [ "cs.CR", "cs.LG" ], "primary_category": "cs.CR", "published": "20170227223213", "title": "eXpose: A Character-Level Convolutional Neural Network with Embeddings For Detecting Malicious URLs, File Paths and Registry Keys" }
For an endomorphism it is known that if all the points in the manifold have dense sets of pre-images, then the dynamical system is transitive. The converse has been shown for a residual set of points, but the exact converse has not been investigated before. Here we are going to show that under some conditions it is true for Anosov endomorphisms on closed manifolds, by using the fact that Anosov endomorphisms are covering maps.

Topology of pre-images under Anosov endomorphisms

Mohammad Saeed Azimi and Khosro Tajbakhsh

December 30, 2023
========================================================================================================

§ INTRODUCTION

It is well known for non-injective endomorphisms that if for every point the set of pre-images of that point is dense in the manifold, then the endomorphism is transitive (i.e. there exists a point whose orbit is dense in the manifold), and in <cit.> Lizana and Pujals have used this to prove rigidity of transitivity for a special class of endomorphisms on 𝕋^n. A very important class of endomorphisms is the class of Anosov endomorphisms. In <cit.>, Lizana, Pinheiro and Varandas have shown that for robustly transitive local diffeomorphisms there is a residual set of points in the manifold such that each point in this set has a dense set of pre-images. Here, we use a topological approach, especially the fact that Anosov endomorphisms on a closed manifold are covering maps. We are going to investigate in particular the pre-images of periodic points, and show that the converse of the well-known result above is true for transitive Anosov endomorphisms under some conditions on the geodesics defined by the eigenvectors of Df_x at every point; the set of pre-images of every point is then dense in the manifold. We will also introduce a counterexample for the situation without those conditions. In this paper, we take all manifolds to be closed Riemannian manifolds.

Starting from <cit.> and <cit.>, the definition of Anosov endomorphism has been an important generalization of the well-known definition of Anosov diffeomorphisms. Let f∈ Diff^r(M). A compact subset Λ⊂ M is called hyperbolic with respect to f if for every point p∈Λ there is a splitting T_pΛ=E^s _p⊕ E^u _p and there are C>0 and 0<λ<1 such that Df(E^s _p)=E^s _f(p), Df(E^u _p)=E^u _f(p), and for all integers n≥ 0,

∀ v∈ E^s _p || Df_p ^n v||≤ Cλ ^n||v||,
∀ u∈ E^u _p ||Df_p ^-nu||≤ Cλ ^n||u||.

If Λ =M then f is called an Anosov diffeomorphism.

<cit.> Define A:𝕋^2→𝕋^2 to be

A=[ 2 1; 1 1 ] (mod 1).

This is a linear map on ℝ^2 and its eigenvalues are (3±√(5))/2, which are respectively greater and less than one, and the corresponding eigendirections give the splitting at every point of 𝕋^2, so it is an Anosov diffeomorphism. Also note that det A=1.

Considering a map f:M→ M, for every point x∈ M the orbit of x, O_x, is {f^n(x)|n∈ℕ}. A trajectory of x is a sequence (x_j)_j∈ℤ such that x_0=x and f(x_j)=x_j+1 for all j. Notice that if f is not injective then the number of trajectories through x can be greater than one, while if f is injective then the trajectory of each point is unique. In the case where the map is not injective, hyperbolicity is defined considering not just the points but their trajectories under the map.
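For instance (a standard illustration, stated here for concreteness), take the doubling map f(x)=2x (mod 1) on S^1 and the point x=0. A backward step from 0 can land on either of its two pre-images, 0 or 1/2; a backward step from 1/2 can land on 1/4 or 3/4, and so on, so there are 2^n choices after n backward steps and uncountably many full trajectories (x_j)_j∈ℤ with x_0=0, even though the forward orbit of 0 is unique.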
Let f:M→ M be a local diffeomorphism. f is called an Anosov endomorphism if for every trajectory (x_i)_i∈ℤ with respect to f and all i∈ℤ we have Df(E^s _x_i)=E^s _f(x_i), Df(E^u _x_i)=E^u _x_i+1, T_x_iM=E_x_i ^s⊕ E_x_i ^u, and there exist C>0 and 0<λ<1 such that

∀ v∈ E^s _x_i ||Df_x_i ^n v||≤ Cλ ^n ||v||,
∀ u∈ E^u _x_i ||Df_x_i ^n u||≥ Cλ ^-n ||u||.

There is also another way to define an Anosov endomorphism:

<cit.> A C^1 local diffeomorphism f:M→ M is called an Anosov endomorphism if Df uniformly contracts a continuous sub-bundle E^s⊂ TM into itself, and the action of Df on TM/E^s is uniformly expanding.

An important consequence of the definitions above is the continuity of the splitting defined in them (see <cit.>, <cit.>).

<cit.> Define B:𝕋^2→𝕋^2 to be

B=[ n 1; 1 1 ] (mod 1), n∈{3,4,5,...}.

The eigenvalues are ((n+1)±√((n+1)^2-4(n-1)))/2 and, for n>2, as in the previous example, both of them are greater than zero, one of them is less than one and the other is greater than one, and the corresponding eigendirections give the splitting over the whole manifold, so according to definition <ref> this is an Anosov endomorphism. Note that det B=n-1≥ 2, so the map is a genuine (non-invertible) endomorphism.

The main difference between Anosov diffeomorphisms and Anosov endomorphisms shows up in the matter of structural stability. In his thesis, Shub claimed that, by a procedure similar to that for expanding maps, non-injective Anosov endomorphisms are structurally stable. But in <cit.>, Przytycki proved him wrong, although in the same paper he showed the inverse limit stability of Anosov endomorphisms. Another main difference, as mentioned above, is that the unstable manifolds are defined in terms of trajectories, so that they can be non-unique <cit.>.

An important characteristic of non-injective Anosov endomorphisms is that they are non-trivial covering maps of the manifolds on which they are defined <cit.>. In this paper we are going to use this property, among other things, to show that under a certain condition an Anosov endomorphism is transitive if and only if the set of pre-images of any point is dense in the manifold.

[Main theorem] Let f:M→ M be an Anosov endomorphism such that for every point x∈ M the geodesics defined by the eigenvectors of Df_x are dense in M, or f is a product of maps with this condition. Then the set of pre-images of each point is dense in M if and only if f is transitive.

§ PROOF OF THE MAIN THEOREM

A continuous map f:M→ M is called transitive if for every pair of non-empty open sets U,V⊂ M, there exists n∈ℕ such that f^n(U)∩ V≠∅.

There is this well known proposition about transitivity:

(<cit.>, proposition 2.2.1) Let M be a complete space without any isolated point and f:M→ M continuous. Then f is transitive if and only if there exists p∈ M such that O_p is dense in M.

In the context of dynamical systems, because of the manifolds taken into account, the proposition above is often considered as the definition of transitivity. Another well known result in the matter of transitivity concerns hyperbolic linear automorphisms:

<cit.> Let A:𝕋^2→𝕋^2 be a hyperbolic linear toral automorphism. Then A is transitive.

If f is a diffeomorphism then definition <ref> is also true for f^-1, so in it the set ℕ can be changed to {-1,-2,-3,...} and the definition remains intact.
But in the case of Anosov endomorphisms f^-1 is meaningless; we can, however, still investigate the set of pre-images of the points of M under Anosov endomorphisms. In the following, we also need these two definitions.

Let f:M→ M be a transitive Anosov endomorphism. For each point x∈ M we call dim(E^s _x) the index of f.

Note that because of the continuity of the splitting in the definition of Anosov endomorphisms, and because the map is transitive, the index of f does not depend on the point.

Let f:M→ M be an Anosov endomorphism with n pre-images for each point in M (n sheets for the covering it makes); we call n the degree of the Anosov endomorphism f.

For Anosov maps the degree is the same for every point, because if there were points with different degrees then the map would have singularities at some points, which is not possible for Anosov maps. By simple modifications, the results of this paper can also be extended to maps that have finitely many degrees over the manifold. Anosov endomorphisms are covering maps and, except for Anosov diffeomorphisms, they are non-trivial ones, and the manifolds on which they are defined are evenly covered <cit.>. Since in this paper we take the manifold M to be closed, there is a finite number of sheets for these covering maps, and the number equals the degree of the Anosov endomorphism. Also, the determinant of the Jacobian of the Anosov endomorphism equals the degree of the map <cit.>.

Let f:M→ M be a transitive Anosov endomorphism; then (f,M) is a cover for M. Considering the endomorphism f, because M is compact there is a finite number of sheets (equal to the degree of f), S(1),S(2),S(3),...,S(k)⊂ M, each of them homeomorphic to M under f|_S(i):S(i)→ M, and for every point x∈ M there is a d_1>0 such that if i≠ j then d(x(i),x(j))>d_1 for all x(i) and x(j) in f^-1(x) lying, respectively and uniquely, in S(i) and S(j). Also, for every 1≤ j≤ k, S(i,j):=(f|_S(i))^-1(S(j))⊂ S(i) and f^2|_S(i,j):S(i,j)→ M is a homeomorphism. This also means that the interior of S(i,j) is not empty and vol(S(i,j))>0 for all i and j. So (f^2,M) is a cover for the manifold with exactly k^2 sheets, such that there are k sheets as subsets of each S(i); we denote them by S(i,1),S(i,2),…,S(i,k)⊂ S(i), and each of them is homeomorphic to M by f^2. So, considering all the S(i)s, there are k^2 sets S(i_1,i_2)⊂ M. By induction, for every n∈ℕ, (f^n,M) is a cover for M with k^n sheets. Also, M is evenly covered and the S(i_1,i_2,i_3,…,i_n)s do not intersect. For every sheet S(i_1,…,i_n), the map f^n|_S(i_1,…,i_n):S(i_1,…,i_n)→ M is a homeomorphism, and there is d_n>0 such that for every pair of n-th pre-images of x, x(i_1,…,i_n) and x(j_1,…,j_n), lying respectively in the sheets S(i_1,…,i_n) and S(j_1,…,j_n), we have d(x(i_1,…,i_n),x(j_1,…,j_n))>d_n. We saw that the S(i_1,…,i_n-1,i_n)s are subsets of S(i_1,…,i_n-1) and, following this step by step, finally they are subsets of S(i_1). In every sheet of (f^r,M) there are k sheets of (f^r+1,M) and d_r+1=d_r/k, so d_r+1=d_1/k^r, and so on.

About the distribution of the sheets of the covers (f^n,M) we have: for every open set U⊂ M and for all n∈ℕ, there is a sheet S(i_1,…,i_n) of the cover (f^n,M) such that S(i_1,…,i_n)∩ U≠∅. As l∈ℕ gets larger, if we consider the sheets S(i_1,…,i_n,…,i_n+l)⊂ S(i_1,…,i_n) of (f^n+l,M), then there exists N∈ℕ such that for all m>N we have S(i_1,…,i_m)∩ U≠∅ and, for all l∈ℕ, S(i_1,…,i_m,…,i_m+l)∩ U≠∅.
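As a simple illustration of this sheet structure (a standard example, included here for concreteness), consider again the doubling map f(x)=2x (mod 1) on S^1, for which k=2. The two sheets of (f,S^1) are S(1)=[0,1/2) and S(2)=[1/2,1), the pre-images of a point x_0 under f^n are the 2^n points (x_0+j)/2^n, j=0,1,…,2^n-1, and d_n=d_1/k^n-1=2^-n; hence every open set U⊂ S^1 meets some sheet S(i_1,…,i_n) as soon as n is large enough.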
If f were an expanding map, then the sheets intersecting U would eventually be included in U, which would give the density of the pre-images of every point (see proposition <ref>). Similar to the diffeomorphism case we have the following two propositions.

Let M be a compact metric space and f:M→ M an endomorphism. If f is transitive then for every pair of non-empty open sets U and V in M, there is n∈ℕ such that f^-n(U)∩ V≠∅.

Suppose U and V are open sets in M and let k be the degree of f. There is n∈ℕ such that f^n (U)∩ V≠∅, and then f^-n(f^n (U)∩ V)≠∅; but f^-n(f^n (U)∩ V)=f^-n(f^n(U))∩ f^-n(V), and f^-n(f^n(U)) is the union of the sets U(i_1,…,i_n)=f^-n(f^n(U))∩ S(i_1,…,i_n). One of the U(i_1,…,i_n) is U itself and, because f^n is a covering map, each of the U(i_1,…,i_n)s is homeomorphic to U and U(i_1,…,i_n)∩ f^-n(V)≠∅ for all (i_1,…,i_n) (i_r∈{1,…,k}). Hence U∩ f^-n(V)≠∅.

The following proposition is a crucial fact about Anosov endomorphisms.

(<cit.>, proposition 3.2) Let f:M→ M be an Anosov endomorphism; then the closure of Per(f) equals Ω(f).

Now we want to see whether there is a point whose set of pre-images is dense in the manifold. First we have this rather obvious result.

Let f:M→ M be a transitive Anosov endomorphism; if a set is dense in M then the set of its pre-images is also dense in M.

f is an Anosov endomorphism, so, as mentioned above, f^n is a covering map of M for every n∈ℕ; therefore each sheet of every cover (f^n,M) of M is homeomorphic to M, so if a set is dense in M then its pre-image in each sheet of the cover is dense in that sheet. M is the union of the sheets of the cover (f^n,M). Thus the set containing the union of the pre-images of a dense subset of M is dense in M.

This implies that points with dense orbits have dense sets of pre-images.

Let M be a closed manifold and f:M→ M an Anosov endomorphism; then every point with a dense orbit has a dense set of pre-images.

Suppose that p∈ M is a point with dense orbit. For each ϵ>0 there exists n∈ℕ such that {f(p),f^2(p),…,f^n(p)} is ϵ-dense in M. In every sheet S(i_1,i_2,…,i_n)⊂ M of the cover (f^n,M), the subset of pre-images of the point p, (f^n |_S(i_1,i_2,…,i_n))^-1({f(p),f^2(p),…,f^n(p)}), is homeomorphic to {p,f(p),f^2(p),…,f^n-1(p)} under f^n:S(i_1,i_2,…,i_n)→ M, and it is ϵ-dense in M. Because ϵ and also S(i_1,i_2,…,i_n) are chosen arbitrarily, by Lemma <ref> the set of the pre-images of p is dense in M.

Notice that because a linear Anosov endomorphism is transitive and there is a large set of points with dense orbit under it in 𝕋^n, the Lemma and proposition above hold for such systems. In particular, because the points with dense orbit are dense in 𝕋^n, Lemma <ref> shows that the set of points with dense set of pre-images is at least dense in 𝕋^n. We are going to investigate this more precisely on closed manifolds. By modifying an important result about Anosov diffeomorphisms <cit.>, we have:

The set of points with dense set of pre-images under a transitive Anosov endomorphism is at least a dense set in M.

For every ϵ>0 there exists a finite basis β _ϵ={B_1,B_2,…, B_n} for M consisting of ϵ-discs. Denote ∪ _i=1 ^∞ f^i (B_j) by E_j. Because f is an Anosov endomorphism it is an open map, and because f is also transitive, E_j is open and dense. M is a Baire space, so ∩ _j=1 ^n E_j≠∅, and there exists a point p∈∩ _j=1 ^n E_j; then for every 1≤ j≤ n there is i∈ℕ such that p∈ f^i(B_j). So f^-i(p)∩ B_j≠∅.
Because this is true for all ϵ>0 and all points in ∩ _j=1 ^n E_j, the set of points with dense set of pre-images is dense in M.

<cit.> Let f:M→ M be an Anosov endomorphism. Then there is ϵ such that for any trajectory (x_i)_i∈ℤ of any x∈ M the set

W^s _x_i,ϵ={y∈ M|∀ n∈ℕ d(f^n(y),f^n(x_i))<ϵ}

is a manifold, which is called the local stable manifold of x_i, and the set

W^u _x_i,ϵ={y∈ M|∃ (y_n)_-∞ ^0 ∀ n∈ℕ d(y_-n,x_i-n)<ϵ}

is a manifold, which is called the local unstable manifold of x_i related to the trajectory (x_i)_i∈ℤ under f.

Following the theorem above, the sets

W^s _x=∪ _n=0 ^∞f^-n(W^s _f^n(x),ϵ)

and

W^u _x=∪ _n=0 ^∞f^n(W^u _x_-n,ϵ)

are called, respectively, the stable and unstable sets of the point x∈ M. Notice that W^s(u) _x={y∈ M| d(f^n(y),f^n(x))→ 0 as n→∞}. Also note that the stable and unstable sets defined above may not even be submanifolds if the degree of f is greater than one.

If f:M→ M is a transitive Anosov diffeomorphism, then the stable and unstable manifolds of every point are dense in M <cit.>. An essential concept that makes this happen is the local product structure of M under f <cit.>. An endomorphism is a local diffeomorphism, so by choosing τ such that W^u _τ is unique for each point, and modifying the definition for the Anosov endomorphism case, we have:

A closed hyperbolic invariant set is said to have a local product structure if for small ϵ<τ and δ, W^u _ϵ,x∩ W^s _ϵ,y consists of a unique point belonging to the hyperbolic set whenever d(x,y)<δ.

Also, in <cit.> Przytycki has shown this in the inverse limit space. Therefore, exactly as in the diffeomorphism case <cit.>, we have:

Let f:M→ M be a hyperbolic endomorphism; if Per(f) is hyperbolic then it has a local product structure.

The maps we are studying are Anosov and, by proposition <ref>, the set of periodic points is dense in M, so the whole manifold has a local product structure under f; modifying proposition 5.10.3 of <cit.>, we have:

Let f:M→ M be an Anosov endomorphism with Ω(f)=M; then the pre-images of the stable and unstable sets are dense in M.

By an argument like that of the diffeomorphism case, the unstable manifold of a point is dense in M; Przytycki in <cit.> has also proved this by lifting f to the inverse limit space. So by proposition <ref> its set of pre-images is dense in M. We now show that the set of pre-images of the stable manifold of every point is dense. By proposition <ref>, the set of periodic points of f is dense in M, so it is ϵ-dense in every sheet of each cover (f^n,M), for every n∈ℕ. Suppose that ϵ is chosen such that there exists δ>0 with the property that if d(x,y)<δ (x,y∈ M), then for each trajectory (y_i)_i∈ℤ, W^s _ϵ,x∩ W^u _ϵ, (y_i) contains exactly one point; following the statements before the proposition, if ϵ is small enough, this meets the conditions of the local product structure definition. Now consider B:={p_i∈ Per(f)|i=1,2,…,N}, an ϵ/4-dense set in M, such that the local unstable manifold of each point in B transversally intersects the local stable manifolds of the points in B that are ϵ-close to it. Suppose that τ∈ℕ is the product of the periods of all the points in B, and put g=f^τ. Suppose that S(j_1,j_2,…,j_r) is a sheet of the cover (f^r,M) (let deg(f)=k) and {p_i (j_1,…,j_r)|i=1,2,…,N} is the pre-image of B in S(j_1,…,j_r) under g. Let W^s _x (j_1,…,j_r) be the pre-image of W^s _x, for every x∈ M, in S(j_1,…,j_r).
We have:

With the assumptions above, if d(W^s _y (j_1,…,j_r),p_i)<ϵ/2 and d(p_i,p_l)<ϵ/2, then there are m∈ℕ and S(j_1,…,j_r,…,j_r+l), a sheet of the cover (g^m,M) and a subset of S(j_1,…,j_r), such that

d(g^-m(W^s _y (j_1,…,j_r,j_r+1)),p_i (j_1,…,j_r,j_r+1))<ϵ/2

and

d(g^-m(W^s _y (j_1,…,j_r,j_r+1)),p_l (j_1,…,j_r,j_r+1))<ϵ/2.

There exists z∈ W^s _y (j_1,…,j_r)∩ W^u _ϵ/2,p_i(j_1,…,j_r), so there is t_0∈ℕ such that d(g^t (z),p_i)<ϵ/2 for every t>t_0. So d(g^-t (z),p_l)<ϵ. Therefore, as in the previous step, there exists a point w∈ W^s _g^t (z)(j_1,…,j_r,…,j_t)∩ W^u _ϵ/2,p_l(j_1,…,j_r,…,j_t). Hence there is b_0∈ℕ such that g^-b(w)∈ S(j_1,…,j_r,…,j_t,…,j_b) and d(g^-b (w),p_l)<ϵ/2 for every b>b_0. Taking S(j_1,…,j_r,…,j_r+l)=S(j_1,…,j_r,…,j_r+t,…,j_b) and m=b_0 +t_0 completes the proof.

Since M is compact and connected, any two periodic points p_1 and p_2 can be connected by a path containing no more than N periodic points, with distance less than ϵ/2 between any two consecutive ones. By the Lemma above, for any x∈ M and ϵ>0, g^-Nm(W^s _x) is ϵ-dense in a sheet of the cover (f^Nmτ,M) that is a subset of S(j_1,…,j_r). Because this holds for every ϵ, and the sheet S(j_1,…,j_r) is chosen arbitrarily, the proposition follows.

We saw that the set of pre-images of a point with dense forward orbit under a linear Anosov endomorphism A:𝕋^n→𝕋^n which is not an expanding map is dense in 𝕋^n. For Anosov diffeomorphisms this is all there is to say, but for expanding maps we have this well known result:

Let f:M→ M be an expanding map; then each point in M has a dense set of pre-images in M.

Suppose that D_ϵ is an ϵ-disc in M, for every ϵ>0. Since f is an expanding map, there exist H⊂ D_ϵ and n∈ℕ such that f^n(H)=M. Therefore for every p∈ M there is x∈ f^-n(p)∩ D_ϵ.

For Anosov endomorphisms which are neither diffeomorphisms nor expanding maps the situation is different: it differs from the diffeomorphism case because they are non-trivial covering maps, and from the expanding case because they also have a contracting factor. Therefore, in addition to the points with dense orbit, we are going to investigate the pre-images of points whose orbits, and hence whose ω-limit sets, have various topological properties. Notice that each periodic point of an Anosov endomorphism is also the image of a non-periodic point. This is because the degree of the Anosov endomorphism is greater than 1, so the pre-image set of any point contains more than one point, of which at most one is periodic. So we have:

Let f:M→ M be an Anosov endomorphism; then the set of pre-images of the set of all periodic points, ∪_n∈ℕ f^-n(Per(f)), where f^-i(Per(f))=∪_x∈ Per(f)f^-i(x), is dense in M.

Because f is an Anosov endomorphism on a closed manifold M, Per(f) is dense in M. So by Lemma <ref> the set containing all the pre-images of all the periodic points is dense in M.

Now we investigate the pre-images of an arbitrary periodic point under transitive Anosov endomorphisms. First, there are examples in which the pre-images of at least some of the fixed points are not dense in the manifold.

Define B:𝕋^3→𝕋^3 to be

B=[ 2 0 0; 0 2 1; 0 1 1 ] (mod 1).

The eigenvalues are 2 and (3±√(5))/2, and it is a transitive Anosov endomorphism defined by the product of the doubling map on S^1 and the map A of example <ref> on 𝕋^2. For any point in 𝕋^3, the geodesic defined by the eigenvector of the eigenvalue 2 is a circle S^1, which is not dense in M. Obviously the set of pre-images of the fixed point (0,0,0)∈𝕋^3 is dense in S^1×{(0,0)} and is not dense in M.
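A short computation makes the example explicit: det B = 2·(2·1-1·1) = 2, so B has degree 2; the first coordinate carries the doubling map x↦ 2x (mod 1) on the S^1 factor, while the remaining 𝕋^2 factor carries the diffeomorphism A of example <ref>, which has det A=1 and hence a unique pre-image of the origin. Consequently B^-n((0,0,0))={(j/2^n,0,0) | j=0,1,…,2^n-1}, which is 2^-n-dense in S^1×{(0,0)} but stays at a fixed positive distance from the rest of 𝕋^3.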
This can also be stated as follows: if there exists a factor (a semi-conjugate map) f:N→ N for the Anosov endomorphism F:M→ M such that f is an Anosov endomorphism or an expanding map, and the projection of F on M/N is an Anosov diffeomorphism, then the pre-images of any point p∈ M are distributed in {p}× N and, if p is fixed under F, then, as in the example above, its set of pre-images is not dense in the whole manifold. In the study of rigidity of Anosov group actions this condition is commonly called reducibility <cit.>.

So it is possible for the pre-images of a fixed point (or periodic point) under an Anosov endomorphism to be dense only in a non-trivial subset of M, but in many cases they are dense in M:

Let f:M→ M be a transitive Anosov endomorphism such that for every point x∈ M the geodesics defined by the eigenvectors of Df_x are dense in M. Then the periodic points have dense sets of pre-images under f.

Without any loss of generality, let p∈ M be a fixed point, let k be the degree of f, and let {x(i)∈ S(i)|i∈{1,…,k}}=f^-1(p). Then define, for each x(i)∈ f^-1(p),

α_-1(x(i)):=min_x(j)∈ f^-1(p), j≠ i d(x(i),x(j))

and

β_-1:=max_x(i)∈ f^-1(p)α_-1(x(i)).

β_-1 is the maximum possible distance between a point of f^-1(p) and the nearest other point of f^-1(p). Then, for every n∈ℕ, define β_-n for the points in f^-n(p) in the same way. The (f^n,M)s are covers for M and, since M is a closed manifold, as n gets larger the volume of each sheet of the cover gets smaller accordingly, and so does the distance between the pre-images of each point (see remark <ref>). Hence for every point x(i_1,…,i_n) in f^-n(p)={x(i_1,…,i_n)∈ S(i_1,…,i_n)|i_1,…,i_n∈{1,…,k}},

α_-n(x(i_1,…,i_n))=min_x(j_1,…,j_n)∈ f^-n(p), (j_1,…,j_n)≠(i_1,…,i_n) d(x(i_1,…,i_n),x(j_1,…,j_n))≤α_-1(x(i_1))/k^n

and similarly

β_-n=max_x(i_1,…,i_n)∈ f^-n(p)α_-n(x(i_1,…,i_n))≤β_-1/k^n.

This means that for every ϵ>0 there is n∈ℕ such that β_-n<ϵ. Now connect each point x(i_1,…,i_n) to its nearest points in f^-n(p) by geodesics of length α_-n(x(i_1,…,i_n)). By this procedure we obtain a subset of M consisting of connected components c^n _1,…,c^n _t_n (t_n∈ℕ), in which f^-n(p) is β_-n-dense. These components are disjoint because every point in one of them is closer to other points of the same component than to the points of the other components. Now connect the components by geodesics c^n(i,j), i,j∈{1,…,t_n}, joining the two points x(i_1,…,i_n)∈ c^n _i and x(j_1,…,j_n)∈ c^n _j that have the least distance. We call the resulting set ξ^-n(p). For every ϵ there are m>n and the cover (f^m,M) such that for x(i_1,…,i_n)∈ c^n _i and x(j_1,…,j_n)∈ c^n _j, d(f^-m(x(i_1,…,i_n)),f^-m(x(j_1,…,j_n)))<ϵ. Now, connecting the points in f^-m(p) by geodesics and repeating the process above, we obtain ξ^-m(p), in which f^-m(p) is ϵ-dense. Thus for every ϵ there are m∈ℕ and ξ^-m(p)⊂ M such that f^-m(p) is an ϵ-dense subset of ξ^-m(p). As m goes to infinity there is a subset of M in which lim_m→∞f^-m(p) is dense:

There exists the set ξ_p:=lim_m→∞ξ^-m(p) in which lim_m→∞f^-m(p) is dense.

Suppose the opposite: there are N∈ℕ and ϵ such that for all n>N there exists a point x∈ξ^-n(p) with d(x,f^-n(p))>ϵ.
Then, by the definition of the ξ^-n(p)s, for all n>N there are x(i_1,…,i_n) and x(j_1,…,j_n) in f^-n(p) such that for every two trajectories (x_-m)_m∈ℕ and (y_-m)_m∈ℕ with x_-m and y_-m respectively in f^-m(x(i_1,…,i_n)) and f^-m(x(j_1,…,j_n)),

lim_m→∞ d(x_-m,y_-m)≠0,

which, by remark <ref>, contradicts the definition of the x(i_1,…,i_n)s.

Now let D_δ(p) be a δ-disc around p, where δ<τ as in definition <ref>, let the V_is be the eigenvectors of Df_p, and let the W^i _ps be the geodesics defined by the V_is. Also set W^i _p,δ:=W^i _p∩ D_δ (p). Let (x_j) be a trajectory of p. In taking pre-images under f, the contraction acts on the W^i _x_js with W^i _p,δ⊂ W^u _p,δ, and the sheets of the covers (f^n,M) arise from that contraction. Also f^-n(p)⊂ f^-n(W^u _p,δ). So if there exist r_1,r_2,…,r_l such that W^i _p,δ⊂ W^u _p,δ for i=r_1,…,r_l and the W^i _ps are not dense in M, then there exists a nowhere dense subset L of M, defined at each point y∈ L by parallel translation of the V^i _ps. Then f is an expansion on L and, for all n∈ℕ, f^-n(L)⊂ L; so if p∈ L then all the pre-images of p remain in L, and hence they are not dense in M. So those W^i _ps with W^i _p,δ⊂ W^u _p,δ have to be dense in M. The procedure above is the geometric counterpart of irreducibility, because it implies that there cannot be any non-trivial endomorphism factor of the Anosov endomorphism (see remark <ref> and example <ref>).

Now, following the proof, let ξ_p,δ:=ξ_p∩ D_δ(p), suppose that each W^i _p with W^i _p,δ⊂ W^s _p,δ is dense in M (notice that then W^i _x is also dense in M for every x∈∪_n∈ℕf^-n(p)), and define the ξ_p,δ ^is by the canonical projections π_i:ξ_p,δ→ W^i _p for which W^i _p,δ⊂ W^s _p,δ. Then for every such W^i _p, lim_m→∞f^-m(ξ_p,δ ^i) is dense in it and, since each W^i _p is dense in M, lim_m→∞f^-m(ξ_p,δ ^i) is dense in M. Thus lim_m→∞f^-m(ξ_p) is dense in M, and therefore the set of pre-images of p is dense in M.

Since linear Anosov endomorphisms are transitive, the proposition above gives us:

Let A:𝕋^n→𝕋^n be a linear Anosov endomorphism of degree greater than one whose eigenvectors define dense geodesics in 𝕋^n. Then the set of pre-images of any periodic point is dense in 𝕋^n.

Let f:M→ M be an Anosov endomorphism. If the set of pre-images of a point p∈ M under f is dense in M, then the points in W^s (p) and W^u (p) have dense sets of pre-images under f.

For all x∈ W^s _p (p∈ M) we have ω(x)=ω(p). So if O_p has a dense set of pre-images in M, then the set of pre-images of ω(x) is dense in M. Since ω(x) and, following it, O_x are dense in M, by lemma <ref> the pre-images of O_x are dense in M. Hence the set of pre-images of x is dense in the manifold. If x∈ W^u _p, then O_p⊂α (x), and clearly if O_p is dense or its set of pre-images is dense in M, then x has a dense set of pre-images.

Let f:M→ M be a transitive Anosov endomorphism of degree greater than one such that for every point x∈ M the geodesics defined by the eigenvectors of Df_x are dense in M. Then every point which is not periodic, or whose ω-limit set does not have a dense set of pre-images in M, has a dense set of pre-images.

Suppose that x∈ M is a non-periodic point that does not have a dense orbit. For such points we consider ω (x). If int(ω (x))≠∅ then by proposition <ref>, ∪_n∈ℕf^-n(ω(x)) is dense in M, and by proposition <ref> the set of pre-images of x is dense in M.
If int(ω (x))=∅, then by a procedure like that in the proof of Theorem <ref>, considering the pre-images of ω(x) instead of the pre-images of a fixed point, and again by proposition <ref>, the set of pre-images of x is dense in M.

To sum up, by propositions <ref>, <ref> and <ref> and theorem <ref> we have:

Let f:M→ M be a transitive Anosov endomorphism of degree greater than one such that for every point x∈ M the geodesics defined by the eigenvectors of Df_x are dense in M. Then for every point the set of pre-images is dense in the manifold.

According to this and the proof of theorem <ref>, for product manifolds we also have:

Let f:M→ M and g:N→ N be transitive Anosov endomorphisms such that the pre-images of any point in M and any point in N, respectively under f and g, are dense in M and N. Then the pre-images of any point in M× N under (f,g) are dense in M× N.

For every open set U⊂ M, V⊂ N and every (p,q)∈ M× N: since ∪_n∈ℕf^-n(p) is dense in M, there is y_1∈(∪_n∈ℕf^-n(p))∩ U and, in the same way, for (y_1,q)∈{y_1}× N there is (y_1,y_2)∈(∪_n∈ℕ({y_1}× g^-n(q)))∩({y_1}× V). So for any open set U× V there exists a point (y_1,y_2) in (∪_n∈ℕ(f,g)^-n((p,q)))∩ (U× V).

For example, for the product of the doubling map on S^1 and the map B of example <ref>, the set of pre-images of any point in 𝕋^3 is dense in the manifold. In this way we can define a collection of examples and non-examples by taking product spaces of expanding maps on S^1 and Anosov diffeomorphisms and endomorphisms on arbitrary manifolds.

But what can be said about non-transitive Anosov endomorphisms? If we consider an endomorphism f:M→ M, then according to Lemma <ref> and proposition <ref>, we should first find the subsets of M on which f is transitive. In this matter, considering just the forward orbits of points in M, we have the Smale–Bowen spectral decomposition theorem, introduced for hyperbolic endomorphisms by Sakai: there are subsets containing points whose orbits are dense in those subsets. Denote the non-wandering set of f by Ω; we have:

[Smale-Bowen Spectral Decomposition Theorem] <cit.> Let f:M→ M be an endomorphism such that f(Ω)=Ω and f:Ω→Ω is an Anosov endomorphism. Then there is a decomposition of Ω into disjoint closed sets P_1∪ P_2…∪ P_s such that:

* Each P_i is f-invariant and f restricted to P_i is topologically transitive.
* There is a decomposition of each P_i into disjoint closed sets X_1,i∪ X_2,i∪…∪ X_n_i,i such that f(X_j,i)=X_j+1,i for 1≤ j≤ n_i-1, f(X_n_i,i)=X_1,i, and the map f^n_i:X_j,i→ X_j,i is topologically mixing.

The P_is (i=1,2,…,s) introduced above are called the basic sets of f. If the degree of f is k, then there are k pre-images of each P_i, and for every point p∈ P_i its set of pre-images is a subset of ∪_n∈ℕ f^-n(P_i), where, in the notation of remark <ref>, f^-n(P_i)=∪_j_1,…,j_nP_i(j_1,…,j_n). If there is more than one basic set, then, considering the f|_P_is, according to Lemma <ref> and proposition <ref> the set of points with dense set of pre-images is dense in the set of pre-images ∪_n∈ℕ f^-n(P_i) of each P_i, i=1,2,…,s. But the set of pre-images of P_i cannot be dense in M, because the P_is are f-invariant: if x∈ P_i then O_x⊂ P_i, and if there were y∈ f^-1(x)∩ P_j (j≠ i), then x=f(y)∈ P_j, which is a contradiction. Hence we have:

Let f:M→ M be a hyperbolic endomorphism such that Ω(f)=P_1∪ P_2∪…∪ P_s with s>1. Then there are no points with a dense set of pre-images in M.

Thus, according to theorem <ref>, corollary <ref> and the proposition above, we have the proof of theorem <ref>, our main theorem.
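The appendix below illustrates this density numerically in MATLAB; the same computation can be sketched in a few lines of Python/NumPy (our own hypothetical code, written for the matrix B=[ 3 1; 1 1 ] of example <ref>, which has det B=2 and hence 2^n pre-images of a point under B^n):

```python
import numpy as np

B = np.array([[3.0, 1.0], [1.0, 1.0]])      # det B = 2: each point has 2 pre-images
Binv = np.linalg.inv(B)

def preimages(points):
    """One backward step: all x in [0,1)^2 with B x = p (mod 1), for each p."""
    out = set()
    for p in points:
        for k1 in range(4):                  # enough integer shifts to hit every coset
            for k2 in range(4):
                x = np.mod(Binv.dot(np.array(p) + np.array([k1, k2])), 1.0)
                out.add((round(float(x[0]), 9), round(float(x[1]), 9)))
    return out

pts = {(0.0, 0.0)}
for _ in range(5):                           # pre-images of (0,0) under B^5
    pts = preimages(pts)
print(len(pts))                              # 2^5 = 32 increasingly dense points
```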
§ APPENDIX

Using the program MATLAB, we have calculated and plotted the pre-images of the point (0,0)∈[0,1]×[0,1] under B^5, B^10 and B^15, respectively, for the linear endomorphism B=[ 3 1; 1 1 ] of Example <ref>. It shows that for each ϵ there is n such that B^-n((0,0)) is ϵ-dense in [0,1]×[0,1]. Hence the set containing all the pre-images of the point is dense in 𝕋^2.

[Figures: pre-images of (0,0) under B^5, B^10 and B^15.]

[stock] M. Brin, G. Stuck, Introduction to Dynamical Systems, Cambridge University Press, 2003.
[Franks] J. Franks, Anosov diffeomorphisms, in Global Analysis (Proc. Sympos. Pure Math., Vol. 14, Berkeley, Calif., 1968), Amer. Math. Soc., Providence, R.I., 1970, 61–93.
[LPV] C. Lizana, V. Pinheiro, P. Varandas, Contribution to the ergodic theory of robustly transitive maps, Discrete Contin. Dynam. Systems 35 (2015), no. 1.
[lizana] C. Lizana, E. Pujals, Robust transitivity for endomorphisms, Ergodic Theory Dynam. Systems (2013), 1082–1114.
[manepugh] R. Mañé, C. Pugh, Stability of endomorphisms, in Warwick Dynamical Systems, 1974, 175–184.
[MT] F. Micena, A. Tahzibi, On the unstable directions and Lyapunov exponents of Anosov endomorphisms, Fund. Math. 235 (2016), no. 1, 37–48.
[przytycki] F. Przytycki, Anosov endomorphisms, Studia Mathematica (1976), 249–285.
[sakai] K. Sakai, Anosov maps on closed topological manifolds, J. Math. Soc. Japan 39 (1987), no. 3, 505–519.
[Shub] M. Shub, Global Stability of Dynamical Systems, Springer-Verlag, 1987.
[Spatzier] R. Spatzier, On the work of Rodriguez Hertz on rigidity in dynamical systems, Journal of Modern Dynamics 10 (2016), 191–207.
[W] L. Wen, Differentiable Dynamical Systems: An Introduction to Structural Stability and Hyperbolicity, Graduate Studies in Mathematics 173, American Mathematical Society, 2016.
http://arxiv.org/abs/1702.08167v4
{ "authors": [ "Mohammad saeed Azimi", "Khosro Tajbakhsh" ], "categories": [ "math.DS" ], "primary_category": "math.DS", "published": "20170227073419", "title": "Topology of pre-images under Anosov endomorphisms" }
Kiefer-Wolfowitz Algorithm is Asymptotically Efficient for a Class of Non-Stationary Bandit Problems Rahul Singh and Taposh Banerjee Rahul Singh is with the Laboratory of Information and Decision Systems (LIDS), Massachusetts Institute of Technology, Cambridge, MA 02139, USA; Taposh Banerjee is with SEAS, Harvard University, Cambridge, MA. rsingh12@mit.edu, tbanerjee@seas.harvard.edu. December 30, 2023 ================================================================================================ We consider the problem of designing an allocation rule or an “online learning algorithm" for a class of bandit problems in which the set of control actions available at each time s is a convex, compact subset of ℝ^d. Upon choosing an action x at time s, the algorithm obtains a noisy value of the unknown and time-varying function f_s evaluated at x. The “regret" of an algorithm is the gap between its expected reward and the reward earned by a strategy which has knowledge of the function f_s at each time s and hence chooses the action x_s that maximizes f_s. For this non-stationary bandit problem set-up, we consider two variants of the Kiefer-Wolfowitz (KW) algorithm: i) KW with fixed step-size β, and ii) KW with sliding window of length L. We show that if the number of times that the function f_s varies during time T is o(T), and if the learning rates of the proposed algorithms are chosen “optimally", then the regret of the proposed algorithms is o(T), and hence the algorithms are asymptotically efficient. § INTRODUCTION The Multi-Armed Bandit problem (MABP) requires a player to play an arm at each time s=1,2,… from a set of arms. If X_s denotes the arm played at time s, then the player receives a random reward at time s, the distribution of which depends on X_s. The objective of the player is to maximize the expected value of the cumulative reward collected over a period of time T. The player does not know the mean value of the random reward as a function of the choice x, and hence the control action corresponding to the choice of arm to be played needs to balance an exploration-exploitation trade-off. This paper is concerned with a particular class of bandit problems in which the control actions available to the player can be mapped to a convex compact subset of ℝ^d, i.e., the continuum bandit problem <cit.>, in which the mean reward of the arms is non-stationary. The addition of non-stationarity to the MABP adds to the complexity of the exploration-exploitation dilemma, since now the player's belief about the mean reward of an arm cannot depend upon past data that is “too old", because the reward distribution of the arms might have changed since the time that information was collected. Thus, the learning rate of the player has to be suitably adapted to the rate of change of the mean reward function. § KIEFER WOLFOWITZ ALGORITHM Let 𝒟 be a compact and convex subset of ℝ^d. The original KW algorithm was designed in the context of maximizing a fixed function by obtaining noisy samples of the function values. We begin by describing the KW algorithm for the case when the function f:𝒟→ℝ to be optimized is fixed. The maximizer of f is denoted θ(f)∈𝒟. The vanilla version of the KW algorithm maintains, at each time-step s, an estimate of the function maximizer, denoted X_s=(X_s(1),X_s(2),…,X_s(d)).
It then makes an estimate of the derivatives (∇ f)_X_s(i) of the unknown function f by sampling the function values at the points X_s+c_s e(i), i=1,2,…,d and X_s-c_s e(i), i=1,2,…,d, where e(i) is the unit vector with 1 in the i-th place. Let F^+_s(i), F^-_s(i) be the noisy values of the function at X_s+c_s e(i) and X_s-c_s e(i) respectively. Denote by Y_s the estimated value of the derivative of the function f at X_s. If Y_s=(Y_s(1),Y_s(2),…,Y_s(d)) is an estimate of ∇ f at X_s, we then have that Y_s(i) = (F^+_s(i)-F^-_s(i))/(2c_s), i=1,2,…,d, where Y_s(i) is an estimate of (∇ f)(i), i.e., the i-th component of the gradient at X_s. Once an estimate of the derivative of f at X_s has been made, the KW algorithm then updates the estimate of the maximizer as follows: X_s+1 = X_s + β_s Y_s, where β_s is called the learning rate. Typically the step sizes are chosen as β_s = s^-1/2, c_s = s^-1/4. A detailed description of the KW algorithm can be found in <cit.>. § PAST WORKS AND CONTRIBUTIONS A survey of the results in the MABP literature can be found in <cit.>. <cit.> is the first work to consider the continuum bandit problem. The KW algorithm was introduced in <cit.>, and since then its convergence rate and the asymptotic distribution of its estimates have been established <cit.>. However, we note that in general the asymptotic convergence rate of an algorithm does not imply regret bounds. <cit.> performs a regret analysis for the KW algorithm when the function is kept constant. In contrast with the work in <cit.>, we consider the non-stationary set-up in which the distribution of the reward sequence, or the unknown function to be maximized, changes over time. A regret analysis in this case amounts to controlling the performance of the algorithm over all possible sequences of functions {f_s}_s=1^T, f_s∈𝒞. We analyze two popular variants of the KW algorithm in the context of non-stationary function maximization: i) KW with constant step-size β, where β is the “learning rate", and ii) KW with sliding window of length L, also called the “memory length". We impose restrictions on the class 𝒞 of allowable functions, and obtain bounds on the regret of the KW_β, KW_L algorithms in terms of the degree of non-stationarity, i.e. the quantity Δ_T/T, where Δ_T is the number of times that the function f_s being sampled changes until time T. We obtain the optimal learning rate β^⋆ and window length L^⋆ in terms of Δ_T/T. We then show that if these KW variants use the optimal β^⋆ (resp. L^⋆), then they are asymptotically efficient, i.e., their cumulative regret is o(T) whenever lim_T→∞Δ_T/T=0. § NON-STATIONARY FUNCTION MAXIMIZATION AND REGRET At each time s=1,2,…, an allocation rule 𝒜 chooses a control action X_s∈𝒟⊂ℝ^d. We assume that 𝒟 is convex and compact. The (random) reward earned at time s is then equal to F_s. If 𝒜 chooses the action x, then the distribution of the reward at time s, i.e., of F_s, is given by G(·,x,f_s(x)), and the mean value of the reward earned is f_s(x), i.e., 𝔼{F_s | X_s = x} = f_s(x). We assume that the functions f_1,f_2,… belong to a function class 𝒞 and that, for each f∈𝒞, f(x) is bounded for all x∈𝒟. Equivalently, the algorithm 𝒜 obtains a “noisy version" of the true function f_s(·) evaluated at x. At time s, Algorithm 𝒜 observes its control action X_s and the reward F_s; however, it does not observe the function f_s.
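To fix ideas, here is a minimal Python sketch (ours, not the authors' code) of this interaction, using the classical KW recursion recalled above as the allocation rule; f_s is held fixed here, so the decaying step sizes are appropriate:

```python
import numpy as np

rng = np.random.default_rng(0)
theta = np.array([0.4, -0.2])
f = lambda x: -np.sum((x - theta) ** 2)               # an (assumed) member of C
noisy = lambda x: f(x) + 0.1 * rng.standard_normal()  # reward drawn around f(x)

d = 2
x = np.zeros(d)
for s in range(1, 5001):
    beta, c = s ** -0.5, s ** -0.25                   # beta_s = s^{-1/2}, c_s = s^{-1/4}
    Y = np.zeros(d)
    for i in range(d):
        e = np.zeros(d); e[i] = 1.0
        Y[i] = (noisy(x + c * e) - noisy(x - c * e)) / (2 * c)  # Y_s(i)
    x = x + beta * Y                                  # X_{s+1} = X_s + beta_s Y_s
print(x)  # approaches theta(f) when f is stationary
```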
The control algorithm/allocation rule 𝒜, for each time s, maps the history {X_n,F^+/-_n}_n=1^s-1 to an action x∈𝒟. Denote by f_[1:s] the sequence of functions f_1,f_2,…,f_s, and for a function f, denote by θ(f) the value of x that maximizes f. The total regret accumulated by an algorithm 𝒜 until time step T is then defined to be ℛ(T,𝒜,f_[1:T]) = 𝔼{∑_s=1^T f_s(θ(f_s)) - f_s(X_s)}, where the expectation is taken with respect to the probability measure induced by the control algorithm 𝒜, which makes the choice of the sampling sequence {X_s}_s=1^T and the observations {F^+/-_s}_s=1^T. We will be interested in the worst-case regret of the algorithm 𝒜, i.e., the quantity ℛ(T,𝒜) = sup_{f_[1:T]: f_s∈𝒞 ∀ s∈[1,T]} ℛ(T,𝒜,f_[1:T]). The control algorithm 𝒜 is asymptotically efficient <cit.> if lim sup_T→∞ ℛ(T,𝒜)/T = 0. Next, we impose some restrictions on the allowable function class 𝒞 that will enable us to obtain meaningful bounds on the regret. § ASSUMPTIONS ON THE FUNCTION CLASS 𝒞 We now make certain assumptions on the function class 𝒞 from which the functions f_s, s=1,2,… are chosen. This allows us to obtain non-trivial bounds on the regret (<ref>). The conditions mentioned below are mostly taken from <cit.>. Let f∈𝒞. Then f is three times continuously differentiable for all x∈𝒟, and there exist positive constants K_1,K_2 such that the following hold for all x∈𝒟: (x-θ(f))^⊺∇ f(x) ≤ -K_1‖x-θ(f)‖^2, ‖∇ f(x)‖ ≤ K_2‖x-θ(f)‖. We refer to these conditions as the Concavity-Like Condition (CL) and Linearly Bounded Growth Rate (LBG) respectively. There exists K_3>0 such that for all f∈𝒞 and x∈𝒟, f(θ)-f(x) ≤ K_3‖x-θ(f)‖^2. We refer to this condition as the Quadratically Bounded (QB) condition. Other than the various “smoothness" criteria that we assumed on the function f, we also need to ensure that the sampling noise is sufficiently well-behaved. We impose a uniform bound on the noise variance at each sample point, i.e., ∫ (y-f(x))^2 g(y;x,f(x))dμ(y)<σ^2, ∀ f∈𝒞, x∈𝒟, where μ is a σ-finite measure on ℬ(ℝ), the Borel sets of ℝ, and g(·;x,u) is the density of the random reward earned when the control action is x and the mean value of the reward is f(x) = u. Examples of function classes 𝒞 which satisfy the above stated conditions can be found in <cit.>. We now state the KW algorithm with fixed step-sizes, i.e., β_s≡β and c_s≡ c. The following assumption on the function class 𝒞 is in the spirit of the Mean Value Theorem. Let M_s(x) := 𝔼((F^+_s-F^-_s)/(2c) | X_s=x, f_s=f). If the parameter c is chosen to be sufficiently small, M_s(X_s) = ∇ f(X_s+ϵ_X_s), where ‖ϵ_X_s‖ < ϵ, and moreover ϵ<c^2. § VARIANTS OF KIEFER-WOLFOWITZ ALGORITHM FOR NON-STATIONARY BANDIT OPTIMIZATION We describe two variants of the basic KW algorithm that are used when the function f of interest is time-varying. Throughout, for two functions a(t),b(t) we denote a(t)=o(b(t)) if lim sup_t→∞ a(t)/b(t)=0. §.§ KW with fixed step-size β (KW_β) The KW algorithm with fixed step size has been discussed in <cit.>. It keeps the step-sizes β_s,c_s constant instead of slowly decaying them to 0. Since the parameter β_s corresponds to the “learning" rate, the resulting algorithm places smaller weights on past samples, and hence “eventually forgets the past estimates". KW with fixed step-size is stated as follows: Let β and c be “small" positive constants. The estimate of the optimal point at time s evolves as X^i_s+1 = X^i_s + β(F^+_s-F^-_s)/(2c), where F^+_s, F^-_s are the measurement values at X_s ± ce, and e=(1,1,…,1).
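A one-step sketch of this update (ours); note the single two-point probe along e=(1,…,1) and the constant β and c:

```python
import numpy as np

def kw_beta_step(noisy, x, beta, c):
    """One step of KW_beta: probe the noisy function once along e = (1,...,1)
    and apply the same scalar increment to every coordinate."""
    e = np.ones_like(x)
    y = (noisy(x + c * e) - noisy(x - c * e)) / (2 * c)
    return x + beta * y * e
```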
Henceforth, we will assume that the parameter c has been chosen to be sufficiently small so that Condition <ref> is satisfied. §.§ KW with Sliding Window of length L (KW_L) In the second variant of the KW algorithm, we fix an integer L>0, which is called the “window length" or “memory size". At each time s, the algorithm uses only the latest L function measurements in order to choose the action X_s. This is called KW with sliding window of length L, denoted KW_L. In the below, X_0∈𝒟 has been chosen at time s=0. At each time s=1,2,…, the KW_L algorithm utilizes the estimates of the derivatives at the past L sample values {X_n}_n=s-L^s-1, and chooses the action X_s according to X_s = X_0 + ∑_n=1^min{L,s} β_n Y_{n+s-L}, where β_n = n^{-1/2}, c_n = n^{-1/4}, and Y_n is the estimate of the derivative at X_n given by (<ref>). Thus, the algorithm behaves as if at each time s the original KW algorithm (<ref>)-(<ref>) restarts with an initial value of X_0, and the estimate of the maximizer gets updated L times. Since the sample values F^+/-_s that have been obtained at time s will not be utilized for generating actions X_s̃, s̃>s+L, the algorithm “forgets" samples that are “older" than L time units. This finite memory property enables it to adapt to a non-stationary function. §.§ Trade-off in choosing learning rates β, L The step-size β corresponds to the learning rate of the KW_β algorithm, while the window length L corresponds to the “memory" of the KW_L algorithm. Due to the non-stationarity of the function f_s, there is a fundamental trade-off involved in choosing these parameters. If we have f_s≡ f, then choosing a large value of L leads to better convergence of the iterates to θ(f). However, when f_s is time-varying, a large value of L will introduce a dependence of the current estimate X_s on the past values of f_t, t<s. Since f_t may not be equal to f_s, L must be chosen appropriately in order to achieve a trade-off between the twin objectives of achieving a low regret while simultaneously adapting to the changing function f_s.
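The window mechanics can be sketched as follows; this is a simplified sketch (ours) of the recursion above, in which the probe width is frozen rather than decaying as c_n = n^{-1/4}:

```python
import numpy as np
from collections import deque

def kw_sliding_window(noisy, x0, T, L, c=0.1):
    """Simplified KW_L: keep only the last L derivative estimates and replay
    the decaying-step recursion from X_0 at every time s."""
    d = len(x0)
    window = deque(maxlen=L)            # the L most recent estimates Y_n
    x = np.array(x0, dtype=float)
    for s in range(1, T + 1):
        Y = np.array([(noisy(x + c * e) - noisy(x - c * e)) / (2 * c)
                      for e in np.eye(d)])
        window.append(Y)
        # restart from X_0 and apply beta_n = n^{-1/2} across the window
        x = x0 + sum(n ** -0.5 * g for n, g in enumerate(window, start=1))
    return x
```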
§ KW_β PRELIMINARY RESULTS FOR THE STATIONARY CASE, f_s≡ f In this section we present some results that will be used in later sections in order to perform a regret analysis of the two variants of the KW algorithm that have been introduced. Throughout this section we will assume that the function f being sampled is kept fixed, i.e., f_s≡ f, and θ is its maximizer. We begin by imposing a couple of conditions that are specifically utilized for analyzing the KW_β algorithm. [Uniform locally Lipschitz] For x,y∈𝒟 satisfying ‖x-y‖≤ϵ, we have ‖∇ f(x)-∇ f(y)‖ ≤ K_4‖x-y‖, ∀ f∈𝒞. [Condition on step-size β] The step size β is chosen as β=c^{2/(1-α)}, where α∈(0,1). Let us now write the update equation (<ref>) in more detail. We note that 𝔼(F_s|X_s=x) = f(x); moreover, the distribution of F_s conditioned on the action X_s=x is denoted G(·;x,f_s(x)), and thus the noise distribution depends both on the value of the sampled point x and on the value of the function f_s(x). We denote the following: Y_s = (F^+_s-F^-_s)/(2c), M_s(X_s) := 𝔼(Y_s|ℱ_s) = (1/2c) d(X_s,c), Z_s = Y_s - 𝔼(Y_s|ℱ_s-1), where d(x,c) denotes the vector of differences evaluated at x with a step-size of c, and Z_s is the noise in the observation of the derivative. The recursion (<ref>) can thus equivalently be re-written as X_s+1 = X_s + β(M_s(X_s)+Z_s). From the recursion (<ref>), i.e., X_s+1 = (X_s + β M_s(X_s)) + βZ_s, we have that ‖X_s+1-θ‖^2 = ‖X_s-θ‖^2 + β^2‖M_s(X_s)‖^2 + 2β(X_s-θ)^⊺ M_s(X_s) + β^2‖Z_s‖^2 + 2β Z_s^⊺(X_s-θ+β M_s(X_s)). Next, we use the conditions imposed on 𝒞 and obtain a simple-to-analyze recursion for the quantity 𝔼‖X_s-θ‖^2. If Conditions 1, 3 and 4 hold true, then for the recursion (<ref>) we have that 𝔼{‖X_s+1-θ‖^2|ℱ_s} ≤ γ‖X_s-θ‖^2 + H(β), where γ := 1-2β K_1+2β^2 K_2^2 < 1, H(β) := β^2σ̃^2/c^2 + 2KK_4βϵ + 2β^2K_2^2ϵ^2, K is the diameter of the set 𝒟, the step-size β is chosen to be sufficiently small in order that γ<1, and σ̃^2 := 4dσ^2. The term β^2‖M_s(X_s)‖^2 can be bounded as follows: β^2‖M_s(X_s)‖^2 = β^2‖∇ f(X_s+ϵ_X_s)‖^2 ≤ β^2K_2^2‖X_s+ϵ_X_s-θ‖^2 ≤ β^2K_2^2(‖X_s-θ‖+ϵ)^2 ≤ 2β^2K_2^2(‖X_s-θ‖^2+ϵ^2), where the first equality follows from Condition <ref>, the first inequality follows from the inequality (<ref>) of Condition <ref>, the second inequality follows from the triangle inequality, and the last inequality follows since for x,y∈ℝ we have (x+y)^2 ≤ 2(x^2+y^2). Next, we have (X_s-θ)^⊺M_s(X_s) = (X_s-θ)^⊺∇ f(X_s+ϵ_X_s) = (X_s-θ)^⊺∇ f(X_s) + (X_s-θ)^⊺(∇ f(X_s+ϵ_X_s)-∇ f(X_s)) ≤ -K_1‖X_s-θ‖^2 + KK_4ϵ, where the first equality follows from Condition <ref>. For the last inequality, the bound on the first term follows from (<ref>), while that on the second term follows from the Cauchy-Schwarz inequality used in conjunction with Condition <ref>. Next, it follows from (<ref>) that the expectation of 2β Z_s^⊺(X_s-θ+β M_s(X_s)) conditioned on ℱ_s-1 is 0. Also, from Condition <ref> we have that β^2𝔼(‖Z_s‖^2|ℱ_s-1) ≤ β^2·4dσ^2/c^2 = β^2σ̃^2/c^2, since the random variable Z_s conditioned on the filtration ℱ_s-1 is the noise in the current estimate of the function gradient, and we have imposed a uniform bound on the variance of this noise. This yields 𝔼(β^2‖Z_s‖^2 + 2β Z_s^⊺(X_s-θ+β M_s(X_s))|ℱ_s-1) ≤ β^2σ̃^2/c^2. The proof is now completed by substituting the inequalities (<ref>), (<ref>) and (<ref>) into the expression (<ref>) and letting γ = 1-2β K_1+2β^2K_2^2 and H(β) be as in (<ref>). §.§ Regret Analysis with fixed f Taking the unconditional expectation in the expression (<ref>) and solving the ensuing recursion, we obtain 𝔼‖X_s-θ‖^2 ≤ H(β)(1-γ^s)/(1-γ) + ‖x_0-θ‖^2γ^s. It follows from Condition <ref> that the regret at time s, i.e., the quantity f(θ)-f(X_s), can be bounded in terms of the distance ‖X_s-θ‖^2: 𝔼[f(θ)-f(X_s)] ≤ K_3(H(β)(1-γ^s)/(1-γ) + ‖x_0-θ‖^2γ^s). Thus, we see that the instantaneous regret at time s, or equivalently the “distance" of the current estimate X_s from the optimal point θ, can be decomposed into the following two components: * Regret due to incomplete learning: the quantity K_3‖x_0-θ‖^2γ^s, which is the error between the current estimate X_s and the true maximizer θ. Note that for a fixed value of γ, this component decreases with increasing s, so that the KW_β algorithm improves its estimate of θ as it obtains more information about the function f with time. * Regret due to a noisy estimate of ∇ f: K_3H(β)(1-γ^s)/(1-γ), resulting from noisy measurements of the gradients ∇ f(x). Note that if the step-size β were allowed to decay as in (<ref>), then the noise would “average out" and its limiting contribution would be 0 almost surely. The regret decomposition (<ref>) throws light on the fundamental trade-off presented in the non-stationary setting.
The contribution of 2) is increasing in the learning-rate β. Indeed, if the function were stationary, i.e., f_t ≡ f, one could asymptotically “stop learning" by letting β_t → 0, so that 2) would vanish. Due to non-stationarity, β has to be kept constant at a “small value". However, for small values of β, from (<ref>) we have γ ≈ 1-2K_1β, so that a small β implies a larger learning regret, i.e., the algorithm takes a long time to learn the function's maximum. Thus, the “optimal" choice of β amounts to obtaining an optimal trade-off between the components 1) and 2) of the instantaneous regret. We will now evaluate the expressions for each of these regret terms. Consider the allocation rule (<ref>), i.e., KW with constant step-size β, applied to find the maximizer of an unknown function f∈𝒞. Let the time-horizon be fixed at T, and let the function class 𝒞 and step-size β satisfy Conditions 1-6. The cumulative regret incurred during the period {1,2,…,T} can be upper-bounded as 𝔼(∑_s=1^T f(θ)-f(X_s)) ≤ K_3H(β)T/(1-γ) + ‖X_0-θ‖^2 K_3/(1-γ). Consider the learning rate β^⋆ = Λ/T^{1/(2+α)}, where Λ = (K^2/σ̃^2)^{1/(2+α)} is a constant that depends upon the function class 𝒞, and σ̃^2=4dσ^2. The regret incurred by KW_β^⋆ is then upper-bounded as ℛ(T,KW_β^⋆)/T ≤ (K_3/2K_1)(Λ^α T^{-1/(2+α)} + 2KK_4Λ^α T^{-1/(2+α)} + 2Λ^3T^{-3/(2+α)} + (K^2/Λ)T^{-(1+α)/(2+α)}), and hence we have that lim sup_T→∞ ℛ(T,KW_β^⋆)/T = 0. We note that since, from Condition <ref>, for each f∈𝒞 the regret f(θ)-f(X_s) can be bounded within a factor of K_3 by ‖X_s-θ‖^2, the rest of the discussion will be focused on bounding the latter term, and we will occasionally call it the “regret" or “estimation error". The instantaneous regret at time s is bounded as in (<ref>). The contribution of the term H(β)(1-γ^s)/(1-γ) is upper-bounded by H(β)/(1-γ), so that the cumulative regret due to the first term of (<ref>) is bounded by H(β)T/(1-γ). Also, ∑_s=0^T ‖x_0-θ‖^2γ^s = ‖x_0-θ‖^2(1-γ^T)/(1-γ) ≤ K^2/(1-γ), where K is the diameter of the set 𝒟. This yields the bound (<ref>). The proof of the regret bound (<ref>) follows by substituting the value of β^⋆ from (<ref>) and γ, H(β) from (<ref>),(<ref>) into the bound (<ref>) and performing simple algebraic manipulations. § REGRET ANALYSIS OF KW_β FOR THE NON-STATIONARY CASE We begin by introducing some notation. Since the function f_s changes with time, let us denote by τ_1,τ_2,… the times at which the function changes. We will denote the set {x,x+1,…,y} by [x,y]. Thus, for each of the individual “episodes" comprising the time intervals [0,τ_1],[τ_1+1,τ_2],[τ_2+1,τ_3],…, we have that f_τ_i=f_{τ_i+1}=⋯=f_{τ_{i+1}-1}. Also denote by Δ_T the number of episodes until time T. For a function f∈𝒞, let θ(f) be the value of x that maximizes the function f. Let θ_s denote the maximizer of the function f_s. Thus, if s∈[τ_i+1,τ_{i+1}], then θ_s=θ_τ_i=θ(f_τ_i). We will denote by θ_[1:T] the sequence θ_1,θ_2,…,θ_T, and similarly for f_[1:T]. Next, we will perform a sample-path performance analysis of the KW_β algorithm. Thus, fix a sequence f_[1:T] with the corresponding θ_s sequence given by θ_[1:T]=θ_1,θ_2,…,θ_T. Moreover, for each episode i=1,2,…,Δ_T, denote by T_i := τ_{i+1}-τ_i the “episode length" or horizon length of episode i.
Since the cumulative regret incurred over the time horizon T can be decomposed into the sum of the regrets incurred during the individual episodes composed of time intervals {[τ_i,τ_{i+1}-1]}_{i=1}^{Δ_T}, the regret incurred by KW_β is then equal to 𝔼∑_s=1^T f_s(θ_s) - f_s(X_s) = ∑_{i=1}^{Δ_T} 𝔼[𝔼{∑_{s=τ_i}^{τ_{i+1}-1} f_τ_i(θ_τ_i)-f_τ_i(X_s) |ℱ_τ_i}], where ℱ_s is the filtration generated by the random variables {(X_n,Y_n,F^+/-_n)}_n=1^s. We now analyze the regrets incurred during the interval [τ_i,τ_{i+1}-1]. We will work with the distance ‖X_s-θ_s‖^2 in lieu of f_s(θ_s)-f_s(X_s), with the understanding that the regret can be upper-bounded within a constant factor of the former by using Condition <ref>. Since during episode i the function f being sampled and its maximizer θ(f) are equal to f_τ_i, θ_τ_i respectively, and the inequality (<ref>) holds for all f∈𝒞, we can use the bound (<ref>). Thus, the regret incurred during the i-th episode can be bounded by utilizing the bound (<ref>) developed in Lemma <ref>. However, the term X_0 will be replaced by the quantity ‖X_τ_i-θ_τ_i‖ to account for the difference between the estimate X_τ_i at the beginning of episode i and the true maximizer θ_τ_i during episode i. Similarly, the horizon T will be replaced by the episode length T_i. This yields 𝔼{∑_{s=τ_i}^{τ_{i+1}-1} ‖X_s-θ_τ_i‖^2 |ℱ_τ_i} < H(β)T_i/(1-γ) + ‖X_τ_i-θ_τ_i‖^2/(1-γ) ≤ H(β)T_i/(1-γ) + K^2/(1-γ), where the second inequality follows since we can bound the distance ‖X_τ_i-θ_τ_i‖ by the diameter of the set 𝒟, i.e., K. Combining the above bound with the tower property of conditional expectations (<ref>), we obtain the following result. Consider the problem of designing an optimal allocation rule for the non-stationary set-up, and for each time s=1,2,…,T, let the function f_s∈𝒞. Let the function class 𝒞 satisfy Conditions 1-5. The regret incurred by the KW_β algorithm during the time horizon T is upper-bounded by ℛ(T,KW_β) ≤ H(β)K_3T/(1-γ) + K^2Δ_TK_3/(1-γ), so that with the learning rate β set equal to β^⋆ = Λ(Δ_T/T)^{1/(2+α)}, where Λ = (K^2/σ̃^2)^{1/(2+α)}, we have that (ℛ(T,KW_β^⋆)/T)·(2K_1/K_3) ≤ Λ^α(T/Δ_T)^{-1/(2+α)} + 2KK_4Λ^α(T/Δ_T)^{-1/(2+α)} + 2Λ^3(T/Δ_T)^{-3/(2+α)} + (K^2/Λ)(T/Δ_T)^{-(1+α)/(2+α)}, so that if Δ_T=o(T), we have lim sup_T→∞ ℛ(T,KW_β^⋆)/T = 0. § REGRET ANALYSIS OF KW WITH SLIDING WINDOW We begin with the case where the function is held fixed at f_s≡ f, and the time-horizon is fixed at T. Let L denote the length of the window, and θ the maximizer of f. Next, we can apply Chung's Lemma as in Lemma III.5 of <cit.> in order to analyze the asymptotic properties of the distance ‖X_s-θ‖. Let the function class 𝒞 satisfy Conditions <ref>-<ref>. For KW with sliding window of length L applied to obtain the maximum of a stationary function f∈𝒞, the following is true. There exists an integer s_0>0 such that 𝔼‖X_s-θ‖^2 ≤ K_5/√(L), ∀ s>max{s_0, L}, where the constants K_5 and s_0 depend on the function class 𝒞 only through the values K_1,K_2,K_3. Throughout, we will assume that the window length L has been chosen so that it satisfies L>s_0, and hence the bound above can be written as 𝔼‖X_s-θ‖^2 ≤ K_5/√(L), ∀ s>L. Next, we consider the non-stationary set-up. Fix a sequence f_[1:T] and the corresponding θ_[1:T], and as before let Δ_T be the number of episodes until time T. Let us analyze the regret incurred during the i-th episode, which is of duration T_i = τ_{i+1}-τ_i.
Since the control action X_s generated at times s∈[τ_i+1,τ_{i+1}] is a function of the values {Y_n}_{n=s-L}^{s-1}, the regret bound (<ref>), which was derived for the stationary set-up, can now be applied only when s-(τ_i+1)>L, or equivalently s>L+τ_i+1. This gives us the following. Let the KW_L algorithm be applied to the non-stationary function maximization problem. Consider the process ‖X_s-θ‖ during episode i, which is comprised of the time interval [τ_i+1,τ_{i+1}]. If the episode length τ_{i+1}-τ_i is greater than L, then we have 𝔼{‖X_s-θ_τ_i‖^2 | ℱ_τ_i} ≤ K_5/√(L), ∀ s∈[τ_i+L,τ_{i+1}]. Thus, the total regret incurred during the i-th episode can be bounded as follows: 𝔼{∑_{s∈[τ_i+1,τ_{i+1}]} f_τ_i(θ_τ_i)-f_τ_i(X_s) | ℱ_τ_i} ≤ K_3(K_5(T_i-L)^+/√(L) + (L∧ T_i)K) ≤ K_3K_5T_i/√(L) + LK_3K, where K is the diameter of the set 𝒟, x^+ = max{x,0}, and for x,y∈ℝ, x∧ y=min(x,y). For the non-stationary bandit problem, the regret incurred by the KW_L algorithm during time period T can be bounded as ℛ(T,KW_L) ≤ K_3K_5T/√(L) + LK_3KΔ_T. The choice of L that minimizes the upper-bound is given by L^⋆ = (K_5/(2K) · T/Δ_T)^{2/3}, so that the regret under KW(L^⋆) is bounded as ℛ(T,KW(L^⋆))/T ≤ K_5^{2/3}K^{1/3}(Δ_T/T)^{1/3}[2^{1/3}+1/2^{2/3}]. Thus, if the number of episodes Δ_T = o(T), then we have lim sup_T→∞ ℛ(T,KW(L^⋆))/T = 0. The bound (<ref>) is obtained by utilizing the upper-bound (<ref>) on the regrets incurred during individual episodes, in conjunction with the tower property (<ref>) of conditional expectations. The rest of the proof involves simple algebraic manipulations and is omitted due to space constraints.
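To make the tuning concrete, here is a small sketch (ours) that evaluates the prescribed rates; K, K_5, σ̃ and α are placeholders standing in for the unknown problem constants:

```python
import numpy as np

def tuned_rates(T, Delta_T, K=1.0, K5=1.0, sigma_tilde=1.0, alpha=0.5):
    """beta* for KW_beta and L* for KW_L, as given by the theorems above."""
    Lam = (K ** 2 / sigma_tilde ** 2) ** (1.0 / (2 + alpha))
    beta_star = Lam * (Delta_T / T) ** (1.0 / (2 + alpha))
    L_star = (K5 / (2 * K) * T / Delta_T) ** (2.0 / 3)
    return beta_star, L_star

for T in (10 ** 4, 10 ** 6, 10 ** 8):
    print(T, tuned_rates(T, Delta_T=T ** 0.5))   # an example with Delta_T = o(T)
# beta* shrinks and L* grows as T/Delta_T grows, so both per-round regret
# bounds, of orders (Delta_T/T)^{1/(2+alpha)} and (Delta_T/T)^{1/3}, vanish.
```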
http://arxiv.org/abs/1702.08000v2
{ "authors": [ "Rahul Singh", "Taposh Banerjee" ], "categories": [ "stat.ML", "cs.LG" ], "primary_category": "stat.ML", "published": "20170226082411", "title": "Kiefer Wolfowitz Algorithm is Asymptotically Optimal for a Class of Non-Stationary Bandit Problems" }
Wen-An Li Simplified proposal for realizing multiqubit tunable phase gate in circuit QED1.6Department of Physics, School of Physics and Electronic Engineering, Guangzhou University, Guangzhou 510006, China We propose a scheme to realize multiqubit tunable phase gate in a circuit QED setup where two resonators each coupling with a qudit are interconnected to a common qudit (d=4). In this proposal, only two levels of each qudit serve as the logical states and other two levels are used for the gate realization. The proposal is efficient and simple because only a classical microwave pulse is needed, no matter how many qudits are involved, which significantly reduces experimental difficulty. In non-resonant case, the tunable phase gate can be achieved readily, while under the resonant condition a π-phase gate can be realized after a full cycle of Rabi oscillation where the gate speed is rather fast due to the resonant interaction. We have shown that the resulting effective dynamics allows for the creation of high fidelity phase gate. The influence of various decoherence processes such as the decay of the resonator mode, and the relaxation of the qudits is investigated. Moreover, the proposed scheme can be easily generalized to realize N-qubit phase gate.Simplified proposal for realizing multiqubit tunable phase gate in circuit QED Wen-An Li[E-mail: liwenan@126.com] and Yuan Chen December 30, 2023 ==============================================================================§ INTRODUCTIONQuantum computer holds promise that it owns the great power to solve classically intractable problems such as factoring a number <cit.> and searching a data in an array <cit.>.This is, in general, accomplished by performing specific unitary transformations on a set of quantum bits followed by measurement. The basic element of a computer is the logic gate, either in a classical computer or a quantum computer. Now it has been shown that one-qubit gates and two qubit controlled phase gates are universal for constructing a quantum computer, i.e. any multiqubit gates can be achieved by choosing appropriate set of these elementary gates. In practical quantum computing, the implementation of quantum algorithms and quantum error-correction protocols may involve multiqubit quantum gates <cit.>. As the number of the qubits increases, the procedure of decomposing multiqubit gates into several basic elementary gates becomes more and more complicated. It is necessary to develop a way to realize the multiqubit quantum gate directly. In particular, the multiqubit controlled phase gate which shifts the phase of only one of the state components is of great importance. This gate can be widely used in quantum algorithms <cit.>, quantum Fourier transform <cit.>, and quantum error correction  <cit.>. A number of theoretical schemes <cit.> have been proposed to implement the there-qubit quantum gate, and it has been demonstrated experimentally in nuclear magnetic resonance <cit.>, linear optics <cit.>, ion traps <cit.>, and circuit QED systems <cit.>. However, the controlled phase gates involving more than three qubits have not been experimentally implemented. Though an n-qubit controlled phase gate could be decomposed into the elementary one- and two-qubit gates, it requires much longer times and yields lower overall fidelities. For example, the Toffoli gate implemented with only single- and two-qubit gates requires six controlled-NOT gates and ten single-qubit operations <cit.>. 
Therefore, it is hard to realize phase gates involving more than three qubits in any system owing to current limits on coherence. Recently, several schemes, such as n control qubits acting on one target qubit <cit.>, or one control qubit simultaneously controlling n target qubits, based on cavity QED or circuit QED <cit.>, have been proposed. For example, Yang et al. <cit.> present a way to realize an n-qubit controlled phase gate with superconducting quantum-interference devices (SQUIDs) by coupling them to a superconducting resonator. The implementation of the three-qubit phase gate requires seven operational steps, as well as adjusting the level spacings of the SQUID to couple the corresponding energy levels. In their proposal, the number of steps required for the n-qubit controlled phase gate is 2n+1, which makes the experimental procedure complicated and difficult to perform as the number of qubits increases. Zhang et al. <cit.> proposed a scheme for one-step implementation of an n-qubit controlled-phase gate in a superconducting quantum interference device system. The scheme relies on n SQUID qubits simultaneously and nonidentically coupling to a resonator mode and to microwave pulses, which requires individual addressing of each qubit. This means that n classical microwave fields are needed to drive the n qubits in the same resonator, which poses a challenge to present experimental conditions as the number of qubits increases. Due to the large detuning, the gate speed is limited to the order of μs. Moreover, the phase is not tunable; it is just a π-phase gate. Here, we propose a scheme for the realization of a multiqubit tunable phase gate in only one step. This scheme differs remarkably from others in that we employ quantum Zeno dynamics <cit.> and a distributed experimental setup where n-1 qubits are located in n-1 different resonators. Compared with previous proposals, our scheme has several advantages: (i) individual addressing of each qudit is not required and only a classical microwave pulse is needed to drive the central qudit A, which greatly loosens the requirements on the experimental conditions; (ii) the time needed to complete the gate can reach the order of nanoseconds, which is much faster than previous schemes <cit.>; (iii) in the non-resonant case, the phase is tunable: it can be adjusted by changing the Rabi frequency of the pulse applied to the target qubit, the detuning, and the interaction time. This paper is organized as follows. In Sec. II, we briefly introduce our model with four-level quantum systems coupled to resonators which are connected by a common coupler, and show how to realize the gate within such a system. In Sec. III, we give a brief discussion of the effectiveness of our model through numerical simulation. In Sec. IV, we generalize the model to the N-qubit case. A concluding summary is presented in Sec. V. § MODEL AND EFFECTIVE DYNAMICS We consider a system consisting of two resonators, each hosting a fluxonium qudit <cit.>, interconnected capacitively by a common fluxonium qudit A, as shown in Fig. <ref>(a). Each fluxonium qudit is biased properly to have four lowest levels, which are denoted by |f_j⟩, |s_j⟩, |g_j⟩ and |e_j⟩ (j=1,2,A), respectively [Fig. <ref>(b) and (c)]. For all the qudits, the resonator mode is off-resonant with the transition |e⟩↔|g⟩ while decoupled from the transition between any other two levels of the fluxonium qudits.
Here, g_j (j=1,2,A) is thecoupling strength between the resonator mode and the |e_j⟩↔|g_j⟩ transition. Δ is the detuning between the |e⟩↔|g⟩ transition. For the qudit A, the transition |e_A⟩↔|s_A⟩ is driven dispersively by a classical microwave pulse with Rabi frequency Ω and detuning Δ. In the interaction picture, the Hamiltonian of the whole system can be written as H=H_1+H_2,withH_1=Δ∑_i=1,2,A |e⟩_i ⟨ e|+(Ω |s⟩_A ⟨ e|+h.c.), H_2=g_1a_1^†|g⟩_1⟨ e|+g_2a_2^†|g⟩_2⟨ e|+g_A∑_i=1,2a_i^†|g⟩_A⟨ e|+h.c.,where H is the total interaction of the whole system, H_1,2 is the interaction between the qudits and the resonators (or classical microwave pulse), and a_j is the annihilation operator of resonator j. It is noted that |f_i⟩ is decoupled with the qudit-resonator interaction, thus it disappears in Eq.(<ref>).For simplicity, we assume g_j (j=1,2,A) and Ω are all real, and g_1=g_2=g_A=g. To implement the three qubit quantum phase gate, we here use the asymmetric encoding scheme. The logic states of qubit 1 and qubit 2 are represented by the state |f⟩ and |g⟩, while the logic states of qubit A are represented by |f⟩ and |s⟩. With this, the three qubit computational basis corresponds to {|f_1f_2f_A⟩,|f_1g_2f_A⟩,|f_1f_2s_A⟩,|f_1g_2s_A⟩,|g_1f_2f_A⟩,|g_1g_2f_A⟩, |g_1f_2s_A⟩,|g_1g_2s_A⟩}.First, we consider the case that the system is initially in the |f_1f_2s_A⟩|0,0⟩_c, where |0,0⟩_c denotes the vacuum state of the resonator mode 1 and 2 respectively. As a consequence, it will be constrained in the subspace spanned by {|f_1f_2s_A⟩|0,0⟩_c, |f_1f_2e_A⟩|0,0⟩_c, |f_1f_2g_A⟩|1,0⟩_c, |f_1f_2g_A⟩|0,1⟩_c}. In such a subspace, we can rewrite the Hamiltonian asH_1^' = Δ/2(-|ϕ_1⟩+|ϕ_2⟩)(-⟨ϕ_1|+⟨ϕ_2|) +[Ω/√(2)|f_1f_2s_A⟩|0,0⟩_c(-⟨ϕ_1|+⟨ϕ_2|)+h.c.], H_2^'=-√(2)g|ϕ_1⟩⟨ϕ_1|+√(2)g|ϕ_2⟩⟨ϕ_2|.Here,|ϕ_1⟩=1/2(-√(2)|f_1f_2e_A⟩|0,0⟩_c+|f_1f_2g_A⟩|1,0⟩_c+ |f_1f_2g_A⟩|0,1⟩_c), |ϕ_2⟩=1/2(√(2)|f_1f_2e_A⟩|0,0⟩_c+|f_1f_2g_A⟩|1,0⟩_c+ |f_1f_2g_A⟩|0,1⟩_c), |ϕ_3⟩=1/√(2)(-|f_1f_2g_A⟩|1,0⟩_c+ |f_1f_2g_A⟩|0,1⟩_c),are the eigenstates of H_2 with the eigenvalues -√(2)g, √(2)g, 0, respectively. Under the unitary transformation e^iH_2^' t, we further obtainH_1^'' = Δ/2(|ϕ_1⟩⟨ϕ_1|+|ϕ_2⟩⟨ϕ_2|. .-|ϕ_1⟩⟨ϕ_2|e^-2√(2)igt-|ϕ_2⟩⟨ϕ_1|e^2√(2)igt) +[Ω/√(2)|f_1f_2s_A⟩|0,0⟩_c(-⟨ϕ_1|e^√(2)igt+⟨ϕ_2|e^-√(2)igt)+h.c.].Assuming the conditions g≫Ω are satisfied, we can readily discard the fast-oscillating terms in H_1^'', then obtain the effective HamiltonianH_1,eff^''=Δ/2(|ϕ_1⟩⟨ϕ_1|+|ϕ_2⟩⟨ϕ_2|).This effective Hamiltonian does nothing to the initial state |f_1f_2s_A⟩|0,0⟩_c, thus the initial state remains unchanged.Next, we consider the case that the system is initially in the state |f_1g_2s_A⟩ |0,0⟩_c. The system will evolve in the subspace {|f_1g_2s_A⟩|0,0⟩_c, |f_1g_2e_A⟩|0,0⟩_c, |f_1g_2g_A⟩|1,0⟩_c, |f_1g_2g_A⟩|0,1⟩_c, |f_1e_2g_A⟩|0,0⟩_c}. The relevant Hamiltonian of the system can be rewritten asH̅_1 = Δ[N_+(|ϕ_1^'⟩+|ϕ_2^'⟩)-N_-(|ϕ_3^'⟩+|ϕ_4^'⟩)] ×[N_+(⟨ϕ_1^'|+⟨ϕ_2^'|)-N_-(⟨ϕ_3^'|+⟨ϕ_4^'|)] +Δ[-N_+^'(|ϕ_1^'⟩+|ϕ_2^'⟩)+N_-^'(|ϕ_3^'⟩+|ϕ_4^'⟩)] ×[-N_+^'(⟨ϕ_1^'|+⟨ϕ_2^'|)+N_-^'(⟨ϕ_3^'|+⟨ϕ_4^'|)]+(Ω|f_1g_2s_A⟩|0,0⟩_c[N_+(⟨ϕ_1^'|+⟨ϕ_2^'|).. ..-N_-(⟨ϕ_3^'|+⟨ϕ_4^'|)]+h.c.), H̅_2=∑_i=1^4λ_i|ϕ_i^'⟩⟨ϕ_i^'|,where N_+=√(5+√(5))/2√(5), N_-=√(5-√(5))/2√(5), N_+^'=(1-√(5))√(5+√(5))/4√(5), N_-^'=(1+√(5))√(5-√(5))/4√(5). The eigenvectors of the interaction Hamiltonian H_2 are listed as following|ϕ_1^'⟩ = 1/√(5+√(5))(1+√(5)/2|f_1g_2e_A⟩|0,0⟩_c-|f_1g_2g_A⟩|1,0⟩_c. .-1+√(5)/2|f_1g_2g_A⟩|0,1⟩_c+|f_1e_2g_A⟩|0,0⟩_c), |ϕ_2^'⟩ = 1/√(5+√(5))(1+√(5)/2|f_1g_2e_A⟩|0,0⟩_c+|f_1g_2g_A⟩|1,0⟩_c. 
.+1+√(5)/2|f_1g_2g_A⟩|0,1⟩_c+|f_1e_2g_A⟩|0,0⟩_c), |ϕ_3^'⟩ = 1/√(5-√(5))(1-√(5)/2|f_1g_2e_A⟩|0,0⟩_c+|f_1g_2g_A⟩|1,0⟩_c. .+1-√(5)/2|f_1g_2g_A⟩|0,1⟩_c+|f_1e_2g_A⟩|0,0⟩_c), |ϕ_4^'⟩ = 1/√(5-√(5))(1-√(5)/2|f_1g_2e_A⟩|0,0⟩_c-|f_1g_2g_A⟩|1,0⟩_c. .-1-√(5)/2|f_1g_2g_A⟩|0,1⟩_c+|f_1e_2g_A⟩|0,0⟩_c),with eigenvalues λ_1=-1+√(5)/2g, λ_2=1+√(5)/2g, λ_3=1-√(5)/2g, λ_4=-1-√(5)/2g. Similarly, under the unitary transformation e^iH̅_2t and the condition g≫Ω, the H̅_1 becomesH̅_1,eff = Δ (N_+^2+N_+^' 2)(|ϕ_1^'⟩⟨ϕ_1^'|+|ϕ_2^'⟩⟨ϕ_2^'|) +Δ (N_-^2+N_-^' 2)(|ϕ_3^'⟩⟨ϕ_3^'|+|ϕ_4^'⟩⟨ϕ_4^'|).Obviously, the effective Hamiltonian also does nothing to the initial state |f_1g_2s_A⟩|0,0⟩_c and the |f_1g_2s_A⟩|0,0⟩_c do not undergo any change during the interaction. Moreover, it is noted that the system will undergo the similar evolution with the initial state |g_1f_2s_A⟩|0,0⟩_c due to the exchange symmetry between qubit 1 and 2.Furthermore, if the system is assumed to be prepared in the state |g_1g_2s_A⟩|0,0⟩_c, the system will be constrained in the subspace spanned by{|g_1g_2s_A⟩|0,0⟩_c, |g_1g_2e_A⟩|0,0⟩_c, |g_1g_2g_A⟩|1,0⟩_c, |e_1g_2g_A⟩|0,0⟩_c, |g_1g_2g_A⟩|0,1⟩_c, |g_1e_2g_A⟩|0,0⟩_c}. The Hamiltonian of this subsystem is dominated byH̃_1 = Δ/3(|ϕ_1^''⟩+|ϕ_2^''⟩-|ϕ_5^''⟩)(⟨ϕ_1^''|+⟨ϕ_2^''|-⟨ϕ_5^''|) +Δ/2[1/3(|ϕ_1^''⟩+|ϕ_2^''⟩+2|ϕ_5^''⟩)(⟨ϕ_1^''|+⟨ϕ_2^''|+2⟨ϕ_5^''|). .+(|ϕ_3^''⟩+|ϕ_4^''⟩)(⟨ϕ_3^''|+⟨ϕ_4^''|)] +[Ω/√(3)|g_1g_2s_A⟩|0,0⟩_c(⟨ϕ_1^''|+⟨ϕ_2^''|-⟨ϕ_5^''|)+h.c.], H̃_2=∑_i=1^5λ_i^ '|ϕ_i^''⟩⟨ϕ_i^''|,where the corresponding eigenvectors of H_2 in such a subsystem are|ϕ_1^''⟩ = 1/2√(3)(2|g_1g_2e_A⟩|0,0⟩_c-√(3)|g_1g_2g_A⟩|1,0⟩_c. +|e_1g_2g_A⟩|0,0⟩_c-√(3)|g_1g_2g_A⟩|0,1⟩_c .+|g_1e_2g_A⟩|0,0⟩_c), |ϕ_2^''⟩ = 1/2√(3)(2|g_1g_2e_A⟩|0,0⟩_c+√(3)|g_1g_2g_A⟩|1,0⟩_c. +|e_1g_2g_A⟩|0,0⟩_c+√(3)|g_1g_2g_A⟩|0,1⟩_c .+|g_1e_2g_A⟩|0,0⟩_c), |ϕ_3^''⟩ = 1/2(|g_1g_2g_A⟩|1,0⟩_c-|e_1g_2g_A⟩|0,0⟩_c -|g_1g_2g_A⟩|0,1⟩_c+|g_1e_2g_A⟩|0,0⟩_c), |ϕ_4^''⟩ = 1/2(-|g_1g_2g_A⟩|1,0⟩_c-|e_1g_2g_A⟩|0,0⟩_c +|g_1g_2g_A⟩|0,1⟩_c+|g_1e_2g_A⟩|0,0⟩_c), |ϕ_5^''⟩=1/√(3)(-|g_1g_2e_A⟩|0,0⟩_c+|e_1g_2g_A⟩|0,0⟩_c+|g_1e_2g_A⟩|0,0⟩_c),with eigenvalues λ_1^'=-√(3)g, λ_2^'=√(3)g, λ_3^'=-g, λ_4^'=g, λ_5^'=0. In the interaction picture with respect to H̃_2, considering the condition g≫Ω and discarding the fast-oscillating terms, then we can obtainH̃_1^'=Δ|ϕ_5^''⟩⟨ϕ_5^''|-(Ω/√(3)|g_1g_2s_A⟩|0,0⟩_c⟨ϕ_5^''|+h.c.). Non-resonant case: Set Δ≫Ω, then there are no any energy exchange between the state |g_1g_2s_A⟩|0,0⟩_c and |ϕ_5^''⟩ due to the large detuning. Consequently, the effective Hamiltonian of the subsystemH̃_1,eff^'=Ω^2/3Δ|g_1g_2s_A⟩|0,0⟩_c ⟨ 0,0|⟨ g_1g_2s_A|is obtained. Under the action of H̃_1,eff^', we obtain |g_1g_2s_A⟩|0,0⟩_c→exp(iΩ^2t/3Δ)|g_1g_2s_A⟩|0,0⟩_c. The other computational states |f_1x_2f_A⟩ and |g_1x_2f_A⟩ (where x=f, g) are decoupled from the Hamiltonian and do not undergo any change during the evolution of the system. In this way, the system keeps in the initial state with an tunable additional phase shift. Therefore, we obtain a three qubit tunable phase gate|f_1f_2f_A⟩→|f_1f_2f_A⟩|f_1g_2f_A⟩→|f_1g_2f_A⟩|f_1f_2s_A⟩→|f_1f_2s_A⟩|f_1g_2s_A⟩→|f_1f_2s_A⟩|g_1f_2f_A⟩→|g_1f_2f_A⟩|g_1g_2f_A⟩→|g_1g_2f_A⟩|g_1f_2s_A⟩→|g_1f_2s_A⟩|g_1g_2s_A⟩→ e^iδ|g_1g_2s_A⟩with δ=Ω^2t/3Δ being the phase. 
Additionally, if δ=π, this transformation plus the Hadamard gate on the qubit A with |f_A⟩→(|f_A⟩+|s_A⟩)/√(2), |s_A⟩→(|f_A⟩-|s_A⟩)/√(2), we can obtain a three qubit Toffoli gate.Resonant case: When Δ=0, after time t, the state of the system becomes cos(Ω t/√(3))|g_1g_2s_A⟩|0,0⟩_c+isin(Ω t/√(3))|ϕ_5^''⟩. After a full cycle of Rabi oscillation, i.e., t=√(3)π/Ω, we have -|g_1g_2s_A⟩|0,0⟩_c. Thus, the system returns to the initial state with an additional phase shift π. In this way, we obtain a three-qubit controlled phase gateU_p=e^iπ |g_1g_2s_A⟩|0,0⟩_c⟨ 0,0|⟨ g_1g_2s_A|,in which, if and only if the three qubits are in the state |g_1g_2s_A⟩|0,0⟩_c, the system undergoes a phase shift π. § DISCUSSIONS AND NUMERICAL ANALYSISIn order to validate the feasibility of the above theoretical analysis, we perform a direct numerical simulation of the Schrödinger equation with the original Hamiltonian Eq. (<ref>) (without decoherence). In non-resonant case, we choose the typical parameters: Ω=0.1g and Δ=g. In the simulation, we calculated the temporal evolutions of the system beginning with three distinct initial states |f_1f_2s_A⟩|0,0⟩_c, |f_1g_2s_A⟩|0,0⟩_c and |g_1g_2s_A⟩|0,0⟩_c. As shown in the Fig.<ref>(a), the blue, green and red lines represent the real parts of the coefficients of the basis states |f_1f_2s_A⟩|0,0⟩_c, |f_1g_2s_A⟩|0,0⟩_c and |g_1g_2s_A⟩|0,0⟩_c, respectively. It is seen that, the system returns to its initial state but obtains a global phase shift π at the time τ=3πΔ/Ω^2=300π/g, when the system is initially prepared in the state |g_1g_2s_A⟩|0,0⟩_c, while it is almost unchanged for the initial state |f_1f_2s_A⟩|0,0⟩_c and |f_1g_2s_A⟩|0,0⟩_c. Furthermore, we also consider the resonant case with parameters Ω=0.1g and Δ=0 in Fig.<ref>(c). At the time τ^'=√(3)π/Ω=10√(3)π/g, the system returns to the initial state with an additional phase π, which is much shorter than the time required in the non-resonant case. In particular, Fig.<ref>(b) and (d) shows the enlarged part of Fig.<ref>(a) and (c) respectively. It represents that state |f_1f_2s_A⟩|0,0⟩_c and |f_1g_2s_A⟩|0,0⟩_c arealmost unchanged during the process. The validity of our scheme is based on the assumption that all the coupling strengths of qudit-resonator mode are equal, namely, g_1=g_2=g_A=g. However, there could be deviation in the parameters in a practical situation. These errors result in the mismatch of the coupling constants g_1 and g_2. So we should consider the influence of thedeviation from theoretical situation g on the fidelity of the three-qubit phase gate, which is defined as F=|⟨ψ(τ)|U_p|Ψ(0)⟩|^2, where |Ψ(0)⟩ is the initial state of the qubits and |ψ(τ)⟩ is the final state under the evolution of the original Hamiltonian Eq.(<ref>) at time τ. Here we consider a general input state |Ψ(0)⟩=c_1|f_1f_2f_A⟩+c_2|f_1g_2f_A⟩+c_3|f_1f_2s_A⟩+c_4|f_1g_2s_A⟩+c_5|g_1f_2f_A⟩+c_6|g_1g_2f_A⟩+c_7|g_1f_2s_A⟩+c_8|g_1g_2s_A⟩,where c_i is the corresponding amplitude of probability obeying the normalization ∑_i |c_i|^2=1. Without loss of generality, we select c_1=1/6, c_2=√(2)/6, c_3=√(3)/6, c_4=1/3, c_5=√(5)/6, c_6=√(6)/6, c_7=√(7)/6, c_8=√(2)/3 for the present simulation.Figure <ref> shows how the deviation of the parameter influence the fidelity of the phase gate within the non-resonant (Fig.<ref>a) and resonant case (Fig.<ref>b). A deviation |δ g_1(2)|=10%g only causes a reduction smaller than 5% in the fidelity. 
It is apparent that the fidelity of the phase gate is always higher than 95% under various deviations of the selected parameters. Thus our scheme is very robust against errors occurring in a practical case. Until now, we have only considered the ideal case, and various decoherence effects were not involved in the above discussions. The decoherence is induced by the decay of the cavities and the relaxation of the qubits. Taking the decoherence into account, the whole system is determined by the master equation ρ̇ = -i[H,ρ]+∑_i=1^2κ_iL[a_i]+∑_i=1,2,A∑_n=g,s,fγ_n,iL[σ^-_n,i], where L[a_i]=a_iρ a_i^†-a_i^† a_iρ/2-ρ a_i^† a_i/2, L[σ^-_n,i]=σ^-_n,iρσ^+_n,i-σ^+_n,iσ^-_n,iρ/2-ρσ^+_n,iσ^-_n,i/2, and σ^-_n,i=|n_i⟩⟨ e_i|. Here κ_i is the photon decay rate of the ith cavity, and γ_n,i is the energy relaxation rate of the ith qudit for the decay path |e⟩→|n⟩. We assume κ_i=κ and γ_n,i=γ for simplicity. The fidelity of the three-qubit controlled-phase gate implemented in the presence of the decoherence can be defined as F=⟨Ψ(0)|U_p^†ρ^'(t=τ)U_p|Ψ(0)⟩, where ρ^'(t) represents the temporal reduced density matrix (obtained by tracing out the cavity mode part). In Fig.<ref>, we plot the fidelity F versus the decay rates κ and γ. We can see that the fidelity is still larger than 70% for κ=γ=0.1g. In the non-resonant situation, the energy relaxation of the qubits is greatly suppressed due to the large detuning (Fig.<ref>a). The subspaces involved during the whole process include the excited states of the cavities, which greatly influence the fidelity of the phase gate. However, in the resonant case, the results are reversed: the energy relaxation of the qubits becomes the main decoherence source due to the resonant interaction, as shown in Fig.<ref>b. In a real circuit QED system, strong coupling between a superconducting qubit and a resonator can be achieved with g=2π×360 MHz <cit.>, together with κ^-1=1μ s and γ^-1=25μ s <cit.>. With these parameters, the present scheme could be feasible in an experiment with a fidelity larger than 95%. Furthermore, in the resonant case, the π phase gate can be realized in only 24 ns, which is much shorter than the time needed in previous schemes <cit.>.
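As an independent check of the resonant gate dynamics, one can integrate the Schrödinger equation directly in the six-dimensional subspace used above; the following Python sketch (ours, not the authors' code) uses the couplings of H restricted to the |g_1g_2s_A⟩ subspace:

```python
import numpy as np
from scipy.linalg import expm

# Six-level resonant (Delta = 0) model in the ordered basis {|g1g2sA;00>,
# |g1g2eA;00>, |g1g2gA;10>, |e1g2gA;00>, |g1g2gA;01>, |g1e2gA;00>};
# all rates are in units of g.
g, Omega = 1.0, 0.1
H = np.zeros((6, 6))
H[0, 1] = Omega                    # classical drive  |sA> <-> |eA>
H[1, 2] = H[1, 4] = g              # qudit A emits a photon into cavity 1 or 2
H[2, 3] = H[4, 5] = g              # qudit 1 (2) absorbs the cavity-1 (2) photon
H = H + H.T

psi0 = np.zeros(6); psi0[0] = 1.0  # start in |g1 g2 sA; 00>
t_gate = np.sqrt(3) * np.pi / Omega        # one full Rabi cycle in the Zeno subspace
psi_t = expm(-1j * H * t_gate) @ psi0
print(np.round(psi_t[0], 3))       # close to -1: the pi phase of U_p
# (for g = 2*pi*360 MHz this t_gate is ~24 ns; decoherence could be added by
#  promoting H to the Lindblad master equation with the kappa and gamma terms)
```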
Especially, in the resonant case, implementing the N-qubit π-phase gate requires time gt=10√(N)π.In order to test the effectiveness of our proposal, we consider specifically, for example, the case of 7 qubit. In Fig.<ref>, we plot the time-evolution behaviors of the real part (blue line) and imaginary part (red line) of the state |g_1g_2g_3g_4g_5g_6s_A⟩|0,0,0,0,0,0⟩_c under the evolution of the total Hamiltonian Eq.(<ref>). From figure.<ref>a, it is easily seen that at scaled time gt≈2200 the state |g_1g_2g_3g_4g_5g_6s_A⟩ acquires a π-phase shift, which agrees with our theoretical value 700π very well. In figure.<ref>b, we plot the time-evolution behaviors of the real part (orange line) of |g_1g_2g_3g_4g_5g_6s_A⟩|0,0,0,0,0,0⟩_c within resonant case. The time needed to complete the 7-qubit phase gate only requires 36 ns. The results match with the theoretical value very well. Therefore, our effective model is valid. § SUMMARYIn summary, we have proposed a scheme for implementation of the multiqubit tunable phase gate in a circuit QED setup where two resonators each hosting a qudit are coupled to a common qudit. Taking advantage of quantum Zeno dynamics and asymmetric encoding the logic state, the multiqubit tunable phase gate can be completed in only one step without individual addressing on each qudit during the whole process. Only a classical microwave pulse is needed, no matter how many qudits are involved. We have considered the our model under the non-resonant and resonant case. In non-resonant case, the tunable phase gate can be realized readily, while in resonant case a π-phase gate can be achieved after a full cycle of Rabi oscillation where the gate speed is much faster than that shown in previous schemes <cit.>. Moreover, the proposed scheme can be easily generalized to realize N-qubit phase gate. Discussion about the effect of possible experimental parameter errors on the fidelity of the entangled state are also given. Meanwhile, the influence of various decoherence processes such as the decay of the resonator modes, and the relaxation of the qudits is also investigated. Numerical results have shown a high fidelity to complete the phase gate.§ ACKNOWLEDGMENTS We greatly appreciate the support from the National Natural Science Foundation of China (NSFC)(No. 61604045).0 Shor P.W. Shor, “Polynomial-Time Algorithms for Prime Factorization and Discrete Logarithms on a Quantum Computer,” SIAM J. Sci. Statist. Comput. 26, 1484–1509 (1997). Grover L.K. Grover, “Quantum Computers Can Search Rapidly by Using Almost Any Transformation,” Phys. Rev. Lett. 80, 4329 (1998). 3 J. Chiaverini, D. Leibfried, T. Schaetz, M.D. Barrett, R.B. Blakestad, J. Britton, W.M. Itano, J.D. Jost, E. Knill, C. Langer, R. Ozeri, D.J. Wineland, “Realization of quantum error correction,” Nature (London) 432, 602 (2004). 4 M.S. Zubairy, A.B. Matsko, M.O. Scully, “Resonant enhancement of high-order optical nonlinearities based on atomic coherence,” Phys. Rev. A 65, 043804 (2002). 5 L. M. K. Vandersypen, M. Steffen, G. Breyta, C. S. Yannoni, M. H. Sherwood, and I. L. Chuang, “Experimental realization of Shor's quantum factoring algorithm using nuclear magnetic resonance,” Nature (London) 414, 883–887 (2001). 6 Y. S. Weinstein, M. A. Pravia, E. M. Fortunato, S. Lloyd, and D. G. Cory, “Implementation of the Quantum Fourier Transform,” Phys. Rev. Lett. 86, 1889 (2001). 7 M. D. Reed, L. DiCarlo, S. E. Nigg, L. Sun, L. Frunzio, S. M. Girvin and R. J. 
Schoelkopf, “Realization of three-qubit quantum error correction with superconducting circuits,” Nature (London) 482, 382-385 (2012). 8 L. Tornberg, M. Wallquist, G. Johansson, ,V. S. Shumeiko, and G. Wendin, “Implementation of the three-qubit phase-flip error correction code with superconducting qubits,” Phys. Rev. B 77, 214528 (2008). 9 D. G. Cory, M. D. Price, W. Maas, E. Knill, R. Laflamme, W. H. Zurek, T. F. Havel, and S. S. Somaroo, “Experimental Quantum Error Correction,” Phys. Rev. Lett. 81, 2152 (1998). 10 C.-Y. Chen and S.-H. Li, “Toffoli gate made from a single resonant interaction with a trapped ion system,” Eur. Phys. J. D 41, 557 (2007). 11 T. C. Ralph, K. J. Resch, and A. Gilchrist, “Efficient Toffoli gates using qudits,” Phys. Rev. A 75, 022313 (2007). 12 V. M. Stojanović, A. Fedorov, A. Wallraff, and C. Bruder, “Quantum-control approach to realizing a Toffoli gate in circuit QED,” Phys. Rev. B 85, 054504 (2012). 13 A. M. Chen, S. Y. Cho, and M. D. Kim, “Implementation of a three-qubit Toffoli gate in a single step,” Phys. Rev. A 85, 032326 (2012). 14 X. Q. Shao, T. Y. Zheng, and S. Zhang, “Robust Toffoli gate originating from Stark shifts,” J. Opt. Soc. Am. B 29, 1203–1207 (2012). 15 S. B. Zheng, “Implementation of Toffoli gates with a single asymmetric Heisenberg XY interaction,” Phys. Rev. A 87, 042318 (2013). 16 B. P. Lanyon et al., “Simplifying quantum logic using higher-dimensional Hilbert spaces,” Nature Physics 5, 134-140 (2009).17 T. Monz, K. Kim, W. Hänsel, M. Riebe, A. S. Villar, P. Schindler, M. Chwalla, M. Hennrich, and R. Blatt, “Realization of the Quantum Toffoli Gate with Trapped Ions,” Phys. Rev. Lett. 102, 040501 (2009). 18 A. Fedorov, L. Steffen, M. Baur, M. P. da Silva, and A. Wallraff, “Implementation of a Toffoli gate with superconducting circuits,” Nature (London) 481, 170 (2012). 19 A. Barenco, C. H. Bennett, R. Cleve, D. P. DiVincenzo, N. Margolus, P. Shor, T. Sleator, J. A. Smolin, and H. Weinfurter, “Elementary gates for quantum computation,” Phys. Rev. A 52, 3457 (1995). 20 C. P. Yang, and S. Han, “n-qubit-controlled phase gate with superconducting quantum-interference devices coupled to a resonator,” Phys. Rev. A 72, 032311 (2005). 21 L. M. Duan, B. Wang, and H. J. Kimble, “Robust quantum gates on neutral atoms with cavity-assisted photon scattering,” Phys. Rev. A 72, 032333 (2005). 22 A. Gábris and G. S. Agarwal, “Vacuum-induced Stark shifts for quantum logic using a collective system in a high-quality dispersive cavity,” Phys. Rev. A 71, 052316 (2005). 23 X. Zou, Y. Dong, and G. C. Guo, “Implementing a conditionalz gate by a combination of resonant interaction and quantum interference,” Phys. Rev. A 74, 032325 (2006). 24 Y. Q. Zhang, S. Zhang, K. H. Yeon, and S. C. Yu, “One-step implementation of a multiqubit controlled- phase gate with superconducting quantum interference devices coupled to a resonator,” J. Opt. Soc. Am. B 29, 300-304 (2012). 25 C. P. Yang, Y. X. Liu, and F. Nori, “Phase gate of one qubit simultaneously controlling n qubits in a cavity,” Phys. Rev. A 81, 062323 (2010). 26 C. P. Yang, S. B. Zheng, and F. Nori, “Multiqubit tunable phase gate of one qubit simultaneously controlling n qubits in a cavity,” Phys. Rev. A 82, 062326 (2010). 27 C. P. Yang, Q. P. Su, and J. M. Liu, “Proposal for realizing a multiqubit tunable phase gate of one qubit simultaneously controlling n target qubits using cavity QED,” Phys. Rev. A 86, 024301 (2012). 28 C. P. Yang, Q. P. Su, F. Y. Zhang, and S. B. 
Zheng, “Single-step implementation of a multiple-target-qubit controlled phase gate without need of classical pulses,” Opt. Lett. 39, 3312 (2014). 29 W. A. Li and G. Y. Huang, “Deterministic generation of a three-dimensional entangled state via quantum Zeno dynamics,” Phys. Rev. A 83, 022322 (2011). 30 X. Q. Shao, L. Chen, S. Zhang, and K. H. Yeon, “Fast CNOT gate via quantum Zeno dynamics,” J. Phys. B: At. Mol. Opt. Phys. 42, 165507 (2009). 31 X. Q. Shao, H. F. Wang, L. Chen, S. Zhang, Y. F. Zhao, and K. H. Yeon, “One-step implementation of the 1 → 3 orbital state quantum cloning machine via quantum Zeno dynamics,” Phys. Rev. A 80, 062323 (2009). 32 A. Beige, D. Braun, B. Tregenna, and P. L. Knight, “Quantum Computing Using Dissipation to Remain in a Decoherence-Free Subspace,” Phys. Rev. Lett. 85, 1762 (2000). 33 J. D. Franson, B. C. Jacobs, and T. B. Pittman, “Quantum computing using single photons and the Zeno effect,” Phys. Rev. A 70, 062302 (2004). flux1 V. E. Manucharyan, J. Koch, L. I. Glazman, and M. H. Devoret, “Fluxonium: Single Cooper-Pair Circuit Free of Charge Offsets,” Science 326, 113-116 (2009). flux2 G. Zhu, D. G. Ferguson, V. E. Manucharyan, and J. Koch, “Circuit QED with fluxonium qubits: Theory of the dispersive regime,” Phys. Rev. B 87, 024510 (2013). kim M. D. Kim and J. Kim, “Coupling qubits in circuit-QED cavities connected by a bridge qubit,” Phys. Rev. A 93, 012321 (2016). 34 D. I. Schuster, A. P. Sears, E. Ginossar, L. DiCarlo, L. Frunzio, J. J. L. Morton, H. Wu, G. A. D. Briggs, B. B. Buckley, D. D. Awschalom, and R. J. Schoelkopf, “High-Cooperativity Coupling of Electron-Spin Ensembles to Superconducting Cavities,” Phys. Rev. Lett. 105, 140501 (2010). 35 R. Barends, J. Kelly, A. Megrant, D. Sank, E. Jeffrey, Y. Chen, Y. Yin, B. Chiaro, J. Mutus, C. Neill, P. O’Malley, P. Roushan, J. Wenner, T. C. White, A. N. Cleland, and John M. Martinis, “Coherent Josephson Qubit Suitable for Scalable Quantum Integrated Circuits,” Phys. Rev. Lett. 111, 080502 (2013).
http://arxiv.org/abs/1702.08448v2
{ "authors": [ "Wen-An Li", "Yuan Chen" ], "categories": [ "quant-ph" ], "primary_category": "quant-ph", "published": "20170227041457", "title": "Simplify proposal for realizing multiqubit tunable phase gate in circuit QED" }
Approximate Convex Hulls: sketching the convex hull using curvature Department of Mathematics and Statistics, McGill University. This work was supported by an NSERC Engage grant, 2015. Convex hulls are fundamental objects in computational geometry. In moderate dimensions or for large numbers of vertices, computing the convex hull can be impractical due to the computational complexity of convex hull algorithms. In this article we approximate the convex hull in ℝ^n using a scalable algorithm which finds high curvature vertices with high probability. The algorithm is particularly effective for approximating convex hulls which have a relatively small number of extreme points. Adam M. Oberman December 30, 2023 ===================== § INTRODUCTION Computing the convex hull of points is a fundamental problem in the field of computational geometry <cit.>. In moderate dimensions or for large numbers of vertices, computing the exact convex hull can be computationally impractical. Even the vertex redundancy problem, which computes the extreme points without the full geometric structure of the convex hull, is impractical. Even relatively modern algorithms <cit.> <cit.> break down or are too expensive for high dimensional problems. For practical purposes, in moderate dimensions, the computational obstruction is not with the algorithm, but with the convex hull itself. However, in moderate or high dimensions, the combinatorial structure of high dimensional data sets can be very different from the geometrical intuition obtained from studying low dimensional simplices <cit.> <cit.>. Nevertheless, there are many practical problems which are well represented by sampling a large number of high dimensional points, where we expect that either: (i) the number of extreme points is small or, more generally, (ii) the data is contained in a set which is the convex hull of a small number of points. The second case applies, for example, to data sampled uniformly from a simplex in high dimensions. This problem occurs in hyperspectral data analysis <cit.> <cit.> <cit.>. In Figure <ref>, we give an illustration of this problem and of our algorithm. Here 200 points were sampled uniformly from the 2 dimensional simplex. We used the hyperplane compression algorithm to approximate the original simplex. Our work is motivated by an application in data reduction. The algorithm was used to reduce the number of points in a helicopter flight test of the Bell Helicopter Model 505 Jet Ranger X <cit.>. Approximately twenty million load vectors were recorded during the helicopter certification flight test. The goal was to extract a small number of extremal loads to be applied in a flight representative fatigue test of the helicopter tailboom assembly. The exact convex hull contained approximately two thousand points. The algorithm identified approximately 200 high curvature points. These points were clustered into a smaller number of points, and finally, using the strain response of the tailboom to the load vectors, six load vectors were selected for the fatigue test. This robust data reduction method proved to be effective in identifying extremal loads and was instrumental in the timely certification of the recently released Bell Helicopter 505 Jet Ranger X. The key idea in our approach is geometrically intuitive. Suppose we have a finite collection of points X⊂ℝ^n for which we want to approximate the convex hull, denoted CH(X).
Given any unit vector d∈ S^n-1, any maximizer x_d of x^⊺ d over x∈ X is a vertex of CH(X); moreover, x^⊺ d ≤ x_d^⊺ d defines a supporting hyperplane. Thus if we perform such computations for a large number of unit vectors, we obtain both a collection of vertices whose hull is contained in CH(X) and a collection of hyperplanes whose intersection contains CH(X). These determine inner and outer approximations to CH(X). However, if we return to the example of points sampled from a simplex, near a vertex there can be a high number of extreme points. Our goal is to reduce the number of extreme points without introducing too large an error in the convex hull. The influential Pixel Purity Index (PPI) algorithm <cit.> keeps only vertices which are extremal for many unit vectors d. In this article, we justify and refine the algorithm using high dimensional curvature concepts. The connection with curvature has been hinted at in <cit.>, where it was observed that points which maximize many direction vectors are "presumed to be closer to the 'corners' of the data". As far as we know, we are the first to make this explicit. Using this observation we can prove consistency <ref> and convergence <ref> of the algorithm. As an aside, we give a theoretical answer to a question posed in <cit.>. We extend the algorithm by also giving a method for reducing the number of hyperplanes <ref>. We also briefly touch on the endmember detection problem: given a collection of points uniformly sampled from some polytope, how do we find the corners of the polytope (as opposed to the corners of the convex hull of the points)?
Our paper is structured as follows. In Section 2 we explain how our method computes approximate curvature. Section 3 describes how to compute the error in our approximations. Section 4 describes our algorithm and its extensions. Section 5 uses the results of Sections 2 and 3 to show our algorithm is consistent and converges. Finally, in Section 6 we give some examples. Note there is a large body of work on approximating convex bodies, see for example <cit.> and <cit.>. There the idea is to, for example, find the best ellipsoid inside or containing a convex body, where a scaling factor is allowed. The goal of this work is to select extreme vertices in a computationally practical fashion, which is somewhat different from those works.
§ CONSISTENCY OF THE CURVATURE APPROXIMATION
Here we define the basics and show how one can compute the curvature of polytopes. This is the essence of our algorithm, defined in a later section. A supporting hyperplane at v for the convex set P is a hyperplane with normal n such that n^⊺ y ≤ n^⊺ v for all y∈ P. Let S^n-1 be the unit sphere in ℝ^n. Let N_v⊂ S^n-1 be the set of all unit normals of supporting hyperplanes at v for the convex set P. We view the normal vectors as being points on the sphere. The curvature of P at v is the spherical volume of N_v. The relative curvature of P at v is K(v) = vol(N_v)/vol(S^n-1). In two dimensions the curvature is a measure of the exterior angle at a point; a small sketch of this planar case follows below. In general, curvature is viewed as an n-dimensional notion of spherical angle. An introduction to the curvature of polytopes can be found in <cit.>. We note this definition applies just as well to any convex body <cit.>. For smooth bodies, the curvature of a subset V of a convex body is vol(⋃_v∈ V N_v). We can now show how to approximately measure the curvature of polytopes.
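To fix intuition, here is a minimal Python sketch (our own illustration; the function name is not from the paper) that computes the exact relative curvature of each vertex of a convex polygon in the plane as exterior angle divided by 2π. The values sum to 1, since the normal cones tile the circle.

```python
import numpy as np

def relative_curvatures_2d(P):
    """Relative curvature K(v) of each vertex of a convex polygon:
    the exterior angle at v divided by 2*pi. P is an (m, 2) array of
    vertices in counterclockwise order."""
    m = len(P)
    K = np.empty(m)
    for i in range(m):
        a, b, c = P[i - 1], P[i], P[(i + 1) % m]
        u, w = b - a, c - b                       # incoming / outgoing edges
        u, w = u / np.linalg.norm(u), w / np.linalg.norm(w)
        ext = np.arccos(np.clip(u @ w, -1.0, 1.0))  # exterior angle at b
        K[i] = ext / (2 * np.pi)
    return K

# A thin triangle: the sharp vertex (0,0) carries most of the curvature.
print(relative_curvatures_2d(np.array([[0., 0.], [10., 0.], [10., 1.]])))
```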
Let D ⊂ S^n-1 be finite and let V ⊂ ℝ^n also be finite. Let D_v := { d∈ D | v^⊺ d ≥ w^⊺ d for all w ∈ V } be the set of extremal directions in D for v, and let CH_D(V) = { v ∈ V : |D_v| > 0 } be the D-convex hull of V. Define the relative D-curvature at a D-extremal point v to be

K_D(v) = |D_v| / |D|.

Theorem. Let D be a finite set of vectors uniformly sampled from S^n-1. Then for any ϵ > 0,

Pr( |K_D(v) − K(v)| > ϵ ) ≤ K(v)(1−K(v)) / (|D| ϵ^2).

Proof. First note that since every direction vector is maximized by some vertex of a polytope, the sets { N_v | v∈ V } cover the sphere. On the other hand, for w ≠ v, N_v ∩ N_w has measure zero: for a given normal vector in N_v ∩ N_w, the corresponding supporting hyperplane contains the line segment between v and w, and consequently this set of normals must have measure zero. Hence any d_i has probability K(v) of satisfying d_i ∈ N_v. Moreover, each d_i belongs to N_v for some v (or, with probability 0, it is maximized by more than one vertex). Therefore the counts |D_v| follow a multinomial distribution, with each d_i having probability K(v) of belonging to N_v. Now, given that K_D(v) = |D_v|/|D|, Chebyshev's inequality states that

Pr( |K_D(v) − E(K_D(v))| > ϵ ) ≤ var(K_D(v)) / ϵ^2,

and since |D_v| follows a multinomial distribution, E(K_D(v)) = K(v) and var(K_D(v)) = K(v)(1−K(v))/|D|. The result follows immediately. Figure <ref> shows the convergence of K_D(v) for a given vertex as |D| increases.
§ SPARSE APPROXIMATION OF POLYTOPES
Here we show that removing low curvature extreme points from a polytope does not significantly change its shape (made precise below). The main result is related to Aleksandrov's maximum principle <cit.>. Let d(S,S') denote the Hausdorff distance between two sets:

d(S,S') ≡ max{ sup_{x∈S} d(x,S'), sup_{y∈S'} d(y,S) }.

We need a simple lemma about the Hausdorff distance between two polytopes.
Lemma. Let S = CH({v_1,...,v_m}) and let S' be convex and compact. Then sup_{x∈S} d(x,S') = sup_{1≤i≤m} d(v_i,S').
Proof. Let W be a point in S such that d := d(W,S') is maximal, and let A be a point in S' such that dist(W,A) = d (which exists by compactness). Consider the hyperplane ℋ at A with normal WA. We claim this is a supporting hyperplane at A with respect to S'. Suppose otherwise; then there exists a point B ∈ S' on the same side of the hyperplane as W. But then the segment AB belongs to S', and this segment intersects the ball of radius d centered at W, so there is a point of S' closer to W than A: a contradiction. (The figure referenced here illustrates this configuration; the circle pictured has radius d.) On the other hand, there must be some v_i on or above (i.e., not on the same side as A) the hyperplane at W with normal WA, since the v_i are the extreme points of S. If v_i is on that hyperplane, then d(v_i,S') ≥ d and we are done. Otherwise, if it is strictly above, then if we had d(v_i,S') ≤ d there would exist a point A_0 ∈ S' with d(v_i,A_0) ≤ d, and A_0 would be on the wrong side of ℋ, giving the above contradiction.
Recall that for an angle θ ∈ [0,π] and v ∈ S^n-1, the spherical cap with angle θ about v is the set {w ∈ S^n-1 | w^⊺ v ≥ cos(θ)}. Let S_cap(θ) denote the volume of the spherical cap in S^n-1 with angle θ (about any v, since the choice does not change the volume).
Lemma. Let θ ∈ [0,π]. Then in S^n-1,

(1/2)( sin(θ/2) )^{n-1} ≤ S_cap(θ).

Proof. A straightforward rewriting of <cit.>.
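As a quick numerical sanity check of this lower bound (our own illustration, not part of the paper's development), one can estimate S_cap(θ) by Monte Carlo, counting the fraction of uniform directions inside the cap and rescaling by vol(S^{n-1}):

```python
import math
import numpy as np

def cap_check(n, theta, samples=200_000, seed=0):
    """Compare (1/2) sin(theta/2)^(n-1) against a Monte Carlo estimate
    of S_cap(theta): fraction of uniform w in S^{n-1} with w.v >= cos(theta),
    multiplied by vol(S^{n-1})."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((samples, n))
    W /= np.linalg.norm(W, axis=1, keepdims=True)     # uniform on the sphere
    frac = np.mean(W[:, 0] >= math.cos(theta))        # take v = e_1
    vol_sphere = 2 * math.pi ** (n / 2) / math.gamma(n / 2)
    return 0.5 * math.sin(theta / 2) ** (n - 1), frac * vol_sphere

print(cap_check(4, math.pi / 3))   # (lower bound, estimated S_cap)
```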
Theorem. Let B_r denote the ball of radius r in ℝ^n. Let V = {v_1,...,v_m} and W = {w_1,...,w_k} be subsets of B_r. Let S = CH(V ∪ W) and S' = CH(V), and suppose neither is degenerate. Let ω be the sum of the curvatures of all the points w ∈ W. Then

S_cap( arcsin( d(S,S')/2r ) ) ≤ ω

and

d(S,S') ≤ √2 π r (2ω)^{1/(n-1)}.

The result above gives an upper bound on the distance of order (ω vol(S^{n-1}))^{1/(n-1)}, where ω is the total relative curvature removed from the set. In Figure <ref> we plot this function for n = 2, 3, 4, 5 and r = 1. Note that since d(S,S') ≤ 2r = 2 is a trivial worst case, the estimate is only useful for small ω.
Proof. Let d = d(S,S'). First we show how to obtain (<ref>) from (<ref>). From the lemma,

(1/2)( sin( arcsin(d/2r)/2 ) )^{n-1} ≤ S_cap( arcsin(d/2r) ) ≤ ω,

hence sin( arcsin(d/2r)/2 ) ≤ (2ω)^{1/(n-1)}. Finally we use the inequalities sin( arcsin(d/2r)/2 ) ≥ (4/(√2 π)) (arcsin(d/2r)/2) and (4/(√2 π)) arcsin(d/2r) ≥ (4/(√2 π)) (d/2r) to complete the proof of this step.
Next we prove (<ref>). First we show it suffices to consider the case k = 1. Suppose we have proved the theorem for k = 1. Let w_i be the point in S furthest away from CH({v_1,...,v_m}), which exists by Lemma <ref>. By assumption,

S_cap( arcsin( d(w_i,S')/2r ) ) ≤ ω',

where ω' is the curvature of w_i with respect to S'' := CH(w_i, v_1,...,v_m). Thus it is enough to show ω' ≤ ω. But every supporting hyperplane for some w_j with respect to S is either a supporting hyperplane for some v_ℓ or for w_i with respect to S''; on the other hand, any supporting hyperplane for some v_j with respect to S remains a supporting hyperplane for v_j with respect to S''. From this we conclude that ω' ≤ ω, as required.
It remains to show the case k = 1. Rename w_1 to W for clarity, so S = CH(W, v_1,...,v_m); note d = d(W,S'). Let A be the unique point in S' such that dist(A,W) = d (which exists by compactness of convex hulls). Let ℋ be the hyperplane at A with normal WA. Note that, exactly as in the proof of Lemma <ref>, ℋ is a supporting hyperplane for S'. Consider the cone C_0 consisting of vertex W and base S ∩ ℋ — in other words, the portion of S on the same side of the hyperplane as W. Let C_1 be the cone consisting of vertex W and base the ball of radius √((2r)^2 − d^2) centered at A in ℋ. We claim C_0 ⊂ C_1; it suffices to show S ∩ ℋ is contained in the ball around A. By assumption all points of S are in a ball of radius r, so in particular all points of S ∩ ℋ are within 2r of W. Moreover, the slant of C_1 has length 2r. So if any point of S ∩ ℋ were outside the ball around A, it would be at distance > 2r from W: a contradiction. Finally, since C_0 ⊂ C_1 and both cones have vertex W, every hyperplane supporting C_1 at W also supports C_0, so the curvature of W with respect to C_1 is at most the curvature of W with respect to C_0; near W the cone C_0 coincides with S, so the latter equals the curvature of W with respect to S, which is ω. But the curvature of W with respect to C_1 is precisely S_cap(arcsin(d/2r)), giving

S_cap( arcsin(d/2r) ) ≤ ω.

§ THE ALGORITHM
Algorithm <ref> is our basic approximate convex hull algorithm. In short, we generate many direction vectors and use these to compute curvature as described above. We then keep only the high curvature points. Suppose we run Algorithm <ref> on sets V and D as above. Let V' be all the vectors that were kept. We call CH(V') the inner hull. Now for each d ∈ D let v_d ∈ V be a vector that maximizes the dot product, and consider the collection of linear constraints d^⊺ x ≤ d^⊺ v_d. This determines a convex body we call the outer hull. Note the inner hull is contained in the actual convex hull of V, which is contained in the outer hull. Our algorithm also gives the constraints for the outer hull: in step 2 one simply needs to keep track of the value d^⊺ v_d (i.e., a_{v,d}) for each d ∈ D when computing the maxima. There is a possibility the constraints will not define a finite polytope if |D| is too small, but this is exceedingly unlikely for large |D|. A sketch of the basic algorithm follows.
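The following is a minimal numpy sketch of the basic algorithm as described above (the threshold name kappa_min and the helper names are ours, not the paper's): sample uniform directions, count how many each point maximizes (the empirical relative curvature K_D), keep the high-curvature points, and record the outer-hull constraints.

```python
import numpy as np

def approximate_hull(X, num_dirs=1000, kappa_min=0.005, seed=None):
    """X: (N, n) array of points. Returns the kept vertices, their
    empirical relative curvatures, and the outer-hull constraints
    (D, a) meaning d^T x <= a for each direction d."""
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    # Uniform directions on S^{n-1}: normalized Gaussian samples.
    D = rng.standard_normal((num_dirs, n))
    D /= np.linalg.norm(D, axis=1, keepdims=True)

    dots = X @ D.T                       # (N, num_dirs) dot products
    winners = np.argmax(dots, axis=0)    # maximizing point per direction
    counts = np.bincount(winners, minlength=X.shape[0])

    K_D = counts / num_dirs              # empirical relative curvature
    kept = np.flatnonzero(K_D >= kappa_min)

    a = dots[winners, np.arange(num_dirs)]   # offsets d^T v_d
    return X[kept], K_D[kept], (D, a)
```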
An even larger concern is that the outer hull contains a large number of constraints; we will attempt to remedy this later. We claim that both the inner and outer hull approach the actual convex hull as one increases the number of direction vectors. Thus we get an approximate convex hull in the vertex format and another approximate convex hull in the constraint format. The following notion of error will help make this precise. Under the same assumptions as the previous definition, let A be the inner hull, B the actual hull and C the outer hull. We define the inner error as sup_{x∈B} d(x,A) and the outer error as sup_{x∈C} d(x,B). In both cases these are simply the Hausdorff distance defined above.
In <cit.> we find an elegant method for producing direction vectors (called skewers). Not only does this speed up the production of direction vectors, but more importantly it speeds up the computation of the dot products in the algorithm. We are confident that choosing vectors uniformly from the sphere works, since it gives a precise measure of the curvature, as we saw above. However, alternative methods seem to work just as well experimentally (intuitively, they are "uniform enough"). In <cit.> they have tested various methods similar to <cit.> and have singled out what they found to be the best approach.
§.§ Sparse Approximation
For many applications the above algorithm finds too many extreme vectors in the inner hull, and for almost any application the outer hull has far too many hyperplanes (there will be one hyperplane for each direction vector). In this section we discuss how to deal with this problem, in particular how to reduce the following two ratios. Let V ⊂ ℝ^n be a finite collection of points. Suppose we have an algorithm A that outputs vertices of a convex hull A_CH that approximates CH(V). Then the vertex compression ratio of algorithm A is

(number of vertices in A_CH) / (number of vertices in CH(V)).

On the other hand, suppose A outputs hyperplanes of a convex hull A_CH that approximates CH(V). Then the hyperplane compression ratio of algorithm A is

(number of hyperplanes in A_CH) / (number of hyperplanes in CH(V)).

To reduce the vertex compression ratio of Algorithm <ref> we run Algorithm <ref> (a sketch appears at the end of this section): we simply find vectors that are clustered together and keep only one of them.
Lemma. Let V, V', V'', β be as in Algorithm <ref>; then d(CH(V'), CH(V'')) < β.
Proof. This is an immediate application of Lemma <ref>.
Next we show how to reduce the hyperplane compression ratio of our original algorithm. Let V, D be as above; we start by running our original algorithm followed by the vertex compression algorithm. Suppose when running our original algorithm that for each v ∈ V' we keep track of D_v := { d ∈ D | v^⊺ d ≥ w^⊺ d for all w ∈ V }; moreover, suppose when running the vertex compression algorithm that for each v ∈ V'' we keep track of E_v := { w ∈ V' | w was removed during the step involving v } ∪ {v} (i.e., E_v consists of all elements of V' that were clustered around v). For v ∈ V'' let F_v := ⋃_{w∈E_v} D_w. See Algorithm <ref> below. If desired, Algorithm <ref> works without running the vertex compression algorithm (i.e., if β = 0). If one does not run the vertex compression algorithm and finds the true convex hull of the projected F_v, then the result is equivalent to the outer hull found originally (this merely removes redundant hyperplanes). We end this section by noting that the hyperplane compression algorithm is a potential solution to the problem of finding endmembers (the vertices of the outputted hull are potential endmembers).
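The vertex compression step mentioned above can be sketched as a greedy clustering (the paper's Algorithm <ref> is not reproduced verbatim here; this is a minimal variant consistent with the description, with names of our own choosing). By the lemma above, the Hausdorff distance between the hulls of the input and output vertex sets is at most β.

```python
import numpy as np

def compress_vertices(V_kept, beta):
    """Greedily pick a representative and discard every other retained
    vertex within distance beta of it. Returns the representatives and
    the clusters E_v used by the hyperplane compression step."""
    remaining = list(range(len(V_kept)))
    reps, clusters = [], {}
    while remaining:
        i = remaining.pop(0)                 # representative vertex
        reps.append(i)
        close = [j for j in remaining
                 if np.linalg.norm(V_kept[j] - V_kept[i]) < beta]
        clusters[i] = [i] + close            # E_v: points clustered around v
        remaining = [j for j in remaining if j not in close]
    return V_kept[reps], clusters
```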
The endmember observation above holds even without a pure pixel in the data set. In particular, if the data is uniformly generated from some polytope and we wish to recover the polytope, this method can be used.
§ APPLICATION OF THE CURVATURE ESTIMATE TO THE ALGORITHM
We can use the consistency result, Theorem <ref>, to show convergence of Algorithm <ref>.
Theorem. Let V ⊂ ℝ^n and let D be uniformly sampled from S^n-1. Assume that K(v) ≥ ω > 0 for all extreme points of V. Then

CH_D(V) = CH(V)

with probability P ≥ 1−p, provided |D| ≥ log(ωp)/log(1−ω).
Proof. We have seen from the proof above that the probability of not finding a point with relative curvature ω is (1−ω)^{|D|}. Since there are at most 1/ω such points, we have by subadditivity of probabilities that the probability of missing one of them is ≤ (1/ω)(1−ω)^{|D|} (for an exact answer use inclusion–exclusion). Suppose this bound is at most p: (1/ω)(1−ω)^{|D|} ≤ p. This is trivially equivalent to |D| ≥ log(ωp)/log(1−ω), as required.
The function log(ωp)/log(1−ω) in the estimate above grows like 1/ω up to a logarithmic factor, as can be seen in Figure <ref>. Another consistency result is as follows.
Theorem. Let V ⊂ ℝ^n and let D be sampled uniformly from S^n-1. Let O_k and I_k be the outer and inner error, respectively, after k directions. Then {O_k}_{k∈ℕ} and {I_k}_{k∈ℕ} are non-increasing sequences, and with probability 1 they both converge to zero.
Proof. It is clear that {O_k} and {I_k} are non-increasing, since each new direction vector adds one more constraint (possibly lowering O_k) and may or may not find a new extreme vector (possibly lowering I_k). It is also clear from the above that I_k approaches 0: each extreme vector has nonzero curvature and hence a nonzero probability of being found, so for large enough k we have I_k = 0 with increasing probability. Finally, we can use Theorem <ref> to show that O_k converges to zero with probability 1. As before, we know that with probability 1 all extreme points will be found. The extreme points of the outer hulls are made up of V and some other vectors. For each extreme point v ∈ V, we claim the curvature of v with respect to the actual hull CH(V) approaches the curvature of v with respect to the outer hull as k→∞, with probability 1. To show this, let E be the set of all sequences (d_1, d_2, ...) ∈ (S^n-1)^∞ for which the claim fails; we wish to show the measure of E is zero. Let N_v ⊂ S^n-1 be the collection of direction vectors for which v is extremal, and let {d_1', d_2', ...} be the subsequence of the d_i that belong to N_v. Now if each point of N_v is a limit point of ⋃_n CH({d_1',...,d_n'}), the claim clearly follows. Therefore, for each sequence in E there exists a rational point q ∈ N_v and a rational r > 0 such that B_r(q) ∩ ⋃_n {d_1',...,d_n'} = ∅. Let E_{q,r} be the set of all sequences for which this holds. Since N_v ∩ B_r(q) has positive measure, by the definition of the product measure the measure of E_{q,r} is zero. Since E belongs to the union of all the E_{q,r}, the claim follows from countable subadditivity of measure. Now, since the curvatures of the extreme points of any polytope add up to vol(S^n-1), the claim implies that all extreme points of the outer hulls that are not in V must carry a vanishing proportion of the total curvature as k→∞. Theorem <ref> completes the proof.
It is easy to compute some controls on the convergence of the inner error; equivalently, this gives a worst-case calculation for choosing the number of direction vectors needed to achieve a desired error. This is a theoretical answer to a problem raised in <cit.>: how to choose the number of direction vectors for PPI algorithms.
It is worth mentioning the practical solutions from <cit.> for choosing the number of direction vectors. Essentially, they suggest computing the maximum for only a small block of direction vectors and then repeating this process until no new extreme vectors are found. This way there is no need for human input about the choice of how many direction vectors to use.
Remark (number of direction vectors needed for a given inner error; a simple, rough bound). We can use the above to find the number of direction vectors needed to achieve a particular inner error ϵ > 0. Let V be a set of points in a ball of radius r and let X be the true extreme points of CH(V). Suppose we run our algorithm with some set of uniformly chosen direction vectors and we keep all extreme points found. The error depends on the total amount of curvature of all the extreme points we have missed. Using <ref> we can ensure, with probability ≥ 1−p, that we have all points of relative curvature ≥ ω, for any 0 ≤ ω ≤ 1. Then the total missing curvature would be ≤ |X| ω vol(S^n-1). By <ref> this gives an error

≤ √2 π r (2|X| ω vol(S^n-1))^{1/(n-1)}.

We can set this error ≤ ϵ and solve for ω to get

ω ≤ (ϵ/(√2 π r))^{n-1} / (2|X| vol(S^n-1)).

Denote the right-hand side by C. From the result above, to achieve this ω we would require log(Cp)/log(1−C) direction vectors. To sum up, if the number of directions is more than log(Cp)/log(1−C), then with probability 1−p the error is bounded by ϵ. The growth of this quantity is comparable to vol(S^n-1)/(ϵ/(√2 π r))^{n-1}. This unfortunately gives results that are very large. For example, keeping p = 0.05, ϵ = 0.1, r = 1, |X| = 10000, and varying the dimension n = 3, 4, 5, 6, 7, we get roughly 10^10, 10^11, 10^13, 10^15, 10^17 respectively.
Remark (improvement on the previous remark). The previous remark does not reflect the inner error we expect in practice. For one thing, we do not claim the constants found above are in any way optimal; more importantly, if we remove many low curvature points we do not expect this to be equivalent to the worst case where all the curvature is concentrated in one point. This is especially true since we are taking our directions uniformly from the sphere. In many cases, if we have found all points with relative curvature ≥ ω, then removing all other points results in an error closer to √2 π r (2 ω vol(S^n-1))^{1/(n-1)} (the curvatures do not add up). This means we can set |X| = 1 in the calculation above. So if we consider, as before, p = 0.05, ϵ = 0.1, r = 1, and vary the dimension n = 3, 4, 5, 6, 7, we get roughly 10^5, 10^7, 10^9, 10^11, 10^13 respectively.
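These order-of-magnitude estimates can be reproduced directly. The following short sketch (function name ours) evaluates C and the resulting direction count for both the worst-case bound (|X| = 10000) and the more realistic estimate (|X| = 1):

```python
import math

def directions_needed(n, eps, r=1.0, p=0.05, n_extreme=1):
    """C = (eps/(sqrt(2)*pi*r))^(n-1) / (2*n_extreme*vol(S^{n-1})),
    required directions = log(C*p)/log(1-C), as in the remarks above."""
    vol_sphere = 2 * math.pi ** (n / 2) / math.gamma(n / 2)  # vol(S^{n-1})
    C = (eps / (math.sqrt(2) * math.pi * r)) ** (n - 1) / (2 * n_extreme * vol_sphere)
    return math.log(C * p) / math.log(1 - C)

# Reproduces the orders of magnitude quoted above for n = 3..7:
for n in range(3, 8):
    print(n, f"{directions_needed(n, 0.1, n_extreme=10000):.1e}",
             f"{directions_needed(n, 0.1, n_extreme=1):.1e}")
```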
In practice much data is such a mix, so we expect high performance in some regions and low performance in others.
Figure <ref> studies the accuracy of the algorithm as a function of the number of direction vectors. We generated 10 thousand 3-dimensional points and applied the algorithm with an increasing number of direction vectors, keeping every extreme point we found. The figure shows how the outer error (yellow) and the inner error (blue) decrease as the number of found extreme vectors increases. The points in the first two images were generated randomly from the cube and from the sphere. The points in the third were generated randomly from a simplex, after which we applied a fixed linear transformation. (Note that for the sphere the outer error is only computed to the first digit, due to the large number of computations involved.) Figure <ref>(d) shows a similar computation of the inner error using 1 million points in 3 dimensions; this time the error is plotted against the number of extreme vectors found.
§.§ Algorithm Applied to Helicopter Flight Test data set
In this section, we demonstrate the results of the vertex compression algorithm on a real dataset consisting of 23 million data points in dimension 5. We used 70000 direction vectors. In Figure <ref>(a) we compare the 1491 found points (red filled) with the 2495 points on the true convex hull. There is an error of around 300; for context, the width of the shape is around 66000, and the average distance from the true extreme points to the mean of the extreme points is 32000. The figure has been projected into 2 dimensions. After running the vertex compression algorithm we were left with 29 points, Figure <ref>(b). The error here is 6800; for context, recall the figures above.
http://arxiv.org/abs/1703.01350v2
{ "authors": [ "Robert Graham", "Adam M. Oberman" ], "categories": [ "cs.CG", "math.CO", "Primary: 52-04, Secondary: 52A41, 52A20, 65Y20, 53C45" ], "primary_category": "cs.CG", "published": "20170227222557", "title": "Approximate Convex Hulls: sketching the convex hull using curvature" }
To balance the load and to discourage free-riding in peer-to-peer (P2P) networks, many incentive mechanisms and policies have been proposed in recent years. Global peer ranking is one such mechanism. In this mechanism, peers are ranked based on a metric called contribution index. Contribution index is defined in such a manner that peers are motivated to share resources in the network. Fairness, in terms of the upload to download ratio at each peer, can be achieved by this method. However, the calculation of contribution index is not trivial. It is computed distributively and iteratively in the entire network and requires strict clock synchronization among the peers. A very small error in clock synchronization may lead to wrong results. Furthermore, iterative calculation requires a lot of message overhead and storage capacity, which makes its implementation more complex. In this paper, we propose a simple incentive mechanism based on the contributions of peers, which can balance the upload and download amounts of resources at each peer. It does not require iterative calculation and can therefore be implemented with less message overhead and storage capacity, without requiring strict clock synchronization. This approach is efficient, as there are very few rejections among cooperative peers. It can be implemented in a truly distributed fashion with O(N) time complexity per peer.
Index terms: P2P network, free-rider, DHT.
Simplified Biased Contribution Index (SBCI): A Mechanism to Make P2P Network Fair and Efficient for Resource Sharing
Sateesh Kumar Awasthi and Yatindra Nath Singh, Senior Member, IEEE. Both authors are with the Department of Electrical Engineering, Indian Institute of Technology, Kanpur, India, e-mail: sateesh@iitk.ac.in, ynsingh@iitk.ac.in.
§ INTRODUCTION
Peer-to-peer (P2P) networks gained significant popularity in the last decade and are now responsible for a large fraction of internet traffic <cit.>, <cit.>. The popularity of these networks is due to their inherent advantages over the traditional client-server model, e.g., the diversity of available data, scalability, robustness and cost effectiveness. The initial setup cost for these networks is very small because costly central servers are not needed. However, the lack of central control leads to the problem of unfairness in these networks, i.e., a large difference between the upload and download amounts at a peer. In such a situation, many peers free-ride and contribute very little or nothing, which results in slow downloads for other peers <cit.>. Therefore, designing and implementing an efficient incentive policy to motivate the peers to share their resources becomes important.
In recent years, many incentive policies have been proposed to maintain fairness in P2P networks <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>.
In these policies, peers' cooperative behavior in the network is evaluated and resources are given to them in proportion to their cooperation. In <cit.>, <cit.>, <cit.>, peers' cooperation is evaluated locally, i.e., peers cooperate only with those peers who have cooperated with them in the past. To start the process of sharing, a small amount of data is given to every peer. In such a scenario, free-riders can always find a new peer from whom to download their desired data. Also, cooperative peers are not allowed to download more than this small amount of data from a new peer, even though they may have uploaded a large amount of data to some other peers <cit.>.
In <cit.>, <cit.>, <cit.>, peers' cooperative behavior in the entire network is taken into consideration. For this purpose, in <cit.>, every peer keeps a record of each transaction that has happened in the entire network, which makes the implementation of the algorithm very complex. In comparison, <cit.>, <cit.> are simpler approaches. In these approaches, peers are ranked in the entire network. The rank of a peer is determined by its contribution index, which is estimated using two factors: the resources contributed by the peer to the network, and the contribution index of the peers with whom it is transacting. Estimation of the contribution index is performed by iterative methods and can be implemented in a distributed fashion. These approaches are able to balance the amounts of upload and download of resources in the network. However, there are some fundamental problems in their implementation.
First, in each iteration, index managers, i.e., peers who are managing the contribution index of other peers, need the current contribution index of peers from other peers. If the clocks of the peers are not synchronized, then the peers reporting the contribution index may report values from the previous iteration, which may lead to wrong estimates <cit.>. Second, updating the contribution index in each iteration requires a lot of message overhead. This matters even more when the number of iterations required for the algorithm to converge is large. If new transactions happen in the network, then the contribution index needs to be updated; even one transaction between any two peers can affect the contribution indices of all the peers in the network. And lastly, index managers need to keep the record of all past transactions of the peer for whom they are estimating the contribution index, which needs a large amount of storage capacity. Keeping all these points in view, a simple incentive policy is required, which can ensure the following:
* It should balance the upload and download amounts of resources at each peer.
* There must be minimal rejections among the cooperative peers.
* Cooperation of peers must be considered across the entire network.
* Low message overhead and storage capacity are desirable.
* It should be robust to peer dynamics.
* It should be implementable in a truly distributed system.
In this paper, we propose an incentive policy which considers peers' cooperation in the entire network. We assign a contribution index to each peer. It is a simplified form of the Biased Contribution Index (BCI) <cit.>, so we call it the Simplified Biased Contribution Index (SBCI). It depends on the cooperation of peers in sharing the resources and in balancing the load in the network. SBCI is updated at regular time intervals.
At any time, the SBCI is calculated using the previous SBCI and the cooperation made by the peers during the period between the previous update and the current update. In the estimation of SBCI, no iterative calculation is required; hence it automatically solves the first and second problems. Once the peers' cooperation is modeled in terms of SBCI, the history of peers' transactions need not be stored, so it also solves the last problem. Our simulation results show that SBCI can balance the upload and download amounts at each peer with minimal rejections among cooperative peers. Hence it meets all the above design considerations.
The rest of the paper is organized as follows. Section <ref> covers the summary of related work. The proposed incentive model is introduced in Section <ref>. Section <ref> covers the analysis of the algorithm. The transaction procedure for maximum efficiency is introduced in Section <ref>. Evaluation of the algorithm through simulation is discussed in Section <ref>. Finally, the paper is concluded in Section <ref>.
§ RELATED WORK
The presence of free-riding peers and their impact on fairness in P2P networks have been studied earlier as well <cit.>, <cit.>. Several approaches have been proposed by the research community <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>. BitTorrent <cit.>, the most popular file sharing system, used the tit-for-tat (TFT) approach to prevent free-riding. In this approach, a peer cooperates with other peers in the same proportion as they cooperated with it in the previous round. In each round, every peer updates the contributions of peers in the previous round. To improve the performance, many variants of TFT have been proposed. Garbacki et al. <cit.> proposed ATFT, in which bandwidth is used rather than content to decide the incentives. Dave et al. <cit.> proposed an auction based model to improve TFT. In this model, peers reward one another with proportional shares <cit.> of bandwidth. Sherman et al. <cit.> proposed FairTorrent, a deficit based distributed algorithm in which a peer uploads the next data block to the peer to whom it owes the most data, as measured by a deficit counter. In Give-to-Get <cit.>, a peer ranks all its neighbors based on the amount of data received from them in the last round and then unchokes the top three forwarders. All these mechanisms consider only the local and very short-term history of peers' cooperation.
The global history of peers' cooperation is considered in <cit.>, <cit.>, <cit.>, <cit.>. In multilevel tit-for-tat (ML-TFT) <cit.>, a peer ranks other peers based on the fraction of its download received from each of them; its time complexity becomes much larger for n-step ranking of peers. Feldman et al. <cit.> proposed a robust incentive technique which considers peers' cooperation in the entire network, but it is not trivial to implement in a large network: its calculation has a complexity of O(N^3). In the Global Contribution (GC) approach <cit.>, a peer's GC point is defined such that all peers are motivated to download from low contributing peers and upload to high contributing peers. The GC point is calculated using iterative methods such as Jacobi and Gauss–Seidel iterations. In another similar approach, the Biased Contribution Index (BCI) <cit.>, a second order iterative function is used to calculate the BCI of peers. BCI is defined as a monotonically increasing function of the biased upload to download ratio. Convergence of BCI <cit.> is faster than that of GC <cit.>.
Many authors have proposed approaches based on game theory <cit.>, <cit.>, <cit.>, <cit.>. Free-riding can be reduced by this approach. It is based on the assumption that the rules of the game are known to all the players; for a practically large network, this may not be true for all the peers. Reputation management systems <cit.>, <cit.>, <cit.>, <cit.> are another approach, in which peers' behavior is modeled as trust. Trust is estimated by each peer based on its interactions with the other peers and is then aggregated over the whole network. Trust is a more generalized notion and depends on the overall behavior of a peer in the network. In the proposed SBCI, we focus on the particular issues of fairness and free-riding.
§ PROPOSED INCENTIVE MODEL
§.§ Design Rules to Ensure a Fair and Efficient P2P Network
Let us frame some design rules to ensure the design considerations mentioned in Section <ref>.
1) If any peer only downloads resources from the network, then its SBCI must be zero.
2) If it only uploads to the network (at least once to a peer other than a free-rider), then its SBCI must be 1.
3) Uploading to free-riders should not increase the SBCI.
4) Uploading to any other peer should always increase the SBCI.
5) Downloading should always decrease the SBCI.
6) Peers must be motivated to upload to high contributing peers.
7) Peers must be motivated to download from low contributing peers.
§.§ Simplified Biased Contribution Index
Let there be N peers in a P2P network. Further, we consider time evolution in discrete instances. A time instance is represented by t_n, and if an event happened in the time interval (t_n-1, t_n], it is considered to happen at t_n. At any time t_n, let the share matrix in the entire network be 𝐒(t_n), whose ij-th element is the amount of resource shared by peer i with peer j at time t_n, i.e., in (t_n-1, t_n]. The bias ratio R_i(t_n) for peer i at time t_n can be defined in a similar way as in <cit.>:

R_i(t_n) = [𝐞_i.𝐒(t_n).𝐱(t_n)] / [𝐞_i.𝐒^tr(t_n).𝐱(t_n)].

Here 𝐱(t_n) is the SBCI vector of peers at time t_n, 𝐒^tr(t_n) is the transpose of the matrix 𝐒(t_n), and 𝐞_i is a row vector with its i-th entry as 1 and all others as zero. Now let us define the SBCI x_i(t_n) of peer i as a monotonically increasing function of the bias ratio at time t_n-1:

x_i(t_n) = R_i(t_n-1) / (1 + R_i(t_n-1)) = [𝐞_i.𝐒(t_n-1).𝐱(t_n-1)] / [𝐞_i.𝐒(t_n-1).𝐱(t_n-1) + 𝐞_i.𝐒^tr(t_n-1).𝐱(t_n-1)].

If a peer i does not upload anything to the network at time t_n-1, then 𝐞_i.𝐒(t_n-1).𝐱(t_n-1) = 0. But if it downloads something from the network at this time, then 𝐞_i.𝐒^tr(t_n-1).𝐱(t_n-1) ≠ 0 only if 𝐱(t_n-1) ≠ 0. Therefore, to make the denominator in (<ref>) nonzero for zero upload and nonzero download, let us replace 𝐞_i.𝐒^tr(t_n-1).𝐱(t_n-1) by α 𝐞_i.𝐒^tr(t_n-1).𝐱(t_n-1) + (1−α) 𝐞_i.𝐒^tr(t_n-1).𝐞. Here α ∈ (0,1) is a constant and 𝐞 is a column vector with each element equal to 1. Hence (<ref>) becomes

x_i(t_n) = [𝐞_i.𝐒(t_n-1).𝐱(t_n-1)] / [𝐞_i.𝐒(t_n-1).𝐱(t_n-1) + α 𝐞_i.𝐒^tr(t_n-1).𝐱(t_n-1) + (1−α) 𝐞_i.𝐒^tr(t_n-1).𝐞].

The SBCI in the above equation is estimated using only the transactions happening at time t_n-1. If we consider all the past transactions, then the SBCI can be modified as

x_i(t_n) = (1−β_i(t_n-1)) x_i(t_n-1) + β_i(t_n-1) [𝐞_i.𝐒(t_n-1).𝐱(t_n-1)] / [𝐞_i.𝐒(t_n-1).𝐱(t_n-1) + α 𝐞_i.𝐒^tr(t_n-1).𝐱(t_n-1) + (1−α) 𝐞_i.𝐒^tr(t_n-1).𝐞].

If peer i does not participate in any transaction at time t_n-1, then x_i(t_n) should equal x_i(t_n-1).
The parameter β_i(t_n-1) can be decided by the fraction of transactions happening at time t_n-1 at node i, and can be defined as

β_i(t_n-1) = 0, if A_u_i = 0;
β_i(t_n-1) = 𝐞_i.[𝐒(t_n-1) + 𝐒^tr(t_n-1)].𝐞 / 𝐞_i.[𝐒_comp(t_n-1) + 𝐒^tr_comp(t_n-1)].𝐞, otherwise.

Here A_u_i = 𝐞_i.𝐒(t_n-1).𝐱(t_n-1) + 𝐞_i.𝐒^tr(t_n-1).𝐞, and 𝐒_comp(t_n-1) is the complete share matrix, whose ij-th element is the total amount of resources shared by peer i with peer j up to time t_n-1. To start the process of sharing, the SBCI vector can be initialized as 𝐱(0) = (α/(1+α))𝐞; later we will see that this choice of initialization balances the upload and download amounts in the network.
§.§ Justification For the Design Rules
If any peer i does not upload anything and only downloads resources from the network at time t_n-1, then 𝐞_i.𝐒(t_n-1).𝐱(t_n-1) = 0 and 𝐞_i.𝐒^tr(t_n-1).𝐞 ≠ 0; hence, from (<ref>),

x_i(t_n) = (1 − β_i(t_n-1)) x_i(t_n-1).

Suppose it has not uploaded anything to the network up to time t_n and started downloading resources for the first time at time t_m; then, from (<ref>), β_i(t_m) = 1, hence

x_i(t_n) = (1−β_i(t_n-1))(1−β_i(t_n-2))...(1−β_i(t_m)) x_i(t_m) = 0.

Therefore, if any peer i only downloads from the network, then its SBCI will be zero.
At time t_n-1, if any peer i uploads only to free-riders, i.e., peers who only download without uploading anything to the network, then 𝐞_i.𝐒(t_n-1).𝐱(t_n-1) = 0; if it also does not download anything at time t_n-1, then 𝐞_i.𝐒^tr(t_n-1).𝐞 = 0. Therefore A_u_i = 0, hence from (<ref>), β_i(t_n-1) = 0, and from (<ref>),

x_i(t_n) = x_i(t_n-1).

Therefore, uploading to free-riders will not increase the SBCI.
If it does not upload anything in the network at this time, then 𝐞_𝐢.𝐒(𝐭_𝐧-1).𝐱(𝐭_𝐧-1) = 0, hence from(<ref>)x_i(t_n)=(1-β_i(t_n-1)) x_i(t_n-1) + β_i(t_n-1).0 =(1-β_i(t_n-1)) x_i(t_n-1)hence,x_i(t_n) <x_i(t_n-1) Therefore,download will always decrease the SBCI.It can be concluded from the above discussion that high contributions will lead to high SBCI. Now, observing directlythe (<ref>), if peers will upload the resources tohigh SBCI peers then, they will earn more SBCI. Therefore, peers will be motivated to upload the resources to high contributing peers.It can also be observed from(<ref>) that peers will lose less SBCI, if they will download from a low SBCI peer. Therefore, peers will be motivated to download from low contributing peers.Let us understand the SBCI and its computation through an example. Let there be five peers A, B, C, D and E in a P2P network as shown in Fig. <ref>. If α= 0.9, then initial SBCI of all the peers will be α/(1+α)=0.4737. At time t=0, let they share the resources as shown in figure, i.e., S_12(0)=100, S_13(0)=200, S_25(0)=100, S_32(0)=100, S_34(0)=200, S_41(0)=100, S_51(0)=200, S_54(0)=100 and all others are zero. Since, it is initial step, hence, for all i, β_i(0)=1. Using(<ref>), SBCI vector at time t=1, can be calculated as, 𝐱(1)=[0.4737, 0.3103, 0.5745, 0.2308, 0.7297]^t. Now, let peer 1 needs the data amount of 100 units and all the four peers responded to his query, then peer 1 will select the peer with least SBCI as an uploader, in this case, peer 4 has least SBCI. After this transaction, let SBCI vector is updated at t=2. For t=1,S_41(1)=100 and all others are zero. Hence for this time, β_1(1)=1/7,β_2(1)=β_3(1)=β_5(1)=0 and β_4(1)=1/5. Hence, updated SBCIvector will be, 𝐱(2)=[0.4060, 0.3103, 0.5745, 0.3846, 0.7297]^t. §.§ Justification For Fairness At any timet_n-1, if upload and download at each peer is same and SBCI vector, 𝐱(𝐭_𝐧-1)=α/(1+α)𝐞,thenSBCI vector, 𝐱(𝐭_𝐧)=𝐱(𝐭_𝐧-1). 
§.§ Justification For Fairness
Lemma 1. At any time t_n-1, if the upload and download amounts at each peer are the same and the SBCI vector is 𝐱(t_n-1) = (α/(1+α))𝐞, then the SBCI vector satisfies 𝐱(t_n) = 𝐱(t_n-1).
Proof. Let the upload and download amount for any peer i at time t_n-1 be T_i(t_n-1); then

𝐞_i.𝐒(t_n-1).𝐞 = 𝐞_i.𝐒^tr(t_n-1).𝐞 = T_i(t_n-1).

Since 𝐱(t_n-1) = (α/(1+α))𝐞 = a𝐞, with a = α/(1+α), we have from (<ref>)

x_i(t_n) = (1−β_i(t_n-1))a + β_i(t_n-1) [a𝐞_i.𝐒(t_n-1).𝐞] / [a𝐞_i.𝐒(t_n-1).𝐞 + αa𝐞_i.𝐒^tr(t_n-1).𝐞 + (1−α)𝐞_i.𝐒^tr(t_n-1).𝐞]
= (1−β_i(t_n-1))a + β_i(t_n-1) [aT_i(t_n-1)] / [T_i(t_n-1)(a + αa + (1−α))]
= (1−β_i(t_n-1))a + β_i(t_n-1) a/(a(1+α) + (1−α)).

Substituting a = α/(1+α) makes the denominator a(1+α)+(1−α) equal to 1, hence

x_i(t_n) = (1−β_i(t_n-1))a + β_i(t_n-1)a = a for all i.

Lemma 2. If the SBCI vector at two successive time instances t_n-1 and t_n is the same and lies along the vector 𝐞, then the upload and download amounts at time t_n-1 are the same at each peer.
Proof. Let 𝐱(t_n) = 𝐱(t_n-1) = a𝐞, where a is any constant; then from (<ref>)

a = (1−β_i(t_n-1))a + β_i(t_n-1) [a𝐞_i.𝐒(t_n-1).𝐞] / [a𝐞_i.𝐒(t_n-1).𝐞 + αa𝐞_i.𝐒^tr(t_n-1).𝐞 + (1−α)𝐞_i.𝐒^tr(t_n-1).𝐞].

Manipulating the above, we get

aβ_i(t_n-1) = aβ_i(t_n-1) [𝐞_i.𝐒(t_n-1).𝐞] / [a𝐞_i.𝐒(t_n-1).𝐞 + αa𝐞_i.𝐒^tr(t_n-1).𝐞 + (1−α)𝐞_i.𝐒^tr(t_n-1).𝐞].

For nonzero aβ_i(t_n-1),

a𝐞_i.𝐒(t_n-1).𝐞 + (aα+1−α)𝐞_i.𝐒^tr(t_n-1).𝐞 = 𝐞_i.𝐒(t_n-1).𝐞,

i.e.,

(aα+1−α)𝐞_i.𝐒^tr(t_n-1).𝐞 = (1−a)𝐞_i.𝐒(t_n-1).𝐞 for all i.

Since i = 1, 2, ..., N, this set of N equations can be written in matrix form as

(aα+1−α)𝐒^tr(t_n-1).𝐞 = (1−a)𝐒(t_n-1).𝐞.

Pre-multiplying both sides by 𝐞^tr,

(aα+1−α)𝐞^tr.𝐒^tr(t_n-1).𝐞 = (1−a)𝐞^tr.𝐒(t_n-1).𝐞.

For any matrix 𝐒(t_n-1), 𝐞^tr.𝐒(t_n-1).𝐞 is the sum of all of its elements, hence 𝐞^tr.𝐒(t_n-1).𝐞 = 𝐞^tr.𝐒^tr(t_n-1).𝐞 = T. Hence

(aα+1−α)T = (1−a)T,

and since T ≠ 0, a = α/(1+α). Substituting this value of a in (<ref>) gives

(α^2/(1+α) + 1−α) 𝐞_i.𝐒^tr(t_n-1).𝐞 = (1 − α/(1+α)) 𝐞_i.𝐒(t_n-1).𝐞,

i.e., (1/(1+α)) 𝐞_i.𝐒^tr(t_n-1).𝐞 = (1/(1+α)) 𝐞_i.𝐒(t_n-1).𝐞, and since α ∈ (0,1),

𝐞_i.𝐒^tr(t_n-1).𝐞 = 𝐞_i.𝐒(t_n-1).𝐞 for all i.

Hence, the upload and download amounts at time t_n-1 are the same at each peer i.
§ ANALYSIS OF ALGORITHM
§.§ Implementation in a Distributed System
The SBCI of each peer can be calculated distributively as shown in Algorithm <ref>. Each peer's SBCI can be calculated and managed by some other peer in the network. We call it the index manager, and the peer whose SBCI is being calculated by this peer is called its daughter peer. The index manager peer can be located using a distributed hash table (DHT) such as Chord <cit.>, CAN <cit.>, Pastry <cit.> or Tapestry <cit.>. Each peer i will send the values of resources uploaded to and downloaded from another peer j to the index manager of peer j. An index manager peer will collect the values of resources uploaded and downloaded by its daughter peer k to and from other peers. Each index manager will locate the index managers of the relevant peers j and receive their current SBCI values x_j(t_n-1). Now each index manager possesses everything needed to calculate the SBCI of its daughter peer using (<ref>). The β_k(t_n-1) can be calculated using (<ref>): if A_u_k = 0 it is zero; otherwise it is just the ratio of the current transaction amount to the total transaction amount made by peer k up to time t_n-1. The total amount of transactions can be updated by adding the current transaction amount to the previous total. A sketch of the index manager's update routine is given below.
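The sketch below illustrates the state an index manager keeps for one daughter peer and its per-update computation (class and method names are our own; lookup of other peers' index managers over the DHT is abstracted by the fetch_sbci callback):

```python
class IndexManager:
    """Per-daughter-peer state: only the current SBCI and the
    cumulative transaction amount need to be stored."""

    def __init__(self, alpha=0.9):
        self.alpha = alpha
        self.x = alpha / (1 + alpha)     # daughter peer's current SBCI
        self.total = 0.0                 # cumulative transaction amount

    def update(self, uploads, downloads, fetch_sbci):
        """uploads/downloads: dicts {peer_id: amount} reported for the
        current interval; fetch_sbci(j) returns x_j(t_{n-1}) obtained
        from peer j's own index manager."""
        up_x = sum(amt * fetch_sbci(j) for j, amt in uploads.items())
        down_x = sum(amt * fetch_sbci(j) for j, amt in downloads.items())
        down = sum(downloads.values())
        turnover = sum(uploads.values()) + down
        if up_x + down == 0:             # A_u = 0: SBCI unchanged
            self.total += turnover
            return self.x
        beta = turnover / (self.total + turnover)
        denom = up_x + self.alpha * down_x + (1 - self.alpha) * down
        frac = up_x / denom if denom > 0 else 0.0
        self.x = (1 - beta) * self.x + beta * frac
        self.total += turnover
        return self.x
```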
§.§ Message Overhead, Storage Capacity and Time Complexity
In this method, the SBCI is calculated directly, while in other similar approaches <cit.>, <cit.> iterative calculations are required. Therefore, the total number of messages required to calculate the SBCI in this method will be I_1 and I_2 times smaller than in <cit.> and <cit.> respectively, where I_1 and I_2 are the numbers of iterations required for the algorithms in <cit.> and <cit.> to converge.
In this algorithm, an index manager needs to store only two pieces of information about its daughter peer, i.e., the current SBCI and the total transaction amount up to t_n-1. In <cit.>, <cit.>, by contrast, the whole transaction history of the daughter peer (the transaction amounts, the IDs of the peers with whom it transacted, and whether each transaction was an upload or a download) must be stored. Therefore, the required amount of storage is reduced considerably. The time complexity of the algorithm for one update can be read directly from (<ref>): it is O(N) per peer, which is the same as in <cit.> and <cit.>.
§ TRANSACTION PROCEDURE FOR MAXIMUM EFFICIENCY
§.§ Simple Procedure For Peer Selection
All peers are rational and aware of the fact that if they share their resources with a peer having a high SBCI, then their own SBCI will be higher, and if they download from a low SBCI peer, then they will lose less SBCI. Therefore, the simple peer selection procedure for any peer i is to download from a low SBCI peer and to upload to a high SBCI peer, as far as possible, as shown in Algorithm <ref>.
§.§ College Admission and the Stability of Marriage Based Approach For Peer Selection
§.§.§ Preliminaries
College admission and the stability of marriage is a well-known problem, introduced by Gale and Shapley <cit.>. In its most popular variant, there are two disjoint sets of cardinality n, one representing the men and the other representing the women. Each person has a different order of preference for his or her marriage partner. There are several ways in which one-to-one pairing can be done, but a pairing is said to be stable if there is no pair both of whom prefer each other to their actual partners. Gale and Shapley <cit.> provided the solution and the algorithm for stable pairing, and also proved that a stable match always exists for this type of problem. In this algorithm, members of one group propose to their first preferences; members of the other group can reject a proposal or keep it on hold until they get a better option. If any member of the proposing group gets rejected, he or she tries the next preference. This process continues until each proposer is either accepted or has been rejected by all of his or her preferred partners. If the proposals are made by the men, then each man gets a better partner than in any other stable pairing; hence this is called the man-optimal stable matching, and the other way around for the woman-optimal stable matching.
§.§.§ Application in Peer Selection
We consider the situation where there are many uploaders and many downloaders for a resource. In order to earn a high SBCI, uploaders would like to upload the resource to high SBCI peers; thus they have certain preferences over downloaders. On the other hand, for downloaders both the resource and the SBCI matter, so a downloader may prefer a higher bandwidth uploader over a low SBCI uploader. Thus downloaders have a different preference order over uploaders. In this situation, the preference orders of all uploaders and downloaders can be collected at a certain node, which we call the resource manager node. This node can be found by hashing the resource identifier and finding the corresponding root node in the DHT network. On this node, the stable marriage algorithm can be used to pair the uploaders and downloaders. After pairing, a message is sent to each pair so that they can start the transaction. A sketch of the pairing step is given below.
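The following is a compact sketch of Gale–Shapley deferred acceptance as the resource manager might run it, with downloaders proposing (hence downloader-optimal, as used in the experiments later). Names are illustrative; equal numbers of uploaders and downloaders with complete preference lists are assumed.

```python
def stable_match(uploader_prefs, downloader_prefs):
    """uploader_prefs / downloader_prefs: dicts mapping an id to a
    list of ids of the other side, best first.
    Returns a dict {downloader: uploader}."""
    next_choice = {d: 0 for d in downloader_prefs}   # next index to try
    engaged_to = {}                                  # uploader -> downloader
    free = list(downloader_prefs)
    # rank[u][d] = position of d in uploader u's preference list
    rank = {u: {d: k for k, d in enumerate(prefs)}
            for u, prefs in uploader_prefs.items()}
    while free:
        d = free.pop()
        u = downloader_prefs[d][next_choice[d]]
        next_choice[d] += 1
        if u not in engaged_to:
            engaged_to[u] = d                        # u holds d's proposal
        elif rank[u][d] < rank[u][engaged_to[u]]:
            free.append(engaged_to[u])               # u trades up; old partner freed
            engaged_to[u] = d
        else:
            free.append(d)                           # d rejected, tries next
    return {d: u for u, d in engaged_to.items()}
```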
The detailed peer selection procedure in this situation is shown in Algorithm <ref>.
§ EXPERIMENTAL EVALUATION
As in <cit.> and <cit.>, we used NetLogo 5.2 <cit.> to evaluate the performance of our algorithm. NetLogo is a multiagent programmable modeling environment where we can model different agents and ask them to perform tasks in parallel and independently. It is written mostly in Scala, with some parts in Java.
§.§ Simulation Setup
We simulated a typical P2P network with parameters and distributions taken from real world measurements, as in <cit.>, <cit.>. In this network, peers can send a query for a resource. We assumed that ten percent of the peers respond to this query. After selecting the source peer according to the procedure described in Section <ref>, the resource is downloaded. We assumed that the amount of resource requested by a downloading peer varies randomly between 1 and 255 units. After downloading the resource, the SBCI of the peer is updated by an index manager using (<ref>). Any peer whose SBCI is less than the threshold value is rejected and cannot download resources from the network. We took the threshold value of SBCI to be α/(1+α). The number of nodes in the network is taken as 1000, which is a reasonable size. The number of nodes could be increased arbitrarily without affecting the results, because the evaluation metrics are normalized with respect to the number of nodes. The initial SBCI of all peers is taken as α/(1+α). We conducted the experiment for α = 0.9, 0.6 and 0.3, with the percentage of free-riders varied from 10% to 80%. The simulation was performed for three different peer distribution models: the Simple, Adaptive and Extreme Models.
In the Simple Model, free-riders vary from 10%–70%. These free-riders do not share anything at any point of time in the simulation.
In the Adaptive Model, free-riders vary from 20%–60%. Half of these free-riding peers do not share anything during the whole simulation; the remaining half behave as normal peers until midway through the simulation, and thereafter convert themselves to free-riders.
In the Extreme Model, at the beginning of the simulation 10% of peers are free-riders. After completion of every 12.5% of the total transactions, 10% more peers convert themselves to free-riders; thus, at the end of the simulation there are 80% free-riders. The simulation was run up to 100000 transactions.
§.§ Evaluation Metrics
We plotted the total upload and download amounts of each peer for all the models. To get a deeper picture, we also calculated the average absolute deviation (AAD) of the upload to download ratio from one in each model:

AAD = (1/N) Σ_{i=1}^{N} | 1 − 𝐞_i.𝐒_comp(t_n).𝐞 / 𝐞_i.𝐒^tr_comp(t_n).𝐞 |.

If the upload amount of each peer is the same as its download amount, then the value of AAD in the network will be zero; a larger value of AAD implies a larger difference between upload and download, and thus less fairness in the network. The network is said to be efficient if free-riders are not allowed to download anything, without affecting the transactions between non-free-rider peers. At any time, if the SBCI of a cooperative peer is less than the threshold, then it will also get rejected; this is not a desired state of the network. Therefore, we calculated the percentage of rejections among cooperative peers, i.e., cooperative peers rejecting the requests of other cooperative peers. For an efficient algorithm, the percentage of rejections must be minimal. A small sketch of the AAD computation is given below.
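For concreteness, the AAD metric can be computed from the cumulative share matrix as follows (a sketch of our own; peers with zero download are skipped to avoid division by zero):

```python
import numpy as np

def aad(S_comp):
    """Average absolute deviation of the upload/download ratio from 1,
    given the cumulative share matrix S_comp."""
    up = S_comp.sum(axis=1)      # e_i . S_comp . e
    down = S_comp.sum(axis=0)    # e_i . S_comp^tr . e
    mask = down > 0
    return np.abs(1 - up[mask] / down[mask]).mean()
```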
For comparison, we also simulated GC for its best case <cit.>, i.e., α = 0.8 and β = 0.2, with the parameters α and β taken to be the same as in <cit.>. For a fair comparison, we kept the threshold value for peer selection as (2−α(1+β))/(2+α(1−β)); we kept the maximum value of the threshold in both GC and SBCI. The rest of the settings for GC are the same as for SBCI.
§.§ Simulation Results of the Simple Procedure For Peer Selection
We conducted the simulation experiment for the simple peer selection procedure, as explained in Section <ref>, with the bandwidth of all peers assumed to be the same. For the Simple Model, the simulation results for SBCI are shown in Fig. <ref>; the corresponding AAD and percentage of rejections among cooperative peers are shown in Table <ref>. We can observe from the figure that in the initial transactions free-riders got some resources; after that their SBCI became zero, which disqualifies them from taking any resources from the network. For all other peers, the upload to download ratio is very close to the reference line; thus the algorithm is able to maintain fairness in the network. We can observe from Table <ref> that the percentage of rejections among the cooperative peers is higher for larger values of α, because for larger α the threshold value of SBCI is higher. Its impact on AAD, however, is not very significant in this model.
In the Adaptive Model, free-riders earn SBCI and thereafter use it to download the maximum amount of resources from the network. Simulation results for this model are shown in Fig. <ref>; the corresponding AAD and percentage of rejections among cooperative peers are shown in Table <ref>. We can observe from the figure that the algorithm performs better for higher α. For α = 0.9, even in the presence of a large number of free-riders, the algorithm is able to balance the upload and download amounts in the network. We can also observe from Table <ref> that for higher α the percentage of rejections among cooperative peers is higher, but the corresponding AAD is much lower; thus, the impact of α is clearly evident.
Finally, we conducted the simulation for SBCI in the Extreme Model. Results for the upload and download amounts at each peer are shown in Fig. <ref>; the corresponding AAD and percentage of rejections among cooperative peers are shown in Table <ref>. We can observe from the figure that for α = 0.9 the algorithm is able to balance the upload and download amounts in the network: at the cost of less than 2% rejections among the cooperative peers, the algorithm is able to maintain an AAD of 0.211228.
We also report the simulation results of GC for all peer distribution models in Fig. <ref>; the corresponding AAD and percentage of rejections among cooperative peers are reported in Table <ref>. We can see from the figure that GC can also balance the upload and download amounts at each peer. In the Adaptive and Extreme Models, GC can maintain better fairness than SBCI, but the percentage of rejections among cooperative peers is higher in GC for all the models; thus, it is less efficient than SBCI.
§.§ Simulation Results of the College Admission and Stability of Marriage Based Approach For Peer Selection
We also conducted the experiment for the college admission and stability of marriage based approach for peer selection. For simplicity, we considered only the Simple Model. The bandwidths of peers are assumed to be different, so that bandwidth can also be included as a criterion for peer selection. Selection of peers for downloading and uploading is done according to Algorithm <ref>, with the stable match between uploaders and downloaders made downloader optimal.
To observe the impact of heterogeneity, we simulated the Simple Model for two different types of bandwidth distributions, type 1 and type 2.
In type 1, half of the peers have bandwidth 10 units and the rest have 20 units. Simulation results for this type are shown in Fig. <ref>; the corresponding AAD and percentage of rejections among cooperative peers are shown in Table <ref>. We can see from the figure that the upload and download amounts increase at each peer compared to the simple procedure, because each peer who requests resources gets some option for downloading. Uploads and downloads at each peer are close to the reference line, and the corresponding AAD values are lower than for the simple procedure; thus, the algorithm is able to balance the upload and download amounts at each peer.
In type 2, 10% of the peers have bandwidth 10 units, the next 10% have bandwidth 20 units, the next 10% have bandwidth 30 units, and so on; in this way, the last 10% of peers have bandwidth 100 units. Simulation results for this type are shown in Fig. <ref>; the corresponding AAD and percentage of rejections among cooperative peers are shown in Table <ref>. We can observe from the figure that the upload and download amounts for most of the peers are far from the reference line, and the corresponding AAD values are also higher. Thus, the impact of heterogeneity is clearly evident. It also supports the argument that if we select the source peer according to bandwidth rather than SBCI, we lose fairness in the network.
§ CONCLUSION
In this work, we proposed a new algorithm to make a P2P network fair and efficient. The algorithm ranks the peers based on their simplified biased contribution index (SBCI), which can vary from 0 to 1. Estimation of the SBCI is based on two factors: the resources contributed by the peer, and the SBCI of the peers with whom it is transacting. We proposed design rules to make the network fair and efficient. With the help of mathematical justification, we have shown that our algorithm can fulfill all the design objectives and is able to maintain fairness in the network. The algorithm can be implemented in a truly distributed fashion; since no iterative calculation is needed, it can be implemented with less message overhead and storage capacity. We proposed two different peer selection approaches, namely the simple procedure and the college admission and stability of marriage based approach. Simulation results show that the algorithm is able to suppress free-riders in a highly free-riding environment, including dynamic free-riders, i.e., those who change their behavior dynamically. In future work, we would like to implement this mechanism in an unstructured P2P network.

References
[survey2_1] J. S. Otto, M. A. Sanchez, D. R. Choffnes, F. E. Bustamante, and G. Siganos, "On Blind Mice and the Elephant - Understanding the Network Impact of a Large Distributed System," Proc. of ACM SIGCOMM Conference 2011, pp. 110-121, 2011.
[survey2_2] Sandvine, Waterloo, Canada, "Sandvine June 2016 global Internet phenomena report," 2016 [Online]. Available: https://www.sandvine.com/trends/global-internet-phenomena/
[freeride1] M. Karakaya, I. Korpeoglu, and O. Ulusoy, "Free riding in peer-to-peer networks," IEEE Internet Comput., vol. 13, no. 2, pp. 92-98, March-April 2009.
[global] H. Nishida and T. Nguyen, "A Global Contribution Approach to Maintain Fairness in P2P Networks," IEEE Trans. on Parallel and Distributed Systems, vol. 21, no. 6, June 2010.
Singh,"Biased Contribution Index: A Simpler Mechanism to Maintain Fairness in Peer to Peer Networks"https://arxiv.org/pdf/1606.00717.pdf, June 2016.robust M. Feldman, K. Lai, I. Stoica, and J. Chuang, “Robust Incentive Techniques for Peer-to-Peer Networks," Proc. Fifth ACM Conf. Electronic Commerce, pp. 102-111, May 2004.titP. Garbacki, D. H. J.  Epema and M. Steen, "An Amortized Tit-For-Tat Protocol for Exchanging Bandwidth Instead of Content in P2P Networks,"First Int’l Conf. Self-Adaptive and Self-Organizing Systems, pp. 119-228, 2007.mtit Q. Lian, Y. Peng, M. Yang, Z. Zhang, Y. Dai, and X. Li, “Robust Incentives via Multi-Level Tit-for-Tat,” Concurrency and Computation: Practice & Experience, vol. 20, pp. 167-178, 2008.give J. J. D. Mol, J. A. Pouwelse, D. H. J. Epema, and H. J. Sips, “Give-to-Get: Free-Riding Resilient Video-on-Demand in P2P Systems,”Proc. Multimedia Computing and Networking, pp. 681804-1-681804-8,2008.sync V. S. Borkar, R. Makhijani and R. Sundaresan, "Asynchronous Gossip for Averaging and Spectral Ranking," IEEE Journal Of Selected Topics in Signal Processing, vol. 8, no. 4, pp. 703-716, August 2016. gnutela2002 S. Saroiu, P. K. Gummadi, and S. D. Gribble. "A Measurement Study of Peer-to-Peer File Sharing Systems."In Proceedings of Multimedia Computing and Networking 2002 (MMCN ’02), San Jose, CA, USA, January 2002. whitewashing M. Feldman, C. Papadimitriou, J. Chuang, and I. Stoica, "Free-Riding and Whitewashing in Peer-to-Peer Systems," Proc. ACM SIGCOMM Workshop Practice and Theory of Incentives in Networked Systems (PINS), pp. 228-236, August 2004.ccom2007 K. Eger and U. Killat, "Fair Resource Allocation in Peer-to-Peer Networks (Extended Version),"Computer Comm., vol. 30, no. 16, pp. 3046-3054, November 2007.tom9 H. Park and M. van der Schaar, "A Framework for Foresighted Resource Reciprocation in P2P Networks," IEEE Trans. Multimedia, vol. 11, no. 1, pp. 101-116, January 2009. online R. Izhak-Ratzin, H. Park and M. van der Schaar,"Online Learning in BitTorrent Systems," IEEE Tran. on Parallel and Distributed system,vol. 23, no. 12, pp. 2280-2288, December 2012.ton12 A. Sherman, J. Nieh, and C. Stein,"FairTorrent: A Deficit-Based Distributed Algorithm to Ensure Fairness in Peer-to-Peer Systems" IEEE/ACM Trans. Networking, vol. 20, no. 5, pp.1361-1374, October 2012. jstsp2010 J. Park and M. van der Schaar,"A Game Theoretic Analysis of Incentives in Content Production and Sharing Over Peer-to-Peer Networks," IEEE Journal of Selected Topics in Signal Processing, vol. 4, no. 4, August 2010.evalgame2010 Y. Chen, B. Wang, W. Sabrina Lin,Y. Wu, and K. J. Ray Liu,"Cooperative Peer-to-Peer Streaming: An Evolutionary Game-Theoretic Approach," IEEE Trans. on Circuit and Systems for Video Technology, vol. 20, no. 10 , pp. 1346-1357, October 2010.tongame2006 R. T. B. Ma, S. C. M. Lee, J. C. S. Lui and D. K. Y. Yau, "Incentive and Service Differentiation in P2P Networks: A Game Theoretic Approach,"IEEE/ACM Trans. Networking, vol. 14, no. 5, pp. 978-991, October 2006. torrent B. Cohen, "Incentives build robustness in BitTorrent," presented at the 1st Workshop Econ. Peer-to-Peer Syst., Jun. 2003. auction D. Levin, K. LaCurts, N. Spring, and B. Bhattacharjee, “BitTorrent is an auction: Analyzing and improving BitTorrent’s incentives,” in Proceedings of theACM SIGCOMM, pp. 243–254, August2008. prop_shareF. Wu and L. Zhang. Proportional response dynamics leads to market equilibrium. In ACM STOC, 2007. abs S. K. Awasthi and Y. N. 
Singh, "Absolute Trust:Algorithm for Aggregation of Trust in Peer to Peer Network", http://arxiv.org/abs/1601.01419eigen S. D. Kamvar, M. T. Schlosser and H. Garcia-Molina, "The eigentrust algorithm for reputation management in P2P networks,"Proc. of the 12th international conference on World Wide Web, ser. WWW ’03. New York, USA: ACM, pp. 640–651, 2003. sat S. K. Awasthi and Y. N. Singh, "Generalized Analysis of Convergence of Absolute Trust in Peer to Peer Networks," IEEE Communication Letters, vol. 20, no. 7,July 2016. sortAhmet Burak Can and Bharat Bhargava, "SORT: A Self-ORganizing Trust Model for Peer-to-Peer Systems," IEEE Trans. On Dependable and Secure Computing, vol. 10, no. 1, pp. 14-27, January/February 2013. chordI. Stoica, R.  Morris, D.  Nowell, D.  Karger, M.  Kaashoek, F.  Dabek and H.  Balakrishnan, "Chord: A Scalable Peer-to-Peer Lookup Protocol for Internet Applications," ACM SIGCOMM Computer Comm. Rev., vol. 31, no. 4, pp. 149-160, 2001.can S. Ratnasamy, P. Francis, M. Handley, R. Karp and S. Shenker, "A scalable content-addressable network,"Proc. of ACM SIGCOMM '01 conference on Applications, technologies, architectures, and protocols for computer communications, pp. 161-172, August 2001.pastry A. Rowstron and P. Druschel, "Pastry: Scalable, distributed object location and routing for large-scale peer-to-peer systems," Proc. of the Middleware'01 IFIP/ACM International Conference on Distributed Systems Platforms Heidelberg, pp. 329-350, 2001.tapestry B. Y. Zhao, L. Huang, J. Stribling, S. C. Rhea, A. D. Joseph and J.  D.  Kubiatowicz, "Tapestry: A resilient global-scale overlay for service deployment," IEEE Journal on Selected Areas in Communications, vol. 22, no. 1, pp.41–53, January 2004.stable D. Gale and L. S. Shapley, "College admissions and the stability of marriage," The American Mathematical Monthly, vol. 69, no. 1, pp.9-15, January 1962.sfa Xiaoyong Li, Feng Zhou and Xudong Yang, "Scalable Feedback Aggregating (SFA) Overlay for Large-Scale P2P Trust Management," IEEE Trans. Parallel and Distributed Systems, vol. 23, No. 10, pp.1944-1957, October 2012.eval Z. Liang and W. Shi, "Analysis of ratings on trust inference in open environments," Performance Evaluation, vol. 65, no. 2, pp. 99-128, 2008.netlogo Uri Wilensky, https://ccl.northwestern.edu/netlogo/, 2015.gnutela Matei Ripeanu, Adriana Iamnitchi, and Ian Foster, "Mapping the Gnutella Network: Properties of Large-Scale Peer-to-Peer Systems and Implications for System Design," IEEE Internet Computing Journal special issue on peer-to-peer networking, vol. 6(1), 50-57, January/February 2002.ton13 R. Cuevas, M. Kryczka, A. Cuevas, S. Kaune, C. Guerrero, and R. Rejaie. "Unveiling the Incentives for Content Publishing in Popular BitTorrent Portals," IEEE/ACM Trans. Networking, vol. 21, no. 5, pp.1421-1435, October 2013.matrix Denis Serre, Matrices Theory and Applications, Springer-Verlag New York, Inc., 2002.He was born in Uttarkashi, India. He is currently pursuing Ph.D in the Department of Electrical Engineering atIIT, Kanpur. His research interests include Peer-to-Peer Networks, Wireless Sensor Networks, Complex Networks, Social Networks, Solution of non-linear equations, Application of Linear Algebra and Game theory in Networks. He was born in Delhi, India. He was awarded Ph.D for his work on optical amplifier placement problem in all-optical broadcast networks in 1997 by IIT Delhi. In July 1997, he joined EE Department, IIT Kanpur. He was given AICTE young teacher award in 2003. 
Currently, he is working as a professor. He is a fellow of IETE, a senior member of IEEE and ICEIT, and a member of ISOC. His interests are in telecommunication networks, especially optical networks, switching systems, mobile communications, and distributed software system design. He has supervised 10 Ph.D. and more than 125 M.Tech. theses so far, has filed three patents on switch architectures, and has published many journal and conference research papers. He has also written lecture notes on digital switching, which are distributed as open access content through the content repository of IIT Kanpur. He has also been involved in open-source software development, having started the Brihaspati (brihaspati.sourceforge.net) initiative, an open-source learning management system; BrihaspatiSync, a live lecture delivery system over the Internet; and BGAS, a general accounting system for academic institutes.
http://arxiv.org/abs/1702.07992v1
{ "authors": [ "Sateesh Kumar Awasthi", "Yatindra Nath Singh" ], "categories": [ "cs.NI" ], "primary_category": "cs.NI", "published": "20170226062326", "title": "Simplified Biased Contribution Index (SBCI): A Mechanism to Make P2P Network Fair and Efficient for Resource Sharing" }
Department of Physics, Southern University of Science and Technology of China, Shenzhen 518055, China Department of Physics, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong HKUST Shenzhen Research Institute, Shenzhen 518057, China Magnon-photon coupling in antiferromagnets has many attractive features that do not exist in ferro- or ferrimagnets. We show quantum-mechanically that, in the absence of an external field, one of the two degenerate spin wave bands couples with photons while the other does not. The photon mode anticrosses with the coupled spin waves when their frequencies are close to each other. Similar to its ferromagnetic counterpart, the magnon-photon coupling strength in antiferromagnets is proportional to the square root of the number of spins, √(N). An external field removes the spin wave degeneracy and both spin wave bands couple to the photons, resulting in two anticrossings between the magnons and photons. Two transmission peaks are observed near the anticrossing frequency. The maximum damping that allows a clear discrimination of the two transmission peaks is proportional to √(N) and lies well above the damping of antiferromagnetic insulators. Therefore, strong magnon-photon coupling can be realized in antiferromagnets, and coherent information transfer between photons and magnons is possible.Magnon-photon coupling in antiferromagnets X. R. Wang December 30, 2023 ========================================== Information transfer between different information carriers is an important topic in information science and technology. This transfer is possible when strong coupling exists among different information carriers. Strong coupling has already been realized between photons and various excitations of condensed matter, including electrons, phonons <cit.>, plasmons <cit.>, superconducting qubits <cit.>, excitons in a quantum well <cit.>, and magnons <cit.>. Among all of these excitations, magnons, which are excitations of the magnetization of a magnet, are promising information carriers in spintronics because of their low energy consumption, long coherence distance/time, nanometer-scale wavelength, and useful information processing frequencies ranging from gigahertz (GHz) to terahertz (THz). Furthermore, magnons can also be a control knob of magnetization dynamics <cit.>, and the magnon bands of a magnet can be well controlled by either a magnetic field or an electric current. The electric field 𝐄 and magnetic inductance 𝐁 in a microcavity of volume V can be sufficiently strong even with only one or a few photons of frequency ν (|𝐄|, |𝐁| ∝√(hν/V)). Therefore, the coupling between microcavity photons and the magnons of nanomagnets has received particular attention in recent years. Moreover, similar to cavity quantum electrodynamics <cit.>, which deals with the coupling between photons and atoms in a cavity and provides a useful platform for studying quantum phenomena and for various applications in micro lasers and photonic bandgap structures, cavity magnonics is also a promising arena for investigating magnons at the quantum level and for manipulating information transfer between a single photon and a single magnon. The theoretical demonstration of a possible coupling of a ferro-/ferrimagnet to light was provided in 2013 <cit.>. The coupling strength is proportional to the square root of the number of spins √(N), and the coupling energy could be as large as ∼100 μeV in a cavity of ∼1 mm with a resonance frequency of ∼200 GHz.
The prediction was experimentally confirmed by placing a yttrium iron garnet (YIG) particle in a microwave cavity of high quality factor <cit.>. Many applications based on these results have been proposed, including the generation and characterization of squeezed states through the interaction between magnons and superconducting qubits via microwave cavity photons <cit.> and coherent information transfer between magnons and photons <cit.>. The information can be transmitted and read out electrically in the hybrid architecture under a strong magnon-photon coupling <cit.>.

Antiferromagnets (AFMs) have many useful properties in comparison with ferromagnetic materials, such as better stability against external field perturbations and negligible cross-talk with neighboring AFM elements because of the absence of stray fields. The AFM dynamics is typically of the order of THz, much faster than the order of GHz for ferromagnets. Because of these superb properties, various aspects of antiferromagnetic spintronics have attracted significant interest in the last few years, including domain wall motion, skyrmions, magnetoresistance, magnetic switching, spin pumping, spin current transport, and so on <cit.>. However, only a few works based on classical electrodynamics have been reported on the magnon-photon coupling <cit.> in AFMs so far. In order to obtain a better understanding of the magnon-photon coupling in AFMs, we study the issue at the quantum level. In this letter, we demonstrate quantum mechanically the existence of magnon-polaritons in an AFM and show that there exist a dark mode and a bright mode in the strong coupling regime. Antiferromagnetic insulators with low damping are promising candidates to realize strong magnon-photon coupling. We consider a two-sublattice antiferromagnet whose spins on the sublattices (a and b) align in opposite directions along the ±z-axis, as shown in Fig. <ref>. The Hamiltonian of the AFM coupled with light through its magnetic field isH=H_AFM + H_ph + H_int,H_AFM=J ∑_l,δ ( 𝐒_l^a ·𝐒_l+δ^b + 𝐒_l^b ·𝐒_l+δ^a ) -∑_l (𝐇_0 + 𝐇_a) ·𝐒_l^a - ∑_l (𝐇_0 - 𝐇_a) ·𝐒_l+δ^bH_ph=1/2∫ ( ϵ_0 𝐄^2 + 1/μ_0𝐁^2) dxdydzH_int=-∑_l,α=a,b𝐒_l^α·𝐇_f, where H_AFM, H_ph, H_int are respectively the Hamiltonians of the AFM, the photons, and their interaction. J (>0) is the exchange constant, 𝐒_l^a and 𝐒_l^b are the spins on sites l of sublattices a and b, respectively. δ denotes the displacement between two nearest spins. 𝐇_0 is the external magnetic field and 𝐇_a is the anisotropy field. 𝐄 and 𝐁 are the electric field and magnetic inductance of the electromagnetic (EM) wave and 𝐇_f is the corresponding magnetic field; ϵ_0 and μ_0 are the vacuum permittivity and susceptibility, respectively. Using the Holstein-Primakoff transformation <cit.>, H_AFM in momentum space can be written asH_AFM =H_ex∑_q[ γ_q (a_q^† b_q^† + a_q b_q) + (a_q^† a_q +b_q^† b_q)] +∑_q [(H_a+H_0) a_q^† a_q+(H_a- H_0)b_q^† b_q],where H_ex = 2JSz, and z and γ_q are respectively the coordination number and the structure factor of the lattice. The EM wave can be quantized through the standard procedure, H_ph = ħ∑_q ω_q( c_q^† c_q + 1/2 ), and the interaction term is H_int = ħ∑_q g_c( c_qa_q + c_q^† a_q^† + c_q b_q^† + c_q^† b_q) for a circularly polarized wave, where g_c = √(μ_0 ω_q S N/ 2 ħ V), and ħ, N, V, and ω_q are respectively the reduced Planck constant, the number of spins on each sublattice, the volume of the cavity, and the photon frequency. The photon dispersion relation is linear, ω_q=c|𝐪|, where c is the speed of light.
a_q^†, a_q, b_q^†, b_q and c_q^†, c_q are the creation and annihilation operators of magnons and photons, respectively, and they satisfy bosonic commutation relations. The Hamiltonian (<ref>) does not conserve the magnon number and can be diagonalized by the Bogoliubov transformation, a_q = u_q α_q + v_q β_q^†, b_q = u_q β_q + v_q α_q^†, where u_q=√((Δ_q -1)/2), v_q =√((Δ_q +1)/2), and Δ_q = 1/√(1-(H_exγ_q/(H_ex+H_a))^2). In terms of the boson operators α_q, α_q^†, β_q, β_q^†, H_AFM readsH_AFM = ∑_q ħω_q^- α_q^†α_q+ ħω_q^+β_q^†β_q,whereω_q^± = ±γ H + γ√( H^2_sp + H_ex^2 (1-γ_q^2))is the magnon dispersion relation of the AFM. γ is the gyromagnetic ratio and H_sp=√( H_a(H_a + 2H_ex)) is the spin-flop transition field. Under the transformation of Eq. (<ref>), the interaction Hamiltonian isH_int = ħ∑_q g_c(u_q+v_q) (c_q α_q + c_q^†α_q^† + c_qβ_q^†+ c_q^†β_q). Because the slope of the photon dispersion relation is much steeper than that of the magnon, the photon can only interact strongly with the magnons around the Gamma point (q=0). For simplicity, we set q=0 and the sum in H_AFM is removed. To obtain the eigenmodes of the coupled system, we define Ψ = (α_q, β_q^†, c_q^†)^† and write the Hamiltonian in the matrix form H=ħΨ^†𝐌Ψ with 𝐌= ( [ ω^- 0 λ/2; 0 ω^+ λ/2; λ/2 λ/2 ω_c ] ), where λ=2g_c(u+v)=2g_c(u_q=0 + v_q=0), ω^± = ω_q=0^±, and ω_c is the photon frequency. The eigen-equation of 𝐌 reads4 ω^3 - 4 (ω^+ + ω^- + ω_c) ω^2 + 2 (-λ^2 + 2ω^+ ω^- + 2ω^+ ω_c + 2ω^- ω_c)ω + λ^2 (ω^+ + ω^-) - 4ω^+ ω^- ω_c = 0. In the absence of an external field, this cubic equation has the analytical solutionsω_1,2= 1/2 [ ω_r + ω_c ±√( (ω_r - ω_c)^2 + 2λ^2) ], ω_3 =ω_r=γ H_sp.The typical dispersion relation is shown in Fig. <ref>a. Obviously, one magnon band is left unchanged (ω_3, red line), while the other band couples with the photon mode and the two anticross (ω_1 and ω_2, blue and yellow lines). Therefore, one of the degenerate magnon bands at H=0 is a dark mode that does not interact with the photons, while the other is a bright mode that does. For very small and very large wavevectors q, the linear dispersion stems mainly from the photons (dashed line). Only near the wavevector q = ω_r/c, where the photon frequency equals the magnon frequency, does the anticrossing feature become pronounced. When the external field is non-zero, the double degeneracy of the magnon modes is removed, with an energy split proportional to 2H. Both magnon bands are coupled with the cavity photon, but the two anticrossings appear at two different wavevectors, determined by q = ω^±/c, as shown in Fig. <ref>b. On the other hand, for a fixed photon frequency ω_c, strong coupling occurs by adjusting the external field H so that ω^± = ω_c. Depending on the magnitude of the photon frequency, strong coupling can occur with either the ascending band ω^+ or the descending band ω^-, as shown in Fig. <ref>c and <ref>d, respectively. Furthermore, to achieve a reliable information transfer between the magnons and photons, it is important to know the coupling strength between them. According to Eq. (<ref>), the frequency split of the two anticrossing modes at resonance is Δω = √(2)λ = 2√(2)g_c(u+v), which is proportional to the coupling strength g_c(u+v). Thus we will express the coupling strength by Δω below. The coupling strength as a function of the number of spins N is shown in Fig. <ref>e. The coupling strength increases linearly with the square root of N. For N=2.0 × 10^16 and H=0.1H_sp, the coupling is 11.3 μeV.
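As a numerical cross-check of the mode structure, the matrix 𝐌 can be diagonalized directly. The following Python sketch uses illustrative numbers (all frequencies in units of γH_sp and an arbitrary coupling λ), not the physical parameters quoted above; at zero field and resonance it reproduces the uncoupled (dark) mode at ω_r and the bright-mode splitting Δω = √(2)λ.

import numpy as np

H, lam = 0.0, 0.05                     # field and coupling, units of gamma*H_sp
w_minus, w_plus = 1.0 - H, 1.0 + H     # magnon modes omega^-/omega^+ at q = 0
w_c = 1.0                              # cavity tuned to resonance
M = np.array([[w_minus, 0.0,     lam / 2],
              [0.0,     w_plus,  lam / 2],
              [lam / 2, lam / 2, w_c]])
freqs = np.sort(np.linalg.eigvalsh(M))  # M is real symmetric
print(freqs)                    # middle eigenvalue stays at w_r (dark mode)
print(freqs[2] - freqs[0])      # sqrt(2)*lam = 0.0707... (bright-mode split)

Setting H to a non-zero value in the same sketch lifts the degeneracy of w_minus and w_plus and reproduces the two separate anticrossings discussed above.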
Figure <ref>f shows the field dependence of the coupling strength. The coupling strength first increases sharply with the field and then approaches a constant value. The transmission of an incident EM wave is often measured in experiments. As argued in previous publications <cit.>, the transmission can be viewed as a scattering process, which is well described by the Green function of the magnet-light system. Suppose the eigenvectors of the eigenvalues ω_1,2,3 are |1>, |2>, |3>, respectively. Then the Green function in the diagonal basis isG=∑_k=1,2,3|k> <k | /(ω -ω_k +iϵ),where ϵ is an arbitrarily small positive number. The transmission amplitude is proportional to the negative imaginary part of the Green function, i.e., 𝐓(ω) ∝ -Im(𝐆(ω)). The transmission of an incident wave |φ_0> (an eigenmode of c_q^† c_q) is T = <φ_0| 𝐓 |φ_0>. Figure <ref>a shows the transmission near the photon frequency for H=0.15H_sp and N=1.56 × 10^7. Two transmission peaks are centered at the calculated eigenfrequencies (dashed lines), which demonstrates the strong magnon-photon coupling. The δ-function-like transmission peaks are due to the absence of damping. In a realistic case, the damping broadens the peaks into Lorentzian curves. If the damping is large enough, the two Lorentzian peaks merge into a single peak and the coupled modes can no longer be identified. To quantitatively see the influence of damping on the transmission spectrum, we first replace ω_r by ω_r - iαω_r in the matrix 𝐌, where α is the strength of the damping; we then calculate the complex eigenvalues and eigenvectors of 𝐌 and use them to compute the imaginary part of the Green function (the transmission amplitude). Figures <ref>b-e show the frequency dependence of the transmission for α increasing from 0.001 to 0.02. Indeed, the peak width increases and the peak height decreases with increasing α. For the parameters used in our calculations, the two peaks become indistinguishable for damping larger than 0.02. As the number of spins N increases, the coupling strength between magnons and photons increases, and the magnon-polariton becomes more robust against the intrinsic damping of the magnons. Figure <ref>f shows the maximum damping α_m that allows a clear identification of the two coupled modes as a function of √(N) for H=0.15H_sp, which verifies this argument. Quantitatively, the linewidth of the absorption curve should be c_0 αω_c, so the maximum damping follows from c_0 α_m ω_c = Δω, i.e., α_m = Δω/(c_0 ω_c). The numerical data are perfectly described by c_0=√(2), as shown by the orange line in Fig. <ref>f. Our results suggest that the key to realizing strong magnon-photon coupling in AFMs is to use low-damping materials. The intrinsic damping of an antiferromagnetic metal is of the order of 0.5 according to first-principles calculations, which will be published elsewhere. Hence antiferromagnetic metals are not favorable for realizing strong magnon-photon coupling. The damping of antiferromagnetic insulators such as NiO can be as low as 2.1 × 10^-4 <cit.>, comparable to that of YIG. Thus, strong coupling can be realized in low-damping antiferromagnetic insulators according to our results. In terms of the detection of the coupling signal, one can either measure the transmission spectrum or use an electric detection method to measure the voltage signal of a hybridized structure.
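The damped transmission spectrum described above can likewise be sketched numerically. One caveat: with ω_r → ω_r - iαω_r the matrix 𝐌 is no longer Hermitian, so strictly the Green function involves distinct left and right eigenvectors; the sketch below uses right eigenvectors only, a simplifying assumption adequate for small α. The parameter values are again illustrative.

import numpy as np

alpha, lam, w_c = 0.005, 0.05, 1.0   # damping, coupling, cavity (units of gamma*H_sp)
w_r = 1.0 * (1 - 1j * alpha)         # damped magnon frequency w_r -> w_r(1 - i*alpha)
M = np.array([[w_r,     0.0,     lam / 2],
              [0.0,     w_r,     lam / 2],
              [lam / 2, lam / 2, w_c]], dtype=complex)
evals, evecs = np.linalg.eig(M)      # complex eigenvalues and eigenvectors
phi0 = np.array([0.0, 0.0, 1.0])     # incident photon state |phi_0>
omega = np.linspace(0.9, 1.1, 2001)
G = np.zeros(omega.shape, dtype=complex)
for wk, vk in zip(evals, evecs.T):   # columns of evecs are the eigenvectors
    G += abs(phi0 @ vk) ** 2 / (omega - wk)  # damping replaces the +i*epsilon
T = -G.imag                          # two Lorentzians that merge as alpha grows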
For the ferromagnetic case, the coupling strength between magnons and microwave photons has been measured through the electrical detection of spin pumping from the ferromagnetic layer <cit.>. It was recently reported that spin pumping also exists at the AFM/normal-metal interface <cit.>; in fact, an AFM layer may even enhance the spin pumping. Thus, electrical detection of the coupling signal in AFMs is also possible in hybridized structures. Furthermore, magnon modes in an AFM have already been experimentally excited using sub-THz technology <cit.>.

In conclusion, we have quantum-mechanically investigated the magnon-photon coupling in an antiferromagnet. The coupling strength is proportional to the square root of the number of spins and can be of the order of several μeV to tens of μeV, which could be observed in low-damping AFM insulators. In the absence of an external field, only one magnon band couples with the cavity photon, anticrossing with it near the cavity frequency, while the other does not. External fields remove the double degeneracy of the magnon bands, and both bands couple to the cavity photon, resulting in two anticrossings. HYY would like to thank Ke Xia, Zhe Yuan, Grigoryan Vahram and Meng Xiao for helpful discussions. XRW acknowledges the support from the National Natural Science Foundation of China (Grant No. 11374249) and Hong Kong RGC (Grant Nos. 163011151 and 16301816).

Tolpygo1950 K. B. Tolpygo, Zh. Eksp. Teor. Fiz. 20, 497 (1950).
Kun1951 K. Huang, Nature 167, 779 (1951).
Ritchie1957 R. H. Ritchie, Phys. Rev. 106, 874 (1957).
Barnes2003 W. L. Barnes, A. Dereux, and T. W. Ebbesen, Nature 424, 824 (2003).
Berini2011 P. Berini and I. De Leon, Nat. Photon. 6, 16 (2011).
Wallraff2004 A. Wallraff, D. I. Schuster, A. Blais, L. Frunzio, R.-S. Huang, J. Majer, S. Kumar, S. M. Girvin, and R. J. Schoelkopf, Nature 431, 162 (2004).
Dufferwiel2015 S. Dufferwiel, S. Schwarz, F. Withers, A. A. P. Trichet, F. Li, M. Sich, O. Del Pozo-Zamudio, C. Clark, A. Nalitov, D. D. Solnyshkov, G. Malpuech, K. S. Novoselov, J. M. Smith, M. S. Skolnick, D. N. Krizhanovskii, and A. I. Tartakovskii, Nat. Commun. 6, 8579 (2015).
Soykal2010 Ö. O. Soykal and M. E. Flatté, Phys. Rev. Lett. 104, 077202 (2010).
Huebl2013 H. Huebl, C. W. Zollitsch, J. Lotze, F. Hocke, M. Greifenstein, A. Marx, R. Gross, and S. T. B. Goennenwein, Phys. Rev. Lett. 111, 127003 (2013).
Cao2015 Y. Cao, P. Yan, H. Huebl, S. T. B. Goennenwein, and G. E. W. Bauer, Phys. Rev. B 91, 094423 (2015).
yanpeng2011 P. Yan, X. S. Wang, and X. R. Wang, Phys. Rev. Lett. 107, 177207 (2011).
xiansi2012 X. S. Wang, P. Yan, Y. H. Shen, G. E. W. Bauer, and X. R. Wang, Phys. Rev. Lett. 109, 167209 (2012).
hubin2013 B. Hu and X. R. Wang, Phys. Rev. Lett. 111, 027205 (2013).
Walther2006 H. Walther, B. T. H. Varcoe, B. Englert, and T. Becker, Rep. Prog. Phys. 69, 1325 (2006).
Tabuchi2014 Y. Tabuchi, S. Ishino, T. Ishikawa, R. Yamazaki, K. Usami, and Y. Nakamura, Phys. Rev. Lett. 113, 083603 (2014).
Zhang2014 X. Zhang, C.-L. Zou, L. Jiang, and H. X. Tang, Phys. Rev. Lett. 113, 156401 (2014).
Bai2015 L. Bai, M. Harder, Y. P. Chen, X. Fan, J. Q. Xiao, and C.-M. Hu, Phys. Rev. Lett. 114, 227201 (2015).
Flaig2016 H. Maier-Flaig, M. Harder, R. Gross, H. Huebl, and S. T. B. Goennenwein, arXiv:1601.05681v1.
Jungwirth2016 T. Jungwirth, X. Marti, P. Wadley, and J. Wunderlich, Nat. Nanotech. 11, 231 (2016).
Manohar1972 C. Manohar and G. Venkataraman, Phys. Rev. B 5, 1993 (1972).
Bose1975 S. M. Bose, E-Ni. Foo, and M. A. Zuniga, Phys. Rev. B 12, 3855 (1975).
Holstein1940 T. Holstein and H. Primakoff, Phys. Rev. 58, 1098 (1940).
Harder2016 M. Harder, L. Bai, C. Match, J. Sirker, and C.-M. Hu, arXiv:1601.06049v2.
Kampfrath2011 T. Kampfrath, A. Sell, G. Klatt, A. Pashkin, S. Mährlein, T. Dekorsy, M. Wolf, M. Fiebig, A. Leitenstorfer, and R. Huber, Nat. Photon. 5, 31 (2011).
Cheng2014 R. Cheng, J. Xiao, Q. Liu, and A. Brataas, Phys. Rev. Lett. 113, 057601 (2014).
Caspers2016 C. Caspers, V. P. Gandhi, A. Magrez, E. de Rijk, and Jean-Philippe Ansermet, Appl. Phys. Lett. 108, 241109 (2016).
http://arxiv.org/abs/1702.07977v1
{ "authors": [ "H. Y. Yuan", "X. R. Wang" ], "categories": [ "cond-mat.mes-hall", "cond-mat.mtrl-sci" ], "primary_category": "cond-mat.mes-hall", "published": "20170226023912", "title": "Magnon-photon coupling in antiferromagnets" }
[cor1]Corresponding Author. Address: Waltherstr. 23, 81369 München, Germany; Email: christian.wachinger@med.uni-muenchen.de ^aDepartment of Child and Adolescent Psychiatry, Psychosomatic and Psychotherapy, Ludwig-Maximilian-University, Munich, Germany ^bAthinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA ^cComputer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, USA ^dSAP SE, Berlin, Germany

We introduce DeepNAT, a 3D deep convolutional neural network for the automatic segmentation of NeuroAnaTomy in T1-weighted magnetic resonance images. DeepNAT is an end-to-end learning-based approach to brain segmentation that jointly learns an abstract feature representation and a multi-class classification. We propose a 3D patch-based approach, where we predict not only the center voxel of the patch but also its neighbors, which is formulated as multi-task learning. To address the class imbalance problem, we arrange two networks hierarchically, where the first one separates foreground from background, and the second one identifies 25 brain structures on the foreground. Since patches lack spatial context, we augment them with coordinates. To this end, we introduce a novel intrinsic parameterization of the brain volume, formed by eigenfunctions of the Laplace-Beltrami operator. As network architecture, we use three convolutional layers with pooling, batch normalization, and non-linearities, followed by fully connected layers with dropout. The final segmentation is inferred from the probabilistic output of the network with a 3D fully connected conditional random field, which ensures label agreement between close voxels. The roughly 2.7 million parameters in the network are learned with stochastic gradient descent. Our results show that DeepNAT compares favorably to state-of-the-art methods. Finally, the purely learning-based method may have a high potential for the adaptation to young, old, or diseased brains by fine-tuning the pre-trained network with a small training sample on the target application, where the availability of larger datasets with manual annotations may boost the overall segmentation accuracy in the future.

Brain segmentation, deep learning, convolutional neural networks, multi-task learning, conditional random field

§ INTRODUCTION
The accurate segmentation of neuroanatomy forms the basis for volume, thickness, and shape measurements from magnetic resonance imaging (MRI). Such quantitative measurements are widely studied in neuroscience to track structural brain changes associated with aging and disease. Additionally, they provide a vast phenotypic characterization of an individual and can serve as endophenotypes for disease. Since the manual segmentation of brain MRI scans is time consuming, computational tools have been developed to automatically reconstruct neuroanatomy, which is particularly important for the vastly growing number of large-scale brain studies. One of the most commonly used software tools for whole brain segmentation is FreeSurfer <cit.>, which applies an atlas-based segmentation strategy with deformable registration. This seminal work encouraged research in atlas-based segmentation, with a focus on multi-atlas techniques and label fusion strategies <cit.>. A potential drawback of atlas-based segmentation approaches is the computation of a deformation field between subjects, which involves regularization constraints to solve an ill-conditioned optimization problem.
Typically, smoothness constraints are enforced, which may impede the correct spatial alignment of inter-subject scans. Interestingly, the deformation field is only used for propagating the segmentation and is not of interest by itself. Learning-based approaches without deformable registration present an alternative avenue for image segmentation, where the atlas with manual segmentations serves as the training set for predicting the segmentation of a new scan. Directly predicting the segmentation of the entire image is challenging because of the high dimensionality, i.e., the number of voxels, and the limited number of training scans with manual segmentations. Instead, the problem is reduced to predicting the label for small image regions, known as patches. Good segmentation performance was reported for patch-based approaches following a non-local means strategy <cit.>, which is similar to a nearest neighbor search in patch space. Alternative patch classification schemes have been proposed, e.g., random forests <cit.>. A potentially limiting factor of patch-based approaches is that they operate on image intensities, where previous results in pattern recognition suggest that it is not so much the classifier but rather the representation that primarily impacts the performance of a predictive model <cit.>. In a recent study, a wide range of image features for image segmentation was compared and a significant improvement from augmenting intensity patches with features was measured <cit.>. While image features improve the segmentation, they are handcrafted and may therefore not be optimal for the application. In contrast, neural networks autonomously learn representations that are optimal for the given task, without the need for manually defined features. Neural nets therefore break the common paradigm of patch-based segmentation, which separates feature extraction and classification, and replace it with an end-to-end learning framework that starts with the image data and predicts the anatomical label. Deep convolutional neural networks (DCNNs) have had ample success in computer vision <cit.> and increasingly in medical imaging <cit.>. Applications in computer vision are typically on 2D images, where 2D+t DCNNs were proposed for human action recognition <cit.>. In medical applications, 2.5D techniques have been proposed <cit.>. The three orthogonal planes are integrated into existing DCNN frameworks by setting the planes in the RGB channels. Difficulties in training 3D DCNNs have been reported <cit.>, due to the increase in complexity from adding an additional dimension. Yet, several articles describe successful applications of 3D networks to medical images. <cit.> propose a 3D deep convolutional encoder for lesion segmentation. <cit.> use a multi-layer perceptron for landmark detection. Most related to our work is the application of 3D convolutional neural networks, which is currently limited to few layers and small input patches. <cit.> use a 3D CNN with one convolutional and one fully connected layer for the prediction of PET from MRI on patches of 15^3. <cit.> use a combination of 2D and 3D inputs for whole brain segmentation. The network uses one convolutional layer and 3D sub-volumes of size 13^3.
The foreground mask, i.e., the region that contains the labels of interest, is assumed to be given, which is not the case for scans without manual segmentation. We propose a 3D deep convolutional network for brain segmentation that has more layers and operates on larger patches than existing 3D DCNNs, giving it the potential to model the more complex relationships necessary for identifying fine-grained brain structures. We use the latest advances in deep learning to initialize weights, to correct for internal covariate shift, and to limit overfitting for training such complex models. The main contributions in DeepNAT are:
* Multi-task learning: our network predicts not only the center label of the patch but also the labels in a small neighborhood, formulated in the DCNN as the simultaneous training of multiple tasks
* Hierarchical segmentation: we propose a hierarchical learning approach that first separates foreground from background and then subdivides the foreground into 25 brain structures, to account for the class imbalance stemming from the large background class
* Spectral coordinates: we introduce spectral coordinates as an intrinsic brain parameterization by computing eigenfunctions of the Laplace-Beltrami operator on the brain mask, retaining context information in patches
The output of DeepNAT is a probabilistic label map that needs to be discretized to obtain the final segmentation. Performing the discretization independently for each voxel can result in spurious segmentation artifacts. Formulating constraints among voxels, e.g., with pairwise potentials in a random field, can improve the final segmentation. Traditionally, such constraints have only been imposed in a small neighborhood due to computational concerns <cit.>. We use the efficient implementation of a fully connected conditional random field (CRF) that establishes pairwise potentials on all voxel pairs <cit.>, which was shown to substantially improve the segmentation. The fully connected CRF is used in combination with DCNNs for natural image segmentation in DeepLab <cit.>. It is also employed for the segmentation of 2D medical images: <cit.> segment vessels in 2D retinal images and <cit.> segment the lung in 2D CT slices. In contrast to these approaches, we perform MAP inference of the CRF in 3D on the entire image domain to obtain the final brain segmentation.

§ METHOD
Given a novel image I, we aim to infer its segmentation S based on training images 𝒯 = { I^1, …, I^n} with segmentations 𝒮 = {S^1, …, S^n}. A probabilistic label map ℒ = {L^1, …, L^η} specifies the likelihood for each brain label l ∈{1, …, η},

L^l(x) = p( S(x) = l | I; 𝒯, 𝒮).

Let I(𝒩_x) denote an image patch centered at location x; the likelihood in a patch-based segmentation approach is

L^l(x) = p( S(x) = l | I(𝒩_x); 𝒯, 𝒮).

We estimate the likelihood by training a deep convolutional neural network, where the patch inference corresponds to multi-class classification.
We skull-strip the images to focus the prediction on the brain mask, i.e., a brain scan from which the skull and other non-brain tissue like dura and eyes are removed.

§.§ Hierarchical Segmentation
Figure <ref> illustrates the hierarchical approach for whole brain segmentation in DeepNAT. In the first cascade, brain regions are classified into foreground and background. The foreground consists of 25 major brain structures that are illustrated in Figure <ref>. The background is the region within the brain mask that is not part of the foreground. Data that is classified as foreground undergoes the next cascaded step to identify the separate brain structures. Given the inherent class imbalance, the hierarchical segmentation has the potential to perform better than a single-step classification, which classifies into brain regions as well as background. Problems with a large background class have previously been noted for atlas-based segmentation <cit.>. The background is typically represented by a large pool of data, while small brain structures are prone to being underrepresented. On our data, we measured a foreground-to-background volume ratio of about 2 to 1. The background is therefore substantially larger than any of the individual brain structures on the foreground. As data augmentation allows only for a crude compensation, the cascaded approach presents a viable alternative.

§.§ Network Architecture
Multi-layer convolutional neural networks pioneered by <cit.> have led to breakthrough results, constituting the state-of-the-art technology for many challenges such as ImageNet <cit.>. The underlying idea is to create a deep hierarchical feature representation that shares filter weights across the input domain. This allows for the robust modeling of complex relationships while requiring a reduced number of parameters, for which solutions can be obtained by stochastic gradient descent. Table <ref> lists the details of the DeepNAT network architecture, where both networks (one for each cascade) are identical except for the number of neurons in the last layer (2 and 25, respectively). The network consists of three convolutional layers, where in each layer the filter masks are to be learned. A filter mask is specified by the spatial dimension, e.g., 5 × 5 × 5, and the number of filters to be used, e.g., 64. Each filter extends to all of the input channels. As an example, the filters are of size 5 × 5 × 5 × 32 in the second convolution. The total number of free parameters to be estimated is the filter size times the number of filters, so 5 × 5 × 5 × 32 × 64 for the second convolution. Table <ref> states the number of parameters together with the input and output dimensionality for each layer. Note that for 2D DCNNs the filters have 3 dimensions, whereas for 3D DCNNs the filters have 4 dimensions. Each convolution is followed by a rectified linear unit (ReLU) <cit.>, which supports the efficient training of the network with a reduced risk of vanishing gradients compared to other non-linearities. The aim of the convolutional part of the network is to reduce the dimensionality from the initial patch size of 23 × 23 × 23 before entering the fully connected stage. Although each convolution reduces the size, we use an additional max-pooling layer with stride two to arrive at a 3^3 block of neurons at the end of the convolutional stage.
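Since Table <ref> is not reproduced here, the following PyTorch-style sketch of the convolutional stage is partly an assumption: the first two convolutions follow the sizes quoted in the text (32 filters of 7 × 7 × 7, then 64 filters of 5 × 5 × 5 over 32 channels), whereas the third convolution is illustrative and an adaptive pooling stands in for the stride-two max-pooling to enforce the 3^3 block.

import torch
import torch.nn as nn

conv_stage = nn.Sequential(
    nn.Conv3d(1, 32, kernel_size=7),   # 23^3 -> 17^3; 32 filters of 7x7x7
    nn.ReLU(inplace=True),             # no batch norm after the first conv
    nn.Conv3d(32, 64, kernel_size=5),  # 17^3 -> 13^3; filters of 5x5x5x32
    nn.BatchNorm3d(64),
    nn.ReLU(inplace=True),
    nn.Conv3d(64, 64, kernel_size=5),  # 13^3 -> 9^3 (sizes assumed)
    nn.BatchNorm3d(64),
    nn.ReLU(inplace=True),
    nn.AdaptiveMaxPool3d(3),           # stand-in for the stride-two pooling
)
x = torch.randn(2, 1, 23, 23, 23)      # two 23^3 patches, one input channel
print(conv_stage(x).shape)             # torch.Size([2, 64, 3, 3, 3])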
The 3^3 block is an explicit design choice. A smaller 2^3 block would cause a lack of localization, with the patch center being split across exterior blocks. A larger 4^3 block would dramatically increase the number of parameters at the end of the convolutional stage, where most free parameters occur at the intersection between convolutional and fully connected layers, see Table <ref>. We use batch normalization at several layers in the network to reduce the internal covariate shift <cit.>. It accounts for the problem that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change, which is more pronounced in 3D networks. We further use two dropout layers, which randomly disable neurons in the network. This helps with the generalizability of the network by acting as a regularizer and mitigating overfitting. To resolve potential location ambiguity, the coordinates of the patches are given to the network, see Sec. <ref>. This is achieved by concatenating the image content after the first fully connected layer with the location information in layer 13. In the training stage, we compute the multinomial logistic loss as the last layer, where the probability distribution over classes is inferred from the last inner product layer with a softmax. For the initialization of the weights, we use the Xavier algorithm, which automatically determines the scale of the initialization based on the number of input and output neurons <cit.>. This initialization supports training deep networks without requiring per-layer pre-training because signals can reach deep into the network without shrinking or growing too much.

§.§ Multi-task Learning
In Eq. (<ref>), we use an image patch to predict the tissue label of the center voxel. Performing this inference on the entire image results in a single vote per voxel. Previous results in patch-based segmentation have, however, demonstrated the advantage of propagating not only the center label but also neighboring labels <cit.>. With such an approach, the voxel label is not only inferred from a single patch, but also from neighboring patches. <cit.> refer to this as the multi-point method in the context of non-local means segmentation. We propose to replicate the multi-point method for DCNN segmentation by employing multi-task learning. Instead of learning a single task, which predicts the center label, we simultaneously learn multiple tasks, which predict the center and the surrounding neighborhood. The neighborhood size determines the number of tasks. While there have been applications of deep multi-task learning <cit.>, we are not aware of previous applications to image segmentation. We implement multi-task learning in the DCNN architecture by replicating the last inner product layer (#18) according to the number of tasks. The increase in the number of parameters to be learned is limited by this setup because all tasks share the same network, except for the last inner product layer that specializes on the task. Each task t predicts the likelihood p_t( S(x_t) = l | I(𝒩_x); 𝒯, 𝒮) for locations x_t in the neighborhood 𝒩_x centered around x. We compute the multi-task likelihood for the label by averaging the likelihoods across tasks,

L^l(x) = 1/|𝒩_x| ∑_x_t ∈𝒩_x p( S(x) = l | I(𝒩_x_t); 𝒯, 𝒮).
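A minimal sketch of this output stage, in the same PyTorch style: the final inner product layer is replicated once per task on top of shared trunk features, and the per-voxel likelihood is obtained by averaging the softmax outputs that cover the voxel. The feature dimensionality (256) is an assumption, as the layer sizes of Table <ref> are not reproduced here.

import torch
import torch.nn as nn
import torch.nn.functional as F

n_tasks, n_classes, n_feat = 7, 25, 256
heads = nn.ModuleList([nn.Linear(n_feat, n_classes) for _ in range(n_tasks)])

def task_probabilities(features):
    # features: (batch, n_feat) shared trunk output; one softmax per task.
    return torch.stack([F.softmax(h(features), dim=1) for h in heads])

def fuse(votes):
    # votes: (n_votes, n_classes) softmax predictions collected for one
    # voxel x from all neighboring patches whose task grid covers x;
    # the multi-task likelihood is their mean, as in the equation above.
    return votes.mean(dim=0)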
We experiment with 7 and 27 neighborhood systems for the prediction, where the 7 neighborhood consists of the 6 direct neighbors plus the center, and the 27 neighborhood consists of the full 3^3 region. From a different perspective, this approach of averaging among multiple predictions per voxel can also be seen as an ensemble method.

§.§ Spectral Brain Coordinates
A downside of patch-based segmentation techniques is the loss of spatial context <cit.>. Considering the symmetry of the brain, it is easy to confuse patches across hemispheres. In addition, context provides valuable information for structures with low tissue contrast. To increase the spatial information, we augment patches with location information. Previous approaches have, for instance, used Cartesian coordinates <cit.> or distances to centroids <cit.>. We propose spectral brain coordinates as an alternative parameterization of the brain volume, which we obtain by computing eigenfunctions of the Laplace-Beltrami operator inside the 3D brain mask. Eigenfunctions of the cortex surface have previously been used for brain matching <cit.> and eigenvalues as shape descriptors <cit.>. In contrast, we compute spectral coordinates on the solid (volume) and use them as an intrinsic coordinate system for learning. On the brain mask, we solve the Laplacian eigenvalue problem Δ f = -λ f with the Laplace-Beltrami operator Δ, eigenvalues λ, and eigenfunctions f. We approximate the Laplace-Beltrami operator with the graph Laplacian <cit.>. The weights in the adjacency matrix W between two points i and j are set to 1 if both points are neighbors and within the brain mask; otherwise they are set to 0. This yields a sparse matrix W. The Laplacian operator on a graph is

L = D - W,

with the node degree matrix D, D_ii = ∑_j W_ij. We compute the first three non-constant eigenvectors of the Laplacian, where each eigenvector corresponds to a 3D image and the ensemble of eigenvectors forms the spectral brain coordinates. Fig. <ref> illustrates the first three eigenvectors, which roughly represent vibrations along the primary coordinate axes. The consistency of the coloring across the four subjects highlights the potential for a consistent encoding of location information. Note that the eigenvectors are isometry invariant, meaning that they do not change with rotations or translations of the object. Hence they present an intrinsic parameterization independent of the brain orientation or location. This independence can be seen from the graph construction encoded in the adjacency matrix: the adjacency structure only depends on neighborhood relationships, which do not change with image translation or rotation. Depending on the object to parameterize and the number of eigenfunctions, flipping due to sign ambiguity or swapping of eigenfunctions may hinder a direct comparison. <cit.> proposed an approach for spectrum ordering. In our application, where only the first three eigenfunctions of the brain mask are computed, no correction was required. Note that we could also compute more than three eigenfunctions to increase the amount of spatial information in the DCNN, which may require a re-ordering strategy. To the best of our knowledge, this is the first application of eigenfunctions of the 3D solid for defining an intrinsic brain coordinate system. Following the idea of providing the neural net with all the data and letting it pick the relevant information, we input next to the three spectral coordinates also the three Cartesian coordinates.
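A sketch of this construction in Python (scipy): the 6-neighborhood adjacency is assembled inside the mask, L = D - W is formed, and the first three non-constant eigenvectors are taken as coordinates. The eigensolver settings are illustrative; for large masks, a shift-invert solver is preferable to which='SM'.

import numpy as np
from scipy import sparse
from scipy.sparse.linalg import eigsh

def spectral_coordinates(mask, k=3):
    """First k non-constant eigenvectors of L = D - W on a binary 3D mask,
    using the 6-neighborhood adjacency described above."""
    mask = mask.astype(bool)
    n = int(mask.sum())
    idx = -np.ones(mask.shape, dtype=np.int64)
    idx[mask] = np.arange(n)                      # voxel -> graph node id
    rows, cols = [], []
    for axis in range(3):                         # edges along each axis
        valid = mask & np.roll(mask, 1, axis=axis)
        valid[(slice(None),) * axis + (0,)] = False   # suppress wrap-around
        rows.append(idx[valid])
        cols.append(np.roll(idx, 1, axis=axis)[valid])
    rows, cols = np.concatenate(rows), np.concatenate(cols)
    W = sparse.coo_matrix((np.ones(rows.size), (rows, cols)), shape=(n, n))
    W = W + W.T                                   # symmetric 0/1 adjacency
    L = sparse.diags(np.asarray(W.sum(axis=1)).ravel()) - W
    vals, vecs = eigsh(L.tocsc(), k=k + 1, which='SM')  # ascending order
    coords = np.zeros(mask.shape + (k,))
    coords[mask] = vecs[:, 1:]                    # drop the constant mode
    return coords

# Example: mask = np.zeros((32, 32, 32), bool); mask[4:28, 4:28, 4:28] = True
# spectral_coordinates(mask).shape  -> (32, 32, 32, 3)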
We normalized the Cartesian coordinates by subtracting the center of mass of the brain mask to make them more comparable across scans.

§.§ Fully Connected Conditional Random Field
The DCNN prediction results in a probabilistic brain segmentation. To obtain the final segmentation, we use maximum a posteriori inference on a conditional random field (CRF). The CRF allows for formulating potentials that ensure label agreement between close voxels with smoothness terms and follow the image content with appearance terms. Traditionally, short-range CRFs with connections between neighboring locations have been used <cit.>, which can however yield excessive smoothing of organ boundaries. In contrast, the fully connected CRF defines pairwise potentials on all pairs of image locations. The vast number of pairwise potentials to be defined makes conventional inference impractical. We use the highly efficient approximate inference algorithm proposed by <cit.> to infer a fully connected CRF model on the entire 3D brain. Key for the efficient computation is the definition of pairwise edge potentials by a linear combination of Gaussian kernels. The inference algorithm uses a mean field approximation that is iteratively optimized with a series of message passing steps. Importantly, the message passing updates for a fully decomposable mean field approximation are identical to Gaussian filtering in bilateral space. With the help of efficient approximate high-dimensional filtering <cit.>, the computational complexity of message passing is reduced from quadratic to linear in the number of variables. The Gibbs energy of the CRF model is

E(𝐲) = ∑_i ψ_u(y_i) + ∑_i < j ψ_p(y_i, y_j),

with the label assignment 𝐲 and i, j ranging from 1 to the number of voxels. The unary potential ψ_u(y_i) = -log P(y_i) is defined as the negative log likelihood of the label assignment probability from the multi-task DCNN in Eq. (<ref>). We use the pairwise potential from <cit.>, which allows for efficient inference on fully connected graphs. Given image intensities I_i and I_j at locations p_i and p_j, the pairwise potential is

ψ_p(y_i, y_j) = μ_ij [ v_1 e^-‖ p_i - p_j ‖^2/2σ_α^2 - (I_i - I_j)^2/2σ_β^2 + v_2 e^-‖ p_i - p_j ‖^2/2σ_γ^2 ].

The first exponential term models the appearance, where nearby voxels with similar intensity are likely to show the same structure, controlled by the spatial σ_α and intensity σ_β parameters; this corresponds to a bilateral kernel. The second exponential term models the smoothness by considering spatial proximity, controlled by σ_γ. The appearance and smoothness terms are weighted by the parameters v_1 and v_2, respectively. For the label compatibility, the Potts model is used, μ_ij = [ y_i ≠ y_j ].
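To make the energy concrete, a minimal Python sketch of ψ_p for a single voxel pair is given below; it only evaluates the potential, while the efficient high-dimensional filtering that makes full mean-field inference tractable is not reproduced here. The default parameter values anticipate the standard settings quoted in the results section.

import numpy as np

def psi_p(p_i, p_j, I_i, I_j, y_i, y_j,
          v1=3.0, v2=3.0, s_alpha=3.0, s_beta=10.0, s_gamma=3.0):
    # Pairwise potential of the equation above: a bilateral (appearance)
    # kernel plus a spatial (smoothness) kernel, gated by the Potts model.
    if y_i == y_j:
        return 0.0                       # mu_ij = [y_i != y_j]
    d2 = float(np.sum((np.asarray(p_i) - np.asarray(p_j)) ** 2))
    appearance = v1 * np.exp(-d2 / (2 * s_alpha ** 2)
                             - (I_i - I_j) ** 2 / (2 * s_beta ** 2))
    smoothness = v2 * np.exp(-d2 / (2 * s_gamma ** 2))
    return appearance + smoothness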
§ RESULTS
We evaluate the segmentation on the dataset of the MICCAI Multi-Atlas Labeling challenge[https://masi.vuse.vanderbilt.edu/workshop2012] <cit.>, which consists of T1-weighted MRI scans from 30 subjects of OASIS <cit.>. Manual segmentations were provided by Neuromorphometrics, Inc.[http://Neuromorphometrics.com/] under academic subscription. The images are 1 mm isotropic with a slice size of 256 × 256 pixels and the number of slices varying above 256. To improve the estimation of the roughly 2.7 million parameters in the network, we increase the number of training scans from 15 in the challenge to 20. The remaining 10 scans are used for testing. We compare our results to PICSL <cit.>, the winner of the MICCAI labeling challenge, which uses deformable registration, label fusion, and corrective learning. In addition, we compare to spatial STAPLE <cit.>, which is an extension of the popular simultaneous truth and performance level estimation (STAPLE) method <cit.>. It is among the best performing methods in the challenge and allows for a spatially varying performance of raters, i.e., registered atlases. Finally, we compare to the segmentation with FreeSurfer v5.3 <cit.>. In contrast to the other methods, FreeSurfer comes with its own atlas and does not use the training data. We measure the segmentation accuracy with the Dice volume overlap score <cit.> between the automatic segmentation S and the manual segmentation S̄,

D(S, S̄) = 2| S ∩S̄ | / (|S| + |S̄|).

We select a patch size of 23 × 23 × 23 as a trade-off between a large enough image region for the label classification and memory consumption as well as processing speed. DeepNAT is based on the Caffe framework <cit.>. Gradients are computed on minibatches, where each gradient update is the average of the individual gradients of the patches in the minibatch. The size of the minibatch is constrained by the memory of the GPU, where a size of 2,048 fills up most of the 12 GB GPU memory on the NVIDIA Tesla K40 and TITAN X used in the experiments. Large batch sizes are advisable as they better approximate the true gradient. We train the network with stochastic gradient descent and the "poly" scheme (also applied by <cit.>) using a base learning rate of 0.01. The actual learning rate at each iteration is the base learning rate multiplied by (1 - iteration / max_iteration)^0.9, promoting larger steps at the beginning of the training period and smaller steps towards the end. For the first network, we randomly sample 30,000 patches from the foreground and background in each training image, yielding 1.2 million training patches. For the second network, we randomly sample at most 3,000 patches per structure, where we double the number of patches for the white matter and gray matter to account for the higher variability in these classes, yielding a total of about 1.1 million training patches. We apply inhomogeneity correction and intensity normalization from the FreeSurfer pipeline to the MRI scans. In light of the small number of training images with manual segmentations, the standardization yields higher homogeneity in the dataset and should therefore facilitate the inference task. We set the CRF parameters to the standard settings v_1 = v_2 = 3, σ_α = σ_γ = 3, and σ_β = 10 <cit.>. Figure <ref> shows the accuracy and loss during training for the second network. For the accuracy, we have a different line for each of the seven tasks. Notably, the center task achieves the highest accuracy, where the remaining tasks, which predict labels for neighboring voxels, show comparable results. This is surprising insofar as all tasks have the same weight in the network, and it suggests that it is intrinsically easier for the network to predict the patch center. Overall, we observe a fast convergence to a relatively high classification accuracy, where prolonged training yields a small but steady improvement of the accuracy. First, we evaluate the impact of the proposed contributions in DeepNAT on the segmentation accuracy: (i) coordinates, (ii) hierarchical architecture, and (iii) multi-task learning. We perform the comparison by using the DeepNAT network, which uses seven tasks and combines spectral and Cartesian coordinates. We modify one of the network settings while keeping the remaining configuration.
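For reference, two quantities from this setup reduce to a few lines of Python; a minimal sketch of the "poly" schedule and of the Dice overlap defined above:

import numpy as np

def poly_lr(iteration, max_iteration, base_lr=0.01, power=0.9):
    # "Poly" schedule: larger steps early in training, smaller towards the end.
    return base_lr * (1.0 - iteration / max_iteration) ** power

def dice(seg, ref):
    # Dice overlap of the equation above; seg and ref are boolean 3D masks
    # of one structure in the automatic and the manual segmentation.
    return 2.0 * np.logical_and(seg, ref).sum() / (seg.sum() + ref.sum())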
Figure <ref> shows the segmentation results, where the statistics are computed across all of the 25 brain structures. Each setting is trained for 8 epochs, which takes about 1 day. The segmentation of a new scan at test time takes about 1 hour. With respect to coordinates, we observe a clear drop when using no coordinates. Spectral coordinates perform slightly better than Cartesian coordinates, where the combination of both in DeepNAT yields the highest accuracy. Next we compare the hierarchical approach to directly segmenting the 25 structures in one step, where the one-step approach yields a lower accuracy. Finally, we evaluate the importance of multi-task learning. We compare with single-task prediction, which only predicts the center voxel of the patch, and with the prediction of a larger number of tasks, 27. The results show, in comparison to the seven tasks in DeepNAT, a strong decrease in accuracy for the single task and a small decrease in accuracy for 27 tasks. We test the significance of DeepNAT against each of the variants with the pairwise non-parametric Wilcoxon signed-rank test (two-sided). The improvement of DeepNAT over only spectral coordinates is significant with p<0.05, and the improvement over all other variants is significant with p<0.001. We further evaluated different parameters for the optimization of the network. The reduction of the base learning rate to 0.005 leads to a median Dice score of 0.888. The usage of a minibatch size of 512 yields a median Dice score of 0.895. The application of the ADAGRAD <cit.> stochastic optimization results in a median Dice score of 0.881, compared to 0.897 for DeepNAT. For the second evaluation, we train DeepNAT for 25 epochs, which took about 3 days, and compare it to alternative segmentation approaches: FreeSurfer, spatial STAPLE, and PICSL. Figure <ref> shows the results over all 25 brain structures with the median and percentiles, and Figure <ref> shows the mean and standard error. DeepNATcrf denotes the estimation of the final segmentation with the fully connected CRF, whereas for DeepNAT we infer the segmentation independently for each voxel with weighted majority voting. The mean and median Dice for DeepNAT are higher than for FreeSurfer or spatial STAPLE. The CRF yields an increase in Dice of about 0.01 and the overall highest segmentation accuracy. Figure <ref> shows detailed results for all of the 25 brain structures. Across all structures, DeepNATcrf yields significantly higher Dice scores in comparison to DeepNAT (p < 0.001), FreeSurfer (p < 0.001), and STAPLE (p < 0.001). The difference to PICSL (p=0.27) is not significant. We further explore the difference between PICSL and DeepNATcrf on a per-structure basis. Here DeepNATcrf yields significantly higher values for left cerebral gray matter (p < 0.005), right cerebral gray matter (p < 0.005), right cerebral white matter (p < 0.05), right cerebellar white matter (p < 0.05), and left caudate (p < 0.05), while PICSL yields significantly higher values for left amygdala (p < 0.01), right caudate (p < 0.05), and left hippocampus (p < 0.01). The different results for the left and right caudate are due to variations in median Dice in PICSL (left: 0.903, right: 0.910) compared to more consistent results across hemispheres for DeepNATcrf (left: 0.906, right: 0.908). We note a lower Dice score for the amygdala in comparison to other brain structures across all methods. While the amygdala is a challenging structure to segment, its small size can also entail a lower Dice score.
Figure <ref> shows example segmentations for FreeSurfer, PICSL, and DeepNATcrf together with the manual segmentation. The results for PICSL and DeepNATcrf are very similar to the manual segmentation, while FreeSurfer shows stronger variations, consistent with the quantitative results. Figures <ref> and <ref> illustrate zoomed-in brain segmentations for structures with significant differences between PICSL and DeepNATcrf. In Figure <ref>, segmentations of the cerebral white and gray matter as well as the cerebellar white and gray matter are more accurate with DeepNATcrf, whereas segmentations of the hippocampus and amygdala are more accurate with PICSL. Figure <ref> illustrates the segmentation of the caudate. The segmentation is illustrated by means of a segmentation map that highlights agreement and disagreement with the manual segmentation. Overall, DeepNATcrf is more consistent with the manual segmentation. The convolutional layers in the DCNN can be interpreted as a feature extractor from the image patch and the fully connected layers as a classifier. To get a better understanding of the feature extraction, we show the learned convolutional filters of the first layer in Figure <ref>. The first layer consists of 32 filters of size 7 × 7 × 7. The learned features are similar to 3D Gabor filters and 3D blobs. This is consistent with previous results on 2D DCNNs that report 2D Gabor filters and 2D color blobs on the first layer <cit.>. We do not include visualizations of filters from the second and third convolutional layers as they are less comprehensible due to the smaller filter size and the more abstract representation. Finally, we reduce the training set from 20 to 15 and increase the testing set from 10 to 15 to have a setup identical to the labeling challenge. We employ data augmentation with jittering to counter the reduction in training data and increase the training time to 50 epochs. Figure <ref> shows the results over all 25 brain structures with the median and percentiles, and Figure <ref> shows the mean and standard error. We note a slight overall decrease in accuracy across all methods, compared to Figures <ref> and <ref>, as a result of modifying the testing data. For 15 training and 15 test images, DeepNATcrf yields significantly higher Dice scores in comparison to DeepNAT (p < 0.001), FreeSurfer (p < 0.001), and STAPLE (p < 0.001), whereas the difference to PICSL (p=0.06) is not significant. The median of DeepNATcrf is 0.007 Dice points higher than PICSL, whereas the mean Dice scores are the same. The decreasing gap between DeepNATcrf and PICSL in testing accuracy is likely associated with the reduction of the training set for learning the network.

§ DISCUSSION
DeepNAT architecture: One of the biggest challenges when working with deep convolutional neural networks is the vast number of decisions to take for the specification of the architecture. Many of the decisions are a trade-off between additional discriminative power of the network and training complexity as well as memory requirements. For instance, we do not use batch normalization after the first convolution to avoid the high memory consumption. An alternative design for the convolutional stage would have been to work with smaller kernels of size 3 and to build a deeper hierarchy, similar to VGG <cit.>. We have not fully explored this direction, also due to long training times, but initial results did not look very promising. In this work, we used 3D convolutional neural networks for brain segmentation.
3D DCNNs have been used for medical applications before <cit.>; however, the majority of work is on 2D or 2.5D applications. Given that we deal with the segmentation of 3D MRI scans, it seems natural to work with a 3D network for the classification. Yet, working with a 3D network yields an increase in complexity because the convolutional filters and the internal representations have an additional dimension. By employing batch normalization, dropout, and the Xavier initialization, we are able to train 3D networks with more layers than previous 3D DCNNs, where deeper networks can model more complex relationships between input and output data. In many image segmentation tasks, we face the challenge of dealing with a large background class that surrounds the structures of interest. The background typically consists of multiple structures that are of no further interest to the application and are merged into the background class. For multi-atlas segmentation, we have reported that the dominant background class can cause an under-segmentation of the target structure, because it introduces a bias in the label estimation <cit.>. Here, we address the class imbalance problem with a hierarchical approach by first separating foreground from background and then identifying the individual brain structures on the foreground. Our results show the benefit of this cascaded approach in comparison to directly segmenting brain structures. Location information: A drawback of patch-based segmentation methods is the loss of the larger image context, given that brain scans from different subjects are overall fairly similar. Context information can be crucial for differentiating small image regions across the brain that can appear very similar due to symmetries. To retain context information, we include location information in the network. The results demonstrate that the addition of coordinates leads to a substantial increase in segmentation accuracy. In this work, we introduced spectral brain coordinates, a parameterization of the brain solid with Laplace eigenfunctions, which yielded an improvement over Cartesian coordinates. Interestingly, the combination of spectral and Cartesian coordinates resulted in a further increase in segmentation accuracy, indicating that they contain complementary information. Multi-task learning: Multi-task learning has several applications in machine learning, but we had not yet seen its application to image segmentation. Instead of only predicting the label of the center voxel, we simultaneously learn and predict the labels of the neighboring voxels. Our results show that multi-task learning yields a significant improvement over single-task segmentation for all brain structures. This is consistent with results from non-local means segmentation, where the multi-point method showed improvements over the single-point approach <cit.>. Multi-task learning leads to several predictions per voxel, which can generate more robust segmentations by overruling incorrect predictions.
The tasks are learned by sharing the same network, with only the last layer specializing on a single task. This causes only a small increase in the overall number of parameters. We have experienced a faster convergence of the multi-task network compared to the single-task network, which may be attributed to the enforcement of promising gradient directions from all simultaneous tasks. This is consistent with previous observations from multi-task learning for sequence-to-sequence modeling <cit.>. We observe that the center task has a slightly but consistently higher accuracy than the surrounding tasks. This is surprising because no priority or higher weighting was assigned to the center task. One possible explanation could be that the center location has a larger context, but considering that a patch size of 23 was used, this should not have a strong impact. It rather seems that the convolutional stage of the network, with convolution filters and max-pooling, better captures the information for predicting the center label. Comparison to state-of-the-art: In our results, we compare to FreeSurfer and two methods from the MICCAI labeling challenge, PICSL and spatial STAPLE. FreeSurfer is one of the most commonly used tools for brain anatomy reconstruction in practice. It performed worse than the other methods in the comparison; however, all other methods used the provided training dataset, whereas FreeSurfer uses its own atlas. Dataset bias may therefore play a role. In addition, the protocol for the manual labeling of the scans may not be entirely consistent. PICSL was the winner of the segmentation challenge and spatial STAPLE was among the best performing methods. Both of these approaches are based on a multi-atlas approach, where all atlas images are registered to the test image. A single registration takes about 2 hours of runtime, so that the registration of all 15 training images takes about 30 hours. The registration can be time-consuming for many image pairs; consequently, scaling such methods to larger atlases seems challenging. In contrast, the inclusion of additional training data does not affect testing time for DeepNAT, which is about 1 hour. We trained the final DeepNAT model for about three days on the GPU, but PICSL is also based on an extensive training of the corrective classifier, which was reported with 330 CPU hours. The runtime of DeepNAT could be further improved by using cuDNN and accounting for overlapping patches. DeepNAT yielded statistically significant improvements over FreeSurfer and spatial STAPLE. DeepNAT in combination with the CRF yielded the overall highest median Dice score, but the improvement over PICSL is not statistically significant. Tests on the per-structure level resulted in advantages for DeepNAT for cortical structures, which may be explained by the difficulty of registering complex folding patterns. For subcortical structures, the results were not as clear. The variation in significance for the left and right caudate is driven by varying results of PICSL, but the source of the difference is not clear, as no preference for one of the hemispheres seems to be given in PICSL.
Conditional Random Field: Our results demonstrate the benefit of inferring the final, discrete segmentation from the probabilistic network outcome with the fully connected conditional random field. Previous applications of the fully connected CRF have been limited to 2D. The pairwise constraints formulated in the CRF ensure label agreement between close voxels. In the appearance term of the pairwise potential, we use the difference of voxel intensities as a measure of similarity. Such similarity terms have been extensively studied in spectral clustering for image segmentation <cit.>, where the concept of the intervening contour was proposed <cit.> and adapted for medical image segmentation <cit.>. Integrating the concept of intervening contours into the pairwise potentials of the CRF seems promising to further improve segmentation accuracy. Note that we do not train the CRF, so while DeepNAT is an end-to-end learning system, DeepNATcrf is not. Training Data: One of the big issues when using deep learning in the medical domain is the access to a large enough training dataset. The training set used in our experiments seems small for training a deep convolutional neural network with millions of parameters, compared to the millions of images from ImageNet typically used in computer vision. However, DeepNAT does not directly predict the segmentation of the entire image but only of image patches. Working with patches makes the training feasible, as each scan contains millions of patches that can be extracted for learning. In the future, it would be interesting to further explore ideas for directly estimating the segmentation of the entire image without the reduction to patches. This can lead to a drastic speed-up, due to the computational overhead when working with overlapping patches. Yet, such an approach would require a much larger number of images with manual segmentations for training, which are very time-consuming to create. Due to the limited size of the dataset, we have not split between validation and testing sets. We have directly compared the different contributions in DeepNAT (coordinates, hierarchy, multi-task) on the testing set, see Figure <ref>. Consequently, there is a risk of overfitting on the testing data. However, these comparisons involved conceptual design decisions and not a detailed parameter fine-tuning, so we consider the risk of overfitting to be limited. Further, the good performance of DeepNAT persisted after reducing the training dataset to 15 scans and increasing the testing dataset to 15 scans. DeepNAT may be specifically adapted for segmenting young, old, or diseased brains by fine-tuning. The large potential of fine-tuning pre-trained models for deep learning has been shown previously. In the medical imaging domain, <cit.> fine-tuned weights trained on ImageNet to detect lung disease in CT images. <cit.> show that the transferability of features, e.g., of convnets trained on ImageNet and then fine-tuned to other tasks, depends on how general those features are; the transferability gap increases as the distance between tasks increases and features become less general. Notably, these studies operate on 2D images, and we are not aware of work that fine-tunes networks with volumetric input, where the pre-trained models of DeepNAT can provide a first step in this direction.
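For reference, the pairwise potentials of a fully connected CRF of this kind are commonly instantiated with the contrast-sensitive two-kernel form of Krähenbühl and Koltun, which the intensity-difference appearance term described above follows. A sketch of that standard form is given below, where p_i denotes the position and I_i the intensity of voxel i; the weights w^(1), w^(2) and bandwidths θ_α, θ_β, θ_γ are free parameters of the model, not values taken from this work:

```latex
k(\mathbf{f}_i,\mathbf{f}_j) =
  w^{(1)} \exp\!\left(-\frac{\lVert p_i - p_j \rVert^2}{2\theta_\alpha^2}
                      -\frac{\lVert I_i - I_j \rVert^2}{2\theta_\beta^2}\right)
 + w^{(2)} \exp\!\left(-\frac{\lVert p_i - p_j \rVert^2}{2\theta_\gamma^2}\right)
```

The first (appearance) kernel encourages nearby voxels with similar intensities to share a label; the second (smoothness) kernel penalizes isolated label changes regardless of intensity.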
§ CONCLUSION We presented DeepNAT, a 3D deep convolutional neural network for brain segmentation of structural MRI scans. The main contributions were (i) multi-task learning, (ii) hierarchical segmentation, (iii) spectral coordinates, and (iv) a 3D fully connected conditional random field. Multi-task learning simultaneously learns the label prediction in a small neighborhood. Spectral coordinates form an intrinsic parameterization of the brain volume and provide context information to patches. The hierarchical approach accounts for the class imbalance between the background class and the separate brain structures. And finally, the conditional random field ensures label agreement between close voxels. We train the 3D network by integrating the latest advances in deep learning to initialize weights, to correct for internal covariate shift, and to limit overfitting when training such complex models. Our results demonstrated the high potential of convolutional neural networks for segmenting neuroanatomy. All in all, image segmentation is a well-suited task for convolutional neural nets, which are arguably at the forefront of the deep learning wave. The segmentation accuracy of convolutional neural nets is likely to further improve in the future, given the increasing amount of training data, methodological advances for deep networks, and progress in GPU hardware. We believe that the purely learning-based approach with neural networks offers unique opportunities for tailoring segmentations to young, old, or diseased brains. While it may be difficult to obtain enough training data for such specific applications, fine-tuning a pre-trained network seems like a promising avenue. Our extensions to caffe, network definitions, and trained networks are available for download: <https://tjklein.github.io/DeepNAT/>. § ACKNOWLEDGEMENT Support for this research was provided in part by the Humboldt Foundation, SAP SE, the Förderprogramm für Forschung und Lehre, the Bavarian State Ministry of Education, Science and the Arts in the framework of the Centre Digitisation.Bavaria (ZD.B), the National Cancer Institute (1K25CA181632-01), the Massachusetts Alzheimer's Disease Research Center (5P50AG005134), the MGH Neurology Clinical Trials Unit, the Harvard NeuroDiscovery Center, Genentech (G-40819), and the NVIDIA Corporation.
This work concerns sampling of smooth signals on arbitrary graphs. We first study a structured sampling strategy for such smooth graph signals that consists of a random selection of few pre-defined groups of nodes. The number of groups to sample to stably embed the set of k-bandlimited signals is driven by a quantity called the group graph cumulative coherence. For some optimised sampling distributions, we show that sampling O(k log(k)) groups is always sufficient to stably embed the set of k-bandlimited signals, but that this number can be smaller – down to O(log(k)) – depending on the structure of the groups of nodes. Fast methods to approximate these sampling distributions are detailed. Second, we consider k-bandlimited signals that are nearly piecewise constant over pre-defined groups of nodes. We show that it is possible to speed up the reconstruction of such signals by reducing drastically the dimension of the vectors to reconstruct. When combined with the proposed structured sampling procedure, we prove that the method provides stable and accurate reconstruction of the original signal. Finally, we present numerical experiments that illustrate our theoretical results and, as an example, show how to combine these methods for interactive object segmentation in an image using superpixels. Graph signal processing, compressive sampling, bandlimited graph signals. § INTRODUCTION This work was initially inspired by recent developments in edit propagation for interactive image or video manipulations, where the goal is to propagate operations made by a user in some parts of the image to the entire image, e.g., propagate foreground/background scribbles for object segmentation <cit.> or propagate manual color/tone modifications <cit.>. First, a graph 𝒢 modelling the similarities between the pixels of the image is built. Second, the user-edits specify values at some nodes (pixels) of 𝒢. Finally, the complete signal is estimated by assuming that it is smooth on 𝒢. The quality of the propagation depends on the structure of 𝒢 and on the location of the annotated nodes. If a part of the image is weakly connected to the rest, the user-edits do not propagate well to this region unless this region is edited directly. Therefore, highlighting beforehand which regions or groups of nodes are important to edit to ensure a good propagation would be a useful feature to facilitate user interactions. Furthermore, designing a fast reconstruction/propagation method is also important, so that the user can visualise immediately the effect of his inputs. To this end, “superpixels” – small groups of connected pixels where the image varies only little – are sometimes used among other things to speed up computations, e.g., in <cit.>. We address these problems from a graph signal processing point of view <cit.>. More precisely, we view them as a sampling problem where we should select a structured set of nodes (regions to edit), “measure” the signal on this set (user-edits), and reconstruct the signal on the entire graph. We believe that the sampling strategy and the fast reconstruction method proposed here provide useful tools to optimise the user's inputs and accelerate computations. As mentioned above, the user-edits are propagated to the entire image by assuming that the global signal is smooth on 𝒢. In the context of graph signal processing, the smooth, or k-bandlimited, signal is a widely used and studied model.
Several sampling methods have been designed to sample such signals. Pesenson introduced the notion of uniqueness set (of nodes) for k-bandlimited graph signals in <cit.>. Two different k-bandlimited signals are necessarily different when restricted to a uniqueness set. Therefore, one can sample all k-bandlimited signals on a uniqueness set. Then, Anis et al. <cit.> and Chen et al. <cit.> proved that one can always find a uniqueness set of k nodes to sample all k-bandlimited signals. Finding this set is however computationally expensive. In <cit.>, graph spectral proxies are used to find such a set more efficiently. Yet the combinatorial problem that needs to be solved to find such a set makes the method still difficult to use for large graphs. Other authors used the idea of random sampling to be able to handle large graphs <cit.>. Recently, Puy et al. <cit.> proved that there always exists a random sampling strategy for which sampling O(k log(k)) nodes is sufficient to stably embed the set of k-bandlimited signals. They also designed a fast and scalable algorithm to estimate the optimal sampling distribution. In this paper, we first study a random sampling strategy for k-bandlimited signals where we sample few groups of nodes instead of individual nodes. We introduce the concept of local group graph coherence that quantifies the importance of sampling each group. Second, in order to build a fast reconstruction technique for k-bandlimited signals, we use the intuition that a smooth graph signal is a signal that varies slowly from one node to its connected nodes. If we group few connected nodes together, we usually expect a bandlimited signal to be essentially constant on this set of nodes, as long as we do not group together too many weakly connected nodes. We propose here to use this property to accelerate the reconstruction of such k-bandlimited signals, i.e., k-bandlimited signals nearly piecewise constant over (pre-defined) groups of nodes. When combined with the proposed sampling technique, we prove that this fast method provides stable and accurate reconstructions of the signals of interest. Finally, we illustrate how to use these results for interactive object segmentation in an image, with the required node groups being superpixels. §.§ Contributions The random sampling strategy that we propose generalises the method proposed in <cit.>. We use here a structured sampling strategy. Let us already acknowledge that such strategies are also studied in the field of compressed sensing <cit.> and that some of our solutions are directly inspired by these works. First, in this structured sampling setting, we show that the number of groups to sample is directly linked to a quantity called the group graph cumulative coherence. This quantity generalises the concept of graph cumulative coherence introduced in <cit.> and characterises how much the energy of k-bandlimited signals can stay concentrated in each group of nodes. Second, we can then choose to sample the groups non-adaptively or to optimise the sampling distribution to minimise the number of groups to sample. With this optimised sampling distribution, our result shows that, in the worst case, sampling O(k log(k)) groups of nodes is sufficient to ensure the reconstruction of all k-bandlimited signals. As each group can contain many nodes, we might have to sample a large number of nodes. This is the potential price to pay when sampling the nodes by groups.
Fortunately, a smaller number of groups – down to O(log(k)) – might already be sufficient if the groups are well designed. Third, we describe a method to estimate the optimal sampling distribution without the need of computing the graph Fourier matrix, which is thus able to handle large graphs. Fourth, estimating the optimal sampling distribution may still be too slow when a large number of groups is involved. We thus also present a sufficient recovery condition that involves a relaxed version of the group graph cumulative coherence. The sampling distribution that minimises this relaxed coherence is fast to estimate for large graphs and large numbers of groups. With this sampling strategy, we prove that sampling O(k log(k)) groups is always sufficient to ensure the reconstruction of all k-bandlimited signals. This strategy is mainly interesting at small k. Finally, we propose a fast reconstruction method for k-bandlimited signals that are also nearly piecewise constant over pre-defined groups of nodes. We show that we can reduce drastically the dimension of the reconstruction problem for such signals. When the above sampling procedure is used to sample them, we prove that the proposed method provides accurate and stable recovery of this type of signals. §.§ Applications The proposed sampling methods can have interest in several applications. For example, if one needs to deploy sensors in a large-scale network, it might be easier to deploy and install these sensors at nodes around few spatial locations instead of scattering them all over the network. Finding the best regions to monitor is thus important. In a social network, one might be interested in monitoring a signal defined over communities and thus should find which groups of users are the most important to sample. In semi-supervised learning, it may be easier to label jointly some similar nodes than individual nodes. Let us now detail such an example for semi-supervised classification. The task we consider is interactive object segmentation in an image where, similarly to what is done in <cit.> for instance, we build a graph 𝒢 that models similarities between the pixels of the image, ask the user to label some regions depending on whether they are part of the object or not, and diffuse the result to the complete image by supposing that the indicator vector of the object is smooth on 𝒢. To propose the regions to label, we view this segmentation problem as a sampling problem of a smooth signal on 𝒢 where the regions to label are chosen to ensure the recovery of a k-bandlimited signal. To choose the regions to label, we start by partitioning the image into superpixels, e.g., with the SLIC technique <cit.>. We see in Fig. <ref> that the superpixels divide the image into homogeneous regions. The superpixels follow the edges, and most of them thus belong to either the tiger or the background, but rarely both. One interest of dividing the image into superpixels is that it facilitates user interactions. It is easier to determine if a superpixel belongs to the object of interest than if a pixel belongs to the tiger, especially at the boundaries, as recently exploited for segmentation on touch-screens <cit.>. Using the sampling method that we developed, we can propose to the user a small number of superpixels to label. Furthermore, this proposition is adapted to the structure of the graph. Another advantage of using superpixels is that the indicator function of the object, beyond being smooth on 𝒢, is also approximately piecewise constant on the superpixels.
We can thus use our reconstruction method to estimate rapidly the segmentation result from the labelled superpixels. §.§ Notations and definitions We consider that 𝒢 = {𝒱, ℰ, W} is an undirected weighted graph, where 𝒱 is the set of n nodes, ℰ is the set of edges, and W ∈ ℝ^{n × n} is the weighted adjacency matrix with nonnegative entries. We denote the graph Laplacian by L ∈ ℝ^{n × n}. We assume that L is real, symmetric, and positive semi-definite, e.g., the combinatorial graph Laplacian L := D − W, or the normalised one L := I − D^{−1/2} W D^{−1/2}. The matrix D ∈ ℝ^{n × n} is the diagonal degree matrix and I ∈ ℝ^{n × n} is the identity matrix <cit.>. The diagonal degree matrix D has entries D_ii := ∑_{j ≠ i} W_ij. We denote by U ∈ ℝ^{n × n} the orthonormal eigenvectors of L and by 0 = λ_1 ≤ … ≤ λ_n the ordered real eigenvalues of L. We have L = U Λ U^⊤, where Λ := diag(λ_1, …, λ_n) ∈ ℝ^{n × n}. The matrix U is the graph Fourier basis <cit.>. For any signal x ∈ ℝ^n defined on the nodes of the graph 𝒢, x̂ = U^⊤ x contains the Fourier coefficients of x ordered in increasing frequencies. This work deals with k-bandlimited signals x ∈ ℝ^n on 𝒢, i.e., signals whose Fourier coefficients x̂_{k+1}, …, x̂_n are null. Let U_k be the restriction of U to its first k vectors: U_k := (u_1, …, u_k) ∈ ℝ^{n × k}. A signal x ∈ ℝ^n defined on the nodes of the graph 𝒢 is k-bandlimited with k ∈ ℕ ∖ {0} if x ∈ span(U_k), i.e., there exists η ∈ ℝ^k such that x = U_k η. This definition was also used in <cit.>. We assume that λ_k ≠ λ_{k+1} to avoid any ambiguity in the definition of k-bandlimited signals. Finally, for any matrix X ∈ ℝ^{n_1 × n_2}, ‖X‖_2 denotes its spectral norm and ‖X‖_F its Frobenius norm; when n_1 = n_2, λ_max(X) denotes its largest eigenvalue and λ_min(X) its smallest eigenvalue. For any vector x ∈ ℝ^{n_1}, ‖x‖_2 denotes the Euclidean norm of x. Depending on the context, x_j may represent the j-th entry of the vector x or the j-th column-vector of the matrix X. The entry on the i-th row and j-th column of X is denoted by X_ij. The identity matrix is denoted by I (its dimensions are determined by the context). We present in Fig. <ref> a representation of the important variables and processes involved in this paper in order to facilitate the understanding of the different results. § SAMPLING USING GROUPS OF NODES In this section, we explain our sampling strategies, starting with the definition of the groups of nodes. §.§ Grouping the nodes We consider that the n nodes of 𝒢 are divided into N different groups 𝒩_1, …, 𝒩_N ⊆ {1, …, n}. The size of the ℓ-th group is denoted n_ℓ. We suppose that these groups form a partition of {1, …, n}, so that each node belongs to exactly one group. We have ∪_{ℓ=1}^N 𝒩_ℓ = {1, …, n}, and 𝒩_ℓ ∩ 𝒩_ℓ' = ∅ for ℓ ≠ ℓ'. For the object segmentation application discussed in the introduction, these groups represent the superpixels. However, we do not impose the groups to be made of neighbouring nodes in the graph; they can be made of nodes “far” from each other. For each group 𝒩_ℓ = {i_1^(ℓ), …, i_{n_ℓ}^(ℓ)}, we associate a matrix M^(ℓ) ∈ ℝ^{n_ℓ × n} that restricts a graph signal to the nodes appearing in 𝒩_ℓ, i.e., M^(ℓ)_ij := 1 for j = i_i^(ℓ), and 0 otherwise. Note that ∑_{ℓ=1}^N M^(ℓ)⊤ M^(ℓ) = I. The case of overlapping groups can be handled by changing the definition of M^(ℓ) to M^(ℓ)_ij := β_{i_i^(ℓ)}^{−1/2} for j = i_i^(ℓ), and 0 otherwise, where 1 ≤ β_i ≤ N, i = 1, …, n, is the number of times node i appears in the different groups 𝒩_1, …, 𝒩_N. Equation (<ref>) also holds in this case. All results presented in Section <ref> are valid for overlapping groups with this definition of M^(ℓ). §.§ Sampling the groups The sampling procedure consists in selecting s groups out of the N available ones. In the application of Section <ref>, it corresponds to the selection of the superpixels to label; a minimal sketch of the objects defined so far is given below.
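To fix ideas, the following Python sketch (our own illustrative code, using numpy only and therefore limited to small graphs where a dense eigendecomposition of L is feasible) builds the main objects introduced above: the combinatorial Laplacian, the truncated Fourier basis U_k, and the group restrictions M^(ℓ), the latter stored simply as index arrays.

```python
import numpy as np

def combinatorial_laplacian(W):
    """L = D - W for a symmetric adjacency matrix W (dense; small graphs)."""
    return np.diag(np.asarray(W).sum(axis=1)) - W

def truncated_fourier_basis(L, k):
    """Eigenvalues of L and the matrix U_k made of its first k eigenvectors."""
    lam, U = np.linalg.eigh(L)    # eigenvalues returned in increasing order
    return lam, U[:, :k]

# A partition into N groups is conveniently stored as a list of index arrays:
# groups[l] plays the role of the set N_l, so that x[groups[l]] computes
# M^(l) x without ever forming the sparse matrix M^(l) explicitly.
```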
We select these groups at random using a sampling distribution on {1, …, N} represented by a vector p ∈ ℝ^N. The probability of selecting the ℓ-th group is p_ℓ. We assume that p_ℓ > 0 for all ℓ = 1, …, N, so that all groups may be selected with a non-zero probability. We obviously have ∑_{ℓ=1}^N p_ℓ = 1. The indices Ω := {ω_1, …, ω_s} of the selected groups are obtained by drawing independently – thus with replacement – s indices from the set {1, …, N} according to p, i.e., ℙ(ω_j = ℓ) = p_ℓ, ∀ j ∈ {1, …, s} and ∀ ℓ ∈ {1, …, N}. The selected groups are 𝒩_{ω_1}, …, 𝒩_{ω_s} and the total number of selected nodes is m := ∑_{j=1}^s n_{ω_j}. Once the groups are selected, we build the sampling matrix M ∈ ℝ^{m × n} that satisfies M := (M^(ω_1); …; M^(ω_s)), and which restricts any signal to the nodes belonging to the selected groups. For a signal x ∈ ℝ^n defined on the nodes of 𝒢, its sampled version y ∈ ℝ^m satisfies y := M x. Our goal now is to determine what number s of groups is enough to ensure that all k-bandlimited signals can be reconstructed from their sampled versions obtained with M. To conduct this study, we need to define a few more matrices. First, we associate the matrix P^(ℓ) := p_ℓ^{−1/2} I ∈ ℝ^{n_ℓ × n_ℓ} to each group 𝒩_ℓ. Then, once the groups are drawn, we construct the block-diagonal matrix P ∈ ℝ^{m × m}, P := diag(P^(ω_1), …, P^(ω_s)). This matrix takes into account the probability of sampling each group and will be used to rescale M for norm preservation. This matrix ensures that s^{−1} 𝔼_Ω ‖P M x‖_2^2 = ‖x‖_2^2.[This property is a consequence of (<ref>), proved in Appendix <ref>.] Both matrices P and M depend on Ω and are random. §.§ Group graph coherence The sampling procedure in <cit.> is similar to the ones proposed here, with the difference that the nodes are sampled individually and not by groups. It was proved there that the number of nodes to sample is driven by a quantity called the graph coherence. This quantity measures how the energy of k-bandlimited signals spreads over the nodes. Similarly, we prove here that the number of groups to sample is driven by a quantity that measures how the energy of k-bandlimited signals spreads over the groups. We now introduce this quantity. The matrix M^(ℓ) U_k is the matrix that restricts a k-bandlimited signal to the nodes belonging to 𝒩_ℓ. Therefore, ‖M^(ℓ) U_k‖_2 = sup_{η ∈ ℝ^k : ‖η‖_2 = 1} ‖M^(ℓ) U_k η‖_2 measures the energy on the nodes 𝒩_ℓ of the normalised k-bandlimited signal that is most concentrated on 𝒩_ℓ. This energy varies between 0 and 1. When this energy is close to 1, there exists a k-bandlimited signal whose energy is essentially concentrated on 𝒩_ℓ. This signal lives only on the nodes in 𝒩_ℓ and does not spread elsewhere. On the contrary, when this energy is close to 0, there is no k-bandlimited signal living only on 𝒩_ℓ. The sampling distribution p is adapted to the graph and the structure of the groups if: whenever ‖M^(ℓ) U_k‖_2 is high, p_ℓ is high; whenever ‖M^(ℓ) U_k‖_2 is small, p_ℓ is small. In other words, the ratio between ‖M^(ℓ) U_k‖_2 and p_ℓ should be as constant as possible. This ensures that the groups where some k-bandlimited signals are concentrated are sampled with higher probability. Similarly to what was done in <cit.> with individual nodes, we define the group graph weighted coherence as the largest ratio between ‖M^(ℓ) U_k‖_2 and p_ℓ^{1/2}. The group graph cumulative coherence of order k is ν_p := max_{1 ≤ ℓ ≤ N} { p_ℓ^{−1/2} ‖M^(ℓ) U_k‖_2 }. The quantity ‖M^(ℓ) U_k‖_2 is called the local group graph coherence. In the extreme case where the groups 𝒩_1, …, 𝒩_N all reduce to singletons, we recover the definition of the graph weighted coherence introduced in <cit.>. It is easy to prove that ν_p is lower bounded by 1.
Indeed, for any η ∈ ℝ^k with ‖η‖_2 = 1, we have 1 = ‖U_k η‖_2^2 = ∑_{ℓ=1}^N ‖M^(ℓ) U_k η‖_2^2 = ∑_{ℓ=1}^N p_ℓ · ‖M^(ℓ) U_k η‖_2^2 / p_ℓ ≤ ‖p‖_1 · max_{1 ≤ ℓ ≤ N} { ‖M^(ℓ) U_k η‖_2^2 / p_ℓ } = max_{1 ≤ ℓ ≤ N} { ‖M^(ℓ) U_k η‖_2^2 / p_ℓ } ≤ max_{1 ≤ ℓ ≤ N} { ‖M^(ℓ) U_k‖_2^2 / p_ℓ } = ν_p^2. We have ν_p = 1 in, for example, the degenerated case where 𝒩_1 = {1, …, n}. §.§ Stable embedding We now have all the tools to present our main theorem, which shows that sampling O(ν_p^2 log(k)) groups is sufficient to stably embed the whole set of k-bandlimited signals. Hence, it is possible to reconstruct any x ∈ span(U_k) from its measurements y = M x. Let M be a random subsampling matrix constructed as in (<ref>) using the groups 𝒩_1, …, 𝒩_N and the sampling distribution p. For any δ, ξ ∈ (0, 1), with probability at least 1−ξ, (1 − δ) ‖x‖_2^2 ≤ (1/s) ‖P M x‖_2^2 ≤ (1 + δ) ‖x‖_2^2 for all x ∈ span(U_k) provided that s ≥ (3/δ^2) ν_p^2 log(2k/ξ). See Appendix <ref>. In the above theorem, we recall that s is the number of selected groups, each of them containing several nodes. We thus control the number of groups to sample and not directly the number of nodes. As the lower bound on ν_p is 1, sampling O(log(k)) groups might already be sufficient if the groups and the sampling distribution are well-designed. The number of groups to sample is driven by ν_p, which itself depends on the structure of the groups 𝒩_1, …, 𝒩_N and on the sampling distribution p. To reduce the number of samples to measure, we might optimise the structure of the groups and the sampling distribution. For example, if we were able to construct N/L groups (L ≥ 1) such that ‖M^(ℓ) U_k‖_2 ≈ (L/N)^{1/2} – i.e., no k-bandlimited signal has more than 100 · (L/N)^{1/2} percent of its energy concentrated in each group – then setting p_ℓ = L/N, ℓ = 1, …, N/L, would yield ν_p ≈ 1. In this case, sampling one group is enough to embed the set of k-bandlimited signals. However, it is not obvious how we can construct such groups in practice, and we might not even have the flexibility to modify the structure of the groups. In such a case, the only possibility to reduce the number of measurements is to optimise the sampling distribution p to minimise ν_p. The sampling distribution minimising the coherence ν_p is the distribution p* ∈ ℝ^N that satisfies p*_ℓ := ‖M^(ℓ) U_k‖_2^2 / ∑_{ℓ'=1}^N ‖M^(ℓ') U_k‖_2^2, for all ℓ ∈ {1, …, N}, and for which ν_{p*}^2 = ∑_{ℓ=1}^N ‖M^(ℓ) U_k‖_2^2. Indeed, let p' ≠ p* be another sampling distribution. As the entries of p' and p* are nonnegative and sum to 1, we necessarily have p'_{ℓ'} < p*_{ℓ'} for some ℓ'. Then, ν_{p'}^2 ≥ p'^{−1}_{ℓ'} ‖M^(ℓ') U_k‖_2^2 > p*^{−1}_{ℓ'} ‖M^(ℓ') U_k‖_2^2 = ν_{p*}^2, where the last equality is obtained by replacing p*_{ℓ'} with its value. Therefore, ν_{p'} > ν_{p*} for any p' ≠ p*. A similar proof can be found in, e.g., <cit.>, where the authors derive the optimal sampling distribution for a compressive system. We notice that ν_{p*}^2 = ∑_{ℓ=1}^N ‖M^(ℓ) U_k‖_2^2 ≤ ∑_{ℓ=1}^N ‖M^(ℓ) U_k‖_F^2 = k. Hence, by using this distribution, (<ref>) shows that sampling O(k log(k)) groups is always sufficient to sample all k-bandlimited signals. The exact number is proportional to ν_{p*}^2 log(k). This is not in contradiction with the fact that at least k measurements are required in total, as one group contains at least one node. We also have ν_{p*}^2 = ∑_{ℓ=1}^N ‖M^(ℓ) U_k‖_2^2 ≤ N, as ‖M^(ℓ) U_k‖_2^2 ≤ 1. Therefore, in any case, the bound never suggests to sample much more than N groups, as one would expect. We recall that the results in <cit.> prove that it is always sufficient to sample O(k log(k)) nodes to embed the set of k-bandlimited signals. When sampling the nodes by groups, it is the number of groups to sample that should be O(k log(k)). When U_k is known explicitly, these quantities are simple to compute; see the sketch below.
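The following minimal sketch (our own code, practical only for small graphs where U_k is available in dense form) computes the local group graph coherences, the cumulative coherence ν_p, and the optimal distribution p* directly from their definitions.

```python
import numpy as np

def local_group_coherences(Uk, groups):
    """Spectral norms ||M^(l) U_k||_2, one value per group."""
    return np.array([np.linalg.norm(Uk[g, :], ord=2) for g in groups])

def coherence(Uk, groups, p):
    """Group graph cumulative coherence nu_p for a sampling distribution p."""
    return np.max(local_group_coherences(Uk, groups) / np.sqrt(p))

def optimal_distribution(Uk, groups):
    """The distribution p* minimising nu_p: proportional to ||M^(l) U_k||_2^2."""
    c2 = local_group_coherences(Uk, groups) ** 2
    return c2 / c2.sum()
```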
Sampling O(k log(k)) groups can correspond to a large number of individual nodes, but this is the potential price to pay when sampling the nodes by groups. Variable density sampling <cit.> and structured sampling <cit.> are also important topics in compressed sensing. The method proposed here is closely inspired by these studies, especially by <cit.>. Our results thus share several similarities with these works. We however benefit from a simpler signal model and take advantage of the graph structure to refine the results, propose simpler decoders to reconstruct the signal, and design efficient algorithms to estimate p*. §.§ A more practical result at small k Optimising the sampling distribution reduces to estimating the spectral norm of the matrices M^(ℓ) U_k. We will present in Section <ref> a method avoiding the computation of U_k. However, the method might still be too slow when a large number of groups is involved, as it requires the estimation of a spectral norm for each group separately. It is thus interesting to characterise the performance of the proposed method using other quantities that are easier to compute. In this section, we present results involving the quantity ν'_p := max_{1 ≤ ℓ ≤ N} { p_ℓ^{−1/2} ‖M^(ℓ) U_k‖_F }. The only difference between ν'_p and ν_p is that we substituted the Frobenius norm for the spectral norm. As ‖X‖_2 ≤ ‖X‖_F for any matrix X, we have ν_p ≤ ν'_p; hence the results involving ν'_p will be more pessimistic than those involving ν_p. We have ν'_p ≥ √(k). Indeed, k = ‖U_k‖_F^2 = ∑_{ℓ=1}^N ‖M^(ℓ) U_k‖_F^2 = ∑_{ℓ=1}^N p_ℓ · ‖M^(ℓ) U_k‖_F^2 / p_ℓ ≤ ‖p‖_1 · max_{1 ≤ ℓ ≤ N} { ‖M^(ℓ) U_k‖_F^2 / p_ℓ } = ν'^2_p. As with ν_p, the lower bound is also attained in, for example, the degenerated case where 𝒩_1 = {1, …, n}. As ν_p ≤ ν'_p, we have the following corollary to Theorem <ref>. Let M be a random subsampling matrix constructed as in (<ref>) using the groups 𝒩_1, …, 𝒩_N and the sampling distribution p. For any δ, ξ ∈ (0, 1), with probability at least 1−ξ, (<ref>) holds for all x ∈ span(U_k) provided that s ≥ (3/δ^2) ν'^2_p log(2k/ξ). As ν_p ≤ ν'_p, (<ref>) implies (<ref>). Theorem <ref> then proves that (<ref>) holds with probability at least 1−ξ for all x ∈ span(U_k). The sufficient condition (<ref>) can be much more pessimistic than (<ref>). Indeed, as we have ν'^2_p ≥ k, Condition (<ref>) suggests to always sample more than O(k log(k)) groups, while we know that sampling O(log(k)) groups can be enough. The interest of this result is thus in the regime where k is small. As we have done with ν_p, we can also optimise p to minimise ν'_p. The sampling distribution that minimises ν'_p is the distribution q* satisfying q*_ℓ := ‖M^(ℓ) U_k‖_F^2 / k, for all ℓ ∈ {1, …, N}, and for which ν'^2_{q*} = k. With this distribution, (<ref>) proves that sampling O(k log(k)) groups is always enough to sample all k-bandlimited signals. This result is particularly interesting at small k because estimating q* is much easier and faster than estimating p* (see Section <ref>). In some particular cases, this result can be interesting also at large k. Indeed, (<ref>) is a pessimistic bound. In reality, we may have ν_{q*}^2 ≪ k, so that, according to (<ref>), fewer samples than O(k log(k)) are actually sufficient when using q*. Furthermore, q* might actually be close to p*, in which case we would reach almost optimal results with q*. The occurrence of these events is however quite difficult to predict and depends on the structure of the groups and of the graph. Note that we need the knowledge of the truncated Fourier matrix U_k to compute the distributions p* and q*. Computing U_k can be intractable for large graphs. We will present a method to overcome this issue in Section <ref>.
We continue first by explaining how to estimate x from its measurements. § FAST RECONSTRUCTION Once the signal has been sampled, we shall also be able to reconstruct it. In <cit.>, the authors propose to estimate the original signal by solving min_{z ∈ ℝ^n} ‖P(M z − y)‖_2^2 + γ z^⊤ g(L) z, where γ > 0 and g is a nonnegative nondecreasing polynomial. We have g(L) := ∑_{i=0}^d α_i L^i = U g(Λ) U^⊤, where α_0, …, α_d ∈ ℝ are the coefficients of the polynomial and d ∈ ℕ its degree. These polynomial parameters can be tuned to improve the quality of the reconstruction. The function g can be viewed as a filter on 𝒢 and should ideally be a high-pass filter. The matrix P is introduced to account for the RIP. The advantage of this method is that it can be solved efficiently even for large graphs, for example by conjugate gradient. Indeed, each step of this algorithm can be implemented by using only matrix-vector multiplications with P and L, which are both sparse matrices. The matrix g(L) does not have to be computed explicitly. For brevity, we do not recall the theorem proving that (<ref>) provides accurate and stable recovery of all k-bandlimited signals. However, this theorem applies as soon as the restricted isometry property (<ref>) holds, and thus applies here when (<ref>) holds. The reconstruction quality is controlled by g(λ_k) and the ratio g(λ_k)/g(λ_{k+1}). One should seek to design a filter g such that these quantities are as close as possible to zero to improve the reconstruction quality. We propose now a method to obtain a faster estimation of the original signal when it is nearly piecewise constant. §.§ Piecewise constant graph signals Before continuing, we want to stress that we consider non-overlapping groups 𝒩_1, …, 𝒩_N in the rest of this section. If a graph signal is nearly piecewise constant over the groups 𝒩_1, …, 𝒩_N, then reconstructing the mean values of this signal for each group is enough to obtain a good approximation of the original signal. Instead of estimating n unknowns, we reduce the estimation to N unknowns. When N ≪ n, this is a large reduction of dimension yielding a significant speed-up and memory reduction. The fact that a signal x is piecewise constant over the groups 𝒩_1, …, 𝒩_N is characterised as follows. We construct the averaging row-vectors a^(ℓ) ∈ ℝ^{1 × n} that satisfy a^(ℓ) := 1^⊤ M^(ℓ) / n_ℓ^{1/2}, and the matrix A := (a^(1); …; a^(N)) ∈ ℝ^{N × n}. As the groups do not overlap, we have A A^⊤ = I, hence ‖A‖_2 = 1. Applying A to x provides N values, each one of them corresponding to the sum of the values of x within the group 𝒩_ℓ, scaled by n_ℓ^{−1/2}. Then, applying A^⊤ to A x gives an approximation of x where the values in the vector A^⊤ A x are constant within each group; this is a piecewise constant vector over the groups. The value of A^⊤ A x appearing within the group 𝒩_ℓ is exactly the average of x within 𝒩_ℓ. Saying that x is nearly constant within each group corresponds to assuming that ‖A^⊤ A x − x‖_2 ≤ ε ‖x‖_2, where ε ≥ 0 is a small value. The signal model of interest in this section is thus 𝒜_ε := { x ∈ span(U_k) | ‖(A^⊤ A − I) x‖_2 ≤ ε ‖x‖_2 }. §.§ Reducing the dimension To build a fast algorithm exploiting the above property, we use a reconstruction method similar to (<ref>) but involving vectors of smaller dimension. We define the averaged vector x̃ := A x ∈ ℝ^N of dimension N. As x ∈ 𝒜_ε, estimating x̃ is enough to get a good approximation of x – we just need to multiply it with A^⊤. Furthermore, as x is nearly piecewise constant over the groups 𝒩_1, …, 𝒩_N, by construction of the matrix M, the measurement vector y = M x is also almost piecewise constant over the sampled groups 𝒩_{ω_1}, …, 𝒩_{ω_s}.
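Before exploiting this structure, we record a minimal sketch of the baseline decoder above for the simple choice g(L) = L, solving the normal equations (M^⊤ P^2 M + γ L) z = M^⊤ P^2 y by conjugate gradient; only sparse matrix-vector products with L are needed. This is our own illustrative code, under the assumption that the sampled node indices are given as one concatenated array.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def decode_full(L, sel, pw, y, gamma):
    """Decoder with g(L) = L: solve (M^T P^2 M + gamma L) z = M^T P^2 y.

    L     sparse graph Laplacian (n x n)
    sel   node indices of the concatenated selected groups (length m)
    pw    diagonal of P^2, i.e. 1/p_{omega_j} repeated n_{omega_j} times
    y     the m measurements y = M x (possibly noisy)
    """
    n = L.shape[0]

    def matvec(z):
        out = gamma * (L @ z)
        np.add.at(out, sel, pw * z[sel])   # accumulates M^T P^2 M z
        return out

    rhs = np.zeros(n)
    np.add.at(rhs, sel, pw * y)            # M^T P^2 y
    A_op = LinearOperator((n, n), matvec=matvec, dtype=float)
    z, _ = cg(A_op, rhs)
    return z
```

The `np.add.at` calls make the sketch correct even when a node is sampled twice (a group drawn with replacement more than once).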
To exploit this structure, we average y over these groups by multiplying it with the matrix R ∈ ℝ^{s × m} that satisfies R_ji := n_{ω_j}^{−1/2} for ∑_{j'=1}^{j−1} n_{ω_j'} < i ≤ ∑_{j'=1}^{j} n_{ω_j'}, and 0 otherwise. We obtain ỹ := R y = R M x ∈ ℝ^s. We now have to link ỹ to x̃. We create the matrix M̃ ∈ ℝ^{s × N} that restricts the N mean values of x̃ to the s mean values of the selected groups, i.e., M̃_ji := 1 if i = ω_j, and 0 otherwise. We have therefore ỹ = M̃ x̃. The goal is now to estimate x̃ from ỹ. To ensure that the reconstruction method is stable to measurement noise, we do not consider the perfect scenario above but instead the scenario where ỹ = M̃ x̃ + ñ, where ñ ∈ ℝ^s models noise. We now need a regularisation term to estimate x̃. We obtain this term by reducing the dimension of the regularisation involving the Laplacian L in (<ref>). We compute L̃ := A g(L) A^⊤ = (A U) g(Λ) (A U)^⊤ ∈ ℝ^{N × N}. Note that L̃ is a symmetric positive semi-definite matrix. Like g(L), it can thus be used as a regularisation. We thus propose to estimate x̃ by solving min_{z̃ ∈ ℝ^N} ‖P̃(M̃ z̃ − ỹ)‖_2^2 + γ z̃^⊤ L̃ z̃, where γ > 0 and P̃ ∈ ℝ^{s × s} is the diagonal matrix with entries satisfying P̃_jj := p_{ω_j}^{−1/2}. Let z̃* ∈ ℝ^N be a solution of (<ref>). We finally obtain an estimation of x by computing A^⊤ z̃*. In the particular case where g(·) is the identity, one can notice that L̃_{ℓℓ'} = ∑_{(i,j) ∈ 𝒩_ℓ × 𝒩_ℓ'} L_ij / (n_ℓ^{1/2} n_{ℓ'}^{1/2}) is non-zero only if there is at least one edge in ℰ joining the groups 𝒩_ℓ and 𝒩_ℓ'. The Laplacian L̃ thus preserves the connections present in the original graph represented by L. The dimension of the unknown vector in (<ref>) is N, which can be much smaller than n. This leads to a large gain in memory and computation time when either L̃ or matrix-vector multiplications with L̃ can be computed rapidly. If g has a small degree, then L̃ can be computed explicitly in a short amount of time. In such a case, we can solve (<ref>) faster than (<ref>), as it involves matrices of (much) smaller dimensions. In general, it is however not always straightforward to find an efficient implementation of matrix-vector multiplications with L̃ without temporarily going back to the signal domain of dimension n, i.e., multiplying the vector z̃ with A^⊤, filtering the high-dimensional signal A^⊤ z̃, and downsampling the result. Even though solving (<ref>) might still be faster than solving (<ref>) in this situation, we lose part of the efficiency by working temporarily in the high-dimensional domain. We thus have less flexibility in the choice of g with this reconstruction technique. Let us also mention that (<ref>) can be used to initialise the algorithm used to solve (<ref>) with a good approximate solution, as in multigrid approaches for solving linear systems of equations, see, e.g., <cit.>. The following theorem bounds the error between the signal recovered by (<ref>) and the original vector. Let Ω = {ω_1, …, ω_s} be a set of s indices selected independently from {1, …, N} using a sampling distribution p ∈ ℝ^N; let M, P, M̃, P̃ be the associated matrices constructed respectively in (<ref>), (<ref>), (<ref>) and (<ref>), and M_max > 0 be a constant such that ‖P̃ M̃‖_2 ≤ M_max. Let ξ, δ ∈ (0, 1) and suppose that M satisfies (<ref>). With probability at least 1−ξ, the following holds for all x ∈ 𝒜_ε, all ñ ∈ ℝ^s, all γ > 0, and all nonnegative nondecreasing polynomial functions g such that g(λ_{k+1}) > 0. Let z̃* be the solution of (<ref>) with ỹ = M̃ A x + ñ. Define α* := U_k U_k^⊤ A^⊤ z̃* and β* := (I − U_k U_k^⊤) A^⊤ z̃*. Then, ‖α* − x‖_2 ≤ (1/√(s(1−δ))) · [ (2 + M_max/√(γ g(λ_{k+1}))) ‖P̃ ñ‖_2 + (M_max √(g(λ_k)/g(λ_{k+1})) + √(γ g(λ_k))) ‖x‖_2 + ε (2 M_max + M_max √(g(λ_n)/g(λ_{k+1})) + √(γ g(λ_n))) ‖x‖_2 ], and ‖β*‖_2 ≤ (1/√(γ g(λ_{k+1}))) ‖P̃ ñ‖_2 + √(g(λ_k)/g(λ_{k+1})) ‖x‖_2 + ε √(g(λ_n)/g(λ_{k+1})) ‖x‖_2. See Appendix <ref>.
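Before commenting on this theorem, here is a minimal sketch of the reduced decoder, again for g(L) = L; since the reduced system is only N × N, it can be solved exactly. The code and its helper names are ours, not from a reference implementation.

```python
import numpy as np
from scipy import sparse

def averaging_matrix(groups, n):
    """The N x n matrix A whose row l is 1^T M^(l) / sqrt(n_l)."""
    rows = np.concatenate([np.full(len(g), l) for l, g in enumerate(groups)])
    cols = np.concatenate(groups)
    vals = np.concatenate([np.full(len(g), len(g) ** -0.5) for g in groups])
    return sparse.csr_matrix((vals, (rows, cols)), shape=(len(groups), n))

def decode_reduced(L, A, omega, p, y_tilde, gamma):
    """Reduced decoder with g(L) = L: solve
    (Mt^T Pt^2 Mt + gamma * Lt) z = Mt^T Pt^2 y_tilde,  with  Lt = A L A^T."""
    N = A.shape[0]
    H = gamma * (A @ L @ A.T).toarray()    # reduced Laplacian, N x N
    rhs = np.zeros(N)
    for j, l in enumerate(omega):          # data term, one sampled group at a time
        H[l, l] += 1.0 / p[l]
        rhs[l] += y_tilde[j] / p[l]
    z = np.linalg.solve(H, rhs)
    return A.T @ z                         # back to the n nodes
```

The loop over `omega` correctly accumulates the data term even if a group is drawn several times, and the final `A.T @ z` returns the piecewise constant estimate of x.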
In the theorem above, the vector α* is the orthogonal projection of A^⊤ z̃* onto span(U_k), and β* is its projection onto the orthogonal complement of span(U_k). There are several remarks to make about the above theorem: * Theorem <ref> shows that the result obtained via (<ref>) is similar to the one we would have obtained by solving (<ref>) – see <cit.> for the error bounds – with additional errors controlled by ε. We recall that ε characterises how far x is from a piecewise constant signal. As expected, the smaller ε, the better the reconstruction. * The reconstruction quality improves when g(λ_k) and the ratio g(λ_k)/g(λ_{k+1}) go to 0, and when the ratio g(λ_n)/g(λ_{k+1}) tends to 1. We recall that we have g(λ_n) ≥ g(λ_{k+1}) ≥ g(λ_k) ≥ 0 by assumption. * The effect of the noise ñ decreases when g(λ_{k+1}) increases, and, obviously, γ should be adapted to the signal-to-noise ratio. Let us mention that the idea of “coarsening” a graph and the signals that live on it using a partition into different groups of nodes can also be found in <cit.>, where a multiresolution analysis method for graph signals is proposed. The coarsening method is however different from the one used here. § OPTIMAL SAMPLING ESTIMATION In this section, we come back to the sampling process of Section <ref> and leave the reconstruction problem. We explain how to estimate the sampling distributions p* and q* of Section <ref> without computing the truncated Fourier matrix U_k, as this computation is intractable for large graphs. The methods below only involve matrix-vector multiplications with the sparse Laplacian matrix L and are thus computationally tractable even for large n. §.§ Estimation of p* The distribution p*, defined in (<ref>), that minimises the coherence ν_p is entirely defined by the values of ‖M^(ℓ) U_k‖_2 for ℓ = 1, …, N, which are thus the quantities we need to evaluate. We recall that, in graph signal processing, a filter is represented by a function h: ℝ → ℝ, and that the signal x filtered by h is x_h := U diag(ĥ) U^⊤ x ∈ ℝ^n, where ĥ = (h(λ_1), …, h(λ_n))^⊤ ∈ ℝ^n. To filter the signal x without actually computing the graph Fourier transform of x, we can approximate the function h by a polynomial r(t) := ∑_{i=0}^d α_i t^i ≈ h(t) of degree d, and compute x_r instead of x_h. The filtered signal x_r is computed rapidly using the formula x_r = U diag(r(λ_1), …, r(λ_n)) U^⊤ x = ∑_{i=0}^d α_i L^i x, which involves only matrix-vector multiplications with the sparse Laplacian matrix L. We let the reader refer to <cit.> for more information on this fast filtering technique. For any polynomial function r of the form above and any square matrix X ∈ ℝ^{n × n}, we define r(X) := ∑_{i=0}^d α_i X^i. Note that r(L) = U r(Λ) U^⊤. Let i_{λ_k}: ℝ → ℝ be the ideal low-pass filter at cutoff frequency λ_k, i.e., the filter that satisfies i_{λ_k}(t) = 1 if t ≤ λ_k, and 0 otherwise. We have U_k U_k^⊤ = i_{λ_k}(L). Then, we notice that ‖M^(ℓ) U_k‖_2^2 = ‖M^(ℓ) U_k U_k^⊤ M^(ℓ)⊤‖_2 = ‖M^(ℓ) i_{λ_k}(L) M^(ℓ)⊤‖_2. We recall that M^(ℓ) is the matrix that restricts a signal to the nodes belonging to 𝒩_ℓ. The matrix appearing on the right-hand side of the last equality corresponds to the linear operator that 1) extends a vector on the complete graph by inserting 0 in all groups ℓ' ≠ ℓ, 2) low-pass filters the extended signal, and 3) restricts the result to the group 𝒩_ℓ. This process can be approximated by replacing the ideal low-pass filter i_{λ_k} with a polynomial approximation ĩ_{λ_k} of i_{λ_k}, so that ‖M^(ℓ) i_{λ_k}(L) M^(ℓ)⊤‖_2 ≈ ‖M^(ℓ) ĩ_{λ_k}(L) M^(ℓ)⊤‖_2. To estimate p*, we estimate the spectral norm of the matrix appearing on the right-hand side, for which matrix-vector multiplication is fast.
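In practice, this yields the following procedure, sketched below: for each group, a power method is applied to M^(ℓ) ĩ_{λ_k}(L) M^(ℓ)⊤, where each iteration extends the current vector by zeros, applies the polynomial filter, and restricts the result back to the group. The filter is passed in as a callable (e.g., a Jackson-Chebyshev approximation of the ideal low-pass filter); the code and the number of power iterations are our own choices.

```python
import numpy as np

def estimate_p_bar(apply_filter, groups, n, iters=50, seed=0):
    """Estimate p* via power iterations on M^(l) i(L) M^(l)^T.

    apply_filter(z) must apply a polynomial approximation of the ideal
    low-pass filter i_{lambda_k} to the signal z (fast filtering with L).
    """
    rng = np.random.default_rng(seed)
    lam_max = np.empty(len(groups))
    for l, g in enumerate(groups):
        v = rng.standard_normal(len(g))
        v /= np.linalg.norm(v)
        est = 0.0
        for _ in range(iters):
            z = np.zeros(n)
            z[g] = v                    # extend by zeros outside the group
            w = apply_filter(z)[g]      # filter, then restrict to the group
            est = np.linalg.norm(w)
            v = w / est                 # power iteration step
        lam_max[l] = est
    return lam_max / lam_max.sum()
```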
The quality of the approximation depends obviously on the choice of the polynomial ĩ_{λ_k}. Estimating ‖M^(ℓ) ĩ_{λ_k}(L) M^(ℓ)⊤‖_2 amounts to computing the largest eigenvalue of M^(ℓ) ĩ_{λ_k}(L) M^(ℓ)⊤, which can be done, e.g., by using the power method. This method requires matrix-vector multiplications only with M^(ℓ) and ĩ_{λ_k}(L) and is thus fast. Finally, the approximation p̄ ∈ ℝ^N of p* satisfies p̄_ℓ := λ_max(M^(ℓ) ĩ_{λ_k}(L) M^(ℓ)⊤) / ∑_{ℓ'=1}^N λ_max(M^(ℓ') ĩ_{λ_k}(L) M^(ℓ')⊤). Note that an estimation of λ_k is required beforehand to define the filter ĩ_{λ_k}. We estimate this value using the dichotomy method presented in <cit.>. §.§ Estimation of q* Computing p̄ requires the estimation of N eigenvalues. Even though these estimations can be done in parallel, this process might still be too slow for certain applications. As explained before, when k is small, we can use the sampling distribution q* in (<ref>) that minimises ν'_p. This distribution is faster to compute than p̄. We start by noticing that we have ‖M^(ℓ) U_k‖_F^2 = ∑_{i ∈ 𝒩_ℓ} ‖U_k^⊤ δ_i‖_2^2 for each group ℓ = 1, …, N. The vector δ_i ∈ ℝ^n is the unit vector that is null on all nodes except at node i. Hence, we only need an estimation of ‖U_k^⊤ δ_i‖_2^2, i = 1, …, n, to estimate q*. An algorithm was already proposed in <cit.> to estimate these values. We let the reader refer to Algorithm 1 in <cit.> for the details of the method. We just recall that this estimation is obtained by filtering O(log(n)) random signals with a polynomial approximation of i_{λ_k}. Finally, our estimation q̄ ∈ ℝ^N of q* has entries q̄_ℓ := ∑_{i ∈ 𝒩_ℓ} ‖U_k^⊤ δ_i‖_2^2 / ∑_{ℓ'=1}^N ∑_{i ∈ 𝒩_ℓ'} ‖U_k^⊤ δ_i‖_2^2, where each ‖U_k^⊤ δ_i‖_2^2 is estimated by Algorithm 1 in <cit.>. This estimation is faster than for p̄ because the power method is an iterative method that involves one filtering at each iteration. Furthermore, the power method is run independently for each group 𝒩_ℓ. In total, (much) more than N filterings are thus required. On the contrary, for q̄, we just need to filter O(log(n)) signals to obtain the estimation. In most situations, we already have O(log(n)) ≤ N, and computing q̄ is thus faster than computing p̄. § EXPERIMENTS In this last section, we first test our sampling strategies on two different graphs to illustrate the effect of the different sampling distributions on the minimum number of samples required to ensure that the RIP holds. Then, we apply our sampling strategy to user-guided object segmentation. In this application, we also test the different recovery techniques proposed in Section <ref>. §.§ Sampling distributions We perform experiments on two different graphs: the Minnesota graph of size n = 2642 and the bunny graph of size n = 2503. Both graphs are presented in Fig. <ref> and are available in the GSP toolbox <cit.>. For each graph, we group the nodes using the spatial coordinates associated with each node. For the Minnesota graph, we divide the space into 100 cells and group the nodes that fall in the same cell. After removing empty cells, we obtain the N = 73 groups represented in Fig. <ref>. For the bunny graph, we obtain N = 213 groups with a similar procedure (see Fig. <ref>). For each graph, we compute the combinatorial Laplacian and U_k for different values of k. Then, we compute the lower RIP constant, i.e., the constant δ_k > 0 that satisfies δ_k = 1 − inf_{x ∈ span(U_k), ‖x‖_2 = 1} (1/s) ‖P M x‖_2^2. This constant is the smallest value that δ can take such that the left-hand side of the RIP (<ref>) holds. Remark that δ_k = 1 − λ_min((1/s) U_k^⊤ M^⊤ P^2 M U_k). We estimate δ_k for 500 independent draws of the set Ω, which defines the matrices P and M, and different numbers of selected groups s; the computation for one draw is sketched below.
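A minimal sketch of this estimation (our own code; U_k is computed explicitly, which is feasible at these graph sizes):

```python
import numpy as np

def sample_groups(p, s, seed=None):
    """Draw s group indices i.i.d. (with replacement) from the distribution p."""
    return np.random.default_rng(seed).choice(len(p), size=s, p=p)

def lower_rip_constant(Uk, groups, p, omega):
    """delta_k = 1 - lambda_min((1/s) U_k^T M^T P^2 M U_k) for one draw omega."""
    s, k = len(omega), Uk.shape[1]
    X = np.zeros((k, k))
    for l in omega:
        Ul = Uk[groups[l], :]                # M^(l) U_k
        X += Ul.T @ Ul / (s * p[l])
    return 1.0 - np.linalg.eigvalsh(X)[0]    # eigenvalues sorted in ascending order
```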
All samplings are done in the conditions of Theorem <ref> using the sampling distributions u, p*, p̄, and q̄. The vector u denotes the uniform distribution over {1, …, N}. When conducting this experiment with the estimated distributions p̄ and q̄, we re-estimate these distributions at each of the 500 trials. These distributions are estimated using Jackson-Chebychev polynomials of order 50 <cit.>. For the Minnesota graph, we consider the bandlimits k = 5, 10, 20. For the bunny graph, we consider the bandlimits k = 10, 25, 50. We present the probability that δ_k is less than 0.995, estimated over the 500 draws of Ω, as a function of s in Fig. <ref>. For the Minnesota graph, the performance is better when using the optimal distribution p* than when using the uniform distribution u for all k, which is in line with the theory. The estimated p̄ and q̄ yield performance equivalent to p*. This confirms that we can achieve similar sampling performance without having to compute the Fourier matrix U_k, which, we recall, is intractable for large graphs. This also shows that q̄ can lead to nearly optimal results. For the bunny graph, all sampling distributions yield essentially the same results at all bandlimits. We notice a slight improvement at k = 50 when using p̄, q̄ or p* instead of u. For illustration, we present in Fig. <ref> examples of computed sampling distributions p*, p̄ and q̄. All sampling distributions exhibit similar structures, which explains why they all yield about the same performance in our experiments. §.§ Object segmentation §.§.§ Protocol We now test our method on interactive object segmentation. We consider the image of size n = 321 × 481 presented in Fig. <ref>, in which our goal is to segment the tiger. The ground-truth segmentation map x ∈ {0, 1}^n is presented in Fig. <ref>. The value 1 (white) indicates the presence of the tiger and the value 0 (black) stands for the background. The original image and the ground-truth image are part of the dataset available[<http://www.ntu.edu.sg/home/asjfcai/Benchmark_Website/benchmark_index.html>.] in <cit.>. Our objective is to recover the original map x from few user-inputs. To facilitate the interactions with the user, we divide the original image into the N = 600 superpixels shown in Fig. <ref> and computed with SLIC <cit.>, choose a small number of superpixels at random, and ask the user to label these superpixels: 1 if the superpixel belongs to the tiger; 0 otherwise. The graph 𝒢 used to propagate the user-labels to the complete image is constructed as follows; a sketch of this construction is also given below. We build a feature vector for each pixel by extracting a color RGB patch of size 3 × 3 around the pixel, transforming this patch into vector form, and augmenting this vector with the absolute 2D coordinates of the pixels in the extracted patch. This yields n feature vectors g_i ∈ ℝ^45, i = 1, …, n. We then connect each feature vector to its 9 nearest neighbours (in the Euclidean sense), which gives the set of edges ℰ. The adjacency matrix W ∈ ℝ^{n × n} has entries W_ij := exp[−‖g_i − g_j‖_2^2 / σ^2] for (i, j) ∈ ℰ, where σ > 0 is the 25th percentile of the set {‖g_i − g_j‖_2 : (i, j) ∈ ℰ}. We finally symmetrise the matrix W and compute the combinatorial Laplacian L ∈ ℝ^{n × n}. We study three strategies to choose the superpixels. The first strategy consists in choosing the superpixels uniformly at random, i.e., using the sampling distribution u. The second and third strategies consist in choosing the superpixels with, respectively, the optimised distributions q̄ and p̄, which we evaluate at k_0 = 50 using Jackson-Chebychev polynomials of order 75 <cit.>.
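A sketch of this graph construction is given below (our own code; it assumes the image is an h × w × 3 array, uses a k-d tree for the nearest-neighbour search, and symmetrises W by taking the entrywise maximum with its transpose — one common choice, as the symmetrisation method is not specified above):

```python
import numpy as np
from scipy import sparse
from scipy.spatial import cKDTree

def patch_graph(img, n_neighbors=9):
    h, w, _ = img.shape
    pad = np.pad(img, ((1, 1), (1, 1), (0, 0)), mode='edge')
    feats = np.empty((h * w, 45))
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 3, j:j + 3, :].reshape(-1)   # 27 colour values
            yy, xx = np.mgrid[i - 1:i + 2, j - 1:j + 2]    # 18 pixel coordinates
            feats[i * w + j] = np.concatenate([patch, yy.ravel(), xx.ravel()])
    d, idx = cKDTree(feats).query(feats, k=n_neighbors + 1)
    d, idx = d[:, 1:], idx[:, 1:]                # drop each point's self-match
    sigma = np.percentile(d, 25)                 # 25th percentile of the distances
    n = h * w
    rows = np.repeat(np.arange(n), n_neighbors)
    W = sparse.csr_matrix((np.exp(-(d.ravel() / sigma) ** 2),
                           (rows, idx.ravel())), shape=(n, n))
    return W.maximum(W.T)                        # symmetrise
```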
For illustration, we present in Fig. <ref> the estimated values ‖M^(ℓ) U_{k_0}‖_F^2 and ‖M^(ℓ) U_{k_0}‖_2^2, which define the optimised distributions q̄ and p̄, respectively. Both distributions indicate that one should label more superpixels around the tiger. The distribution q̄ however “focuses” more on specific regions, like the head of the tiger. The distribution p̄ spreads the measurements over the entire tiger more uniformly. We emulate user-interactions as follows. For each chosen superpixel, we compute the mean of the ground-truth map x within this superpixel. If the mean value is larger than 0.5, we label the superpixel as part of the tiger. Otherwise, we label the superpixel as part of the background. This strategy obviously introduces noise if some superpixels cover part of the background and part of the tiger. Once the labelling is done, we have access to the measurement vector ỹ ∈ ℝ^s from which we want to reconstruct x. We repeat this procedure for s ∈ {50, 70, …, 250}. For each s, we also repeat the experiment 50 times with independent draws of the superpixels. We draw the superpixels with replacement in all cases. To reconstruct the original map x, we first use the fast reconstruction method (<ref>) and then refine the solution at the pixel level with (<ref>), using the solution of the first minimisation problem as initialisation when solving the second minimisation problem. We choose g(L) = L and solve both problems in the limit where γ → 0. In this limit, the problems (<ref>) and (<ref>) become min_{z̃ ∈ ℝ^N} z̃^⊤ L̃ z̃ subject to M̃ z̃ = ỹ, and min_{z ∈ ℝ^n} z^⊤ g(L) z subject to M z = y, respectively. Both problems are solved using FISTA <cit.>; a sketch of the resulting scheme is given at the end of this section. The same stopping criteria are used for all experiments. §.§.§ Results We present in the top panel of Fig. <ref> the reconstruction snr obtained with the different methods. The reconstruction snr is defined as −20 log_10(‖x − x*‖_2 / ‖x‖_2), where x* is the reconstructed signal. We notice that the snr attained with the fast decoder (<ref>) is very similar to the snr attained with (<ref>). We also remark that the optimised distributions p̄ and q̄ yield better reconstructions than the uniform distribution u. The mean reconstruction snr is slightly better with p̄ than with q̄ at s ≥ 150. We present the computation time of each method in the bottom panel of Fig. <ref>. We notice that solving (<ref>) is much faster than solving (<ref>), while they yield almost the same quality. This highlights the interest of the fast reconstruction technique. It is also interesting to note that it is faster to solve (<ref>) when the measurements are drawn with p̄ or with q̄ than with u. The reason is probably a better initialisation of (<ref>) or a better “quality” of the measurements with the optimised distributions than with the uniform distribution. Finally, we present in Fig. <ref> some examples of reconstructions from s = 150 sampled superpixels for each method. We notice that the optimised sampling distributions improve the reconstruction of x around the head and tail of the tiger, i.e., where the optimised distributions have higher values. With a uniform distribution, the structure of the graph makes it difficult to reconstruct x around the head and tail from the values of other superpixels. The optimised sampling distributions compensate for this issue by favouring these areas when selecting the measurements.
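For completeness, here is a minimal sketch of the constrained solver used in the limit γ → 0, for the pixel-level problem with g(L) = L. In this setting, FISTA reduces to an accelerated projected-gradient scheme: a gradient step on z^⊤ L z followed by a projection that re-imposes the measured values. The code is our own illustration and assumes the sampled nodes are distinct; the step size and iteration count are free parameters.

```python
import numpy as np
from scipy.sparse.linalg import eigsh

def solve_constrained(L, sel, y, iters=500):
    """Accelerated projected gradient for: min z^T L z  s.t.  z[sel] = y."""
    n = L.shape[0]
    lmax = eigsh(L, k=1, which='LA', return_eigenvectors=False)[0]
    step = 1.0 / (2.0 * lmax)        # 1 / Lipschitz constant of the gradient 2 L z

    def project(v):                  # prox of the constraint: re-impose the labels
        v = v.copy()
        v[sel] = y
        return v

    z, v, t = project(np.zeros(n)), project(np.zeros(n)), 1.0
    for _ in range(iters):
        z_new = project(v - step * 2.0 * (L @ v))       # gradient step + projection
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        v = z_new + ((t - 1.0) / t_new) * (z_new - z)   # FISTA momentum
        z, t = z_new, t_new
    return z
```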
We proved that the local group graph cumulative coherence quantifies the importance of sampling each group to ensure a stable embedding of all k-bandlimited signals. Finally, we presented a fast reconstruction technique for k-bandlimited signals which are also nearly piecewise-constant over pre-defined groups of nodes.Among the possible applications of these methods, we believe that they can also be useful to accelerate the compressive spectral clustering method proposed in <cit.>. After having computed feature vectors for each node, this compressive method works by downsampling the set of feature vectors, performing k-means on this reduced set to find the clusters, and interpolating the clustering results on all nodes by solving (<ref>). To accelerate the method, one could 1) pre-group similar nodes into a number of groups that is at least the number of clusters but much smaller than the number of nodes, e.g., by running a few iterations of the k-means algorithm; 2) subsample this set of groups; 3) cluster this subset; and 4) solve (<ref>) to cluster all nodes. If the overhead of computing the groups is small, this method has the potential to be faster than the original compressive spectral clustering method.Finally, we would like to discuss two limitations of the proposed methods. First, the optimal sampling distribution depends on the bandlimit parameter k. In some applications, the final result may change a lot depending on the value of k which was chosen to compute this distribution. Finding a range of values of k which gives acceptable and stable results is thus an important step in every application. Second, the estimation of the optimal sampling distribution depends on the quality of the polynomial approximation of the ideal low-pass filter at cutoff λ_k. It is sometimes necessary to use a polynomial of large degree to get a correct estimation, which limits the computational efficiency of the proposed methods. In such cases, it would be especially useful to find more efficient alternatives to estimate the distributions p⃗^* and q⃗^*.§ PROOF OF THEOREM <REF>As done in <cit.>, the proof is obtained by applying the following lemma obtained by Tropp in <cit.>. Consider a finite sequence {X_j } of independent, random, self-adjoint, positive semi-definite matrices of dimension d × d. Assume that each random matrix satisfies λ_ max(X_j) ≤ R almost surely. Define μ_ min := λ_ min( ∑_j𝔼 X_j )and μ_ max := λ_ max( ∑_j𝔼 X_j ). Then ℙ{λ_ min( ∑_j X_j )≤ (1 - δ) μ_ min} ≤ d[ e^-δ/(1-δ)^1-δ]^μ_ min/R for δ∈ [0, 1], andℙ{λ_ max( ∑_j X_j )≥ (1 + δ) μ_ max} ≤ d[ e^δ/(1+δ)^1+δ]^μ_ max/R for δ≥ 0.We will also use the facts that, for all δ∈ [0, 1], [ e^-δ/(1-δ)^1-δ]^μ_ min/R ≤ exp( - δ^2 μ_ min/3 R)and [ e^δ/(1+δ)^1+δ]^μ_ max/R ≤ exp( - δ^2 μ_ max/3 R).We start by noticing that _^^PP_ = ∑_j = 1^(_^^(ω_j)^P^(ω_j)) (P^(ω_j)^(ω_j)_). We define X_j := 1/(_^^(ω_j)^P^(ω_j)) (P^(ω_j)^(ω_j)_)and X := ∑_j X_j =_^^P^2 _. The matrix X is thus a sum of independent, random, self-adjoint, positive semi-definite matrices. We are in the setting of Lemma <ref>. We continue by computing 𝔼 X_j and λ_ max(X_j).The expected value of each X_j is 𝔼 X_j=𝔼[ 1/(_^^(ω_j)^P^(ω_j)) (P^(ω_j)^(ω_j)_) ] = _^(∑_ℓ=1^_ℓ(^(ℓ)^P^(ℓ)) (P^(ℓ)^(ℓ))) _ =_^(∑_ℓ=1^^(ℓ)^^(ℓ)) _ =_^_ =I. Therefore,λ_ min( ∑_j 𝔼 X_j) = 1 and λ_ max( ∑_j 𝔼 X_j) = 1. Furthermore, for all j = 1, …,, we have λ_ max (X_j) = X_j_2≤max_1 ≤ℓ≤P^(ℓ)^(ℓ)_/_2^2 = 1/ max_1 ≤ℓ≤{^(ℓ)__2^2/_ℓ} = _^2/.
Lemma <ref> yields, for any δ∈ (0, 1), ℙ{λ_ min( X)≤ (1 - δ) }≤ k [ e^-δ/(1-δ)^1-δ]^/_^2 ≤exp( - δ^2 /3_^2)andℙ{λ_ max( X)≥ (1 + δ) }≤ k [ e^δ/(1+δ)^1+δ]^/_^2 ≤exp( - δ^2 /3_^2). Therefore, for any δ∈ (0, 1), we have, with probability at least 1 - ξ, 1 - δ≤λ_ min( X)andλ_ max( X) ≤ 1+δ provided that (<ref>) holds. Noticing that (<ref>) implies that (1-δ) α⃗_2^2 ≤P_α⃗_2^2≤(1+δ) α⃗_2^2, for all α⃗∈^k, which is equivalent to (<ref>) for all ∈(_k), terminates the proof.§ PROOF OF THEOREM <REF>In order to prove Theorem <ref>, we need to establish a few properties relating the different matrices used in this work.The first useful property is P = P. Indeed, for any z⃗∈^, the j^th entry of Pz⃗ is ( Pz⃗)_j = 1⃗^^(ω_j)z⃗/( _ω_j _ω_j)^1/2. Then, the j^th entry of Pz⃗ is the scaled sum of the values in the j^th sampled group appearing in Pz⃗, which is _ω_j^-1/2^(ω_j)z⃗. From the definition of , the sum is scaled by _ω_j^-1/2. Therefore, ( Pz⃗ )_j = ( Pz⃗ )_j for all j ∈{1, …, }, which terminates the proof.The second property is^ = I, which implies _2 = 1.The third property isP^z̃⃗̃_2=P^z̃⃗̃_2, for all z̃⃗̃∈^. To prove this property, we remark that P^z̃⃗̃ =^Pz̃⃗̃. Indeed, the entries in P^z̃⃗̃ corresponding to the first selected group _ω_1 are all equal to (_ω_1 _ω_1)^-1/2z̃⃗̃_ω_1. There are _ω_1 such entries. One can also notice that the first _ω_1 entries of ^Pz̃⃗̃ are also equal to (_ω_1 _ω_1)^-1/2z̃⃗̃_ω_1. Repeating this reasoning for all the sampled groups proves the equality. On one hand, we thus have P^z̃⃗̃_2 =^Pz̃⃗̃_2 = Pz̃⃗̃_2, where we used (<ref>). On the other hand, we have P^z̃⃗̃_2 =P^z̃⃗̃_2 = Pz̃⃗̃_2, where we used (<ref>) and (<ref>). This terminates the proof.As ^* is a minimiser of (<ref>), we have P(^* - )_2^2 + (^*)^^*≤ P( - )_2^2 +^. To prove the theorem, we need to lower and upper bound the left and right hand sides of (<ref>), respectively. We start with the bound involvingand then with the ones involving .[Bounding the terms in (<ref>) involving ]. We define the matrices _:= ( _+1, …, _) ∈^× ( - ), G_:= ( g(_1), …, g(_) ) ∈^×, G̅_ := ( g(_+1), …, g(_) ) ∈^( - ) × ( - ). By definition of α⃗^* and β⃗^*, ^^* =α⃗^* + β⃗^* with α⃗^* ∈(_) and β⃗^* ∈(_). We recall that = () g() ()^. Therefore, we obtain(^*)^^* = (_^α⃗^*)^ G_(_^α⃗^*) + (_^β⃗^*)^ G̅_(_^β⃗^*) ≥ g(_+1) β⃗^*_2^2. In the first step, we used the facts that _^α⃗^* = 0⃗ and _^β⃗^* = 0⃗. The second step follows from the fact that _^β⃗^* _2 = β⃗^* _2. We also have ^= (^^)^g() (^^)≤ g(_) _^_2^2 +g(_) _^^_2^2 ≤ g(_) ^_2^2 +g(_) _^^_2^2≤g(_) _2^2 +g(_) [ _^_2^2 +_^ (^ - )_2^2 ] ≤ g(_) _2^2 +ϵ^2 g(_) _2^2. The second inequality follows from the facts that _k_2 = 1 and =. To obtain the third inequality, we used _2 = 1 and the triangle inequality. For the last step, notice that _^_2 = 0 (as ∈(_)), _k_2 = 1 and use (<ref>). [Bounding the terms in (<ref>) involving ]. By definition of , it is immediate that P( - )_2^2 = P_2^2. For the other term involving , the triangle inequality yields P^* - P_2 ≥P^* - P_2 - P_2. Then, we have P^* - P_2 = P^^* - P_2 =P^^* - P_2 ≥P^^* - P^_2 - P^ - P_2. The first equality follows from ^ = I, the second from (<ref>), and the triangle inequality was used in the last step. To summarise, we arrive at P^* - P_2 ≥P^^* - P^_2 - P^ - P_2 - P_2. We continue by lower bounding P^^* - P^_2 and upper bounding P^ - P_2 separately.[Lower bound on P^^* - P^_2]. Equality (<ref>) yields P^^* - P^_2 =P^^* - P^_2.
Using the triangle inequality and the fact that ∈, we obtainP^^* - P^_2≥P^^* - P_2 - P - P^_2 ≥P^^* - P_2 - P_2 ^ - _2 ≥P^^* - P_2 - ϵM_ max_2. The restricted isometry property (<ref>) then yields P^^* - P_2 = P(α⃗^* - ) + Pβ⃗^*_2 ≥P (α⃗^* - )_2 - Pβ⃗^*_2 ≥√( (1-δ)) α⃗^* - _2 - M_ maxβ⃗^*_2. We used the equality ^^* = α⃗^* + β⃗^*. We thus proved that P^^* - P^_2 ≥√( (1-δ)) α⃗^* - _2 - M_ maxβ⃗^*_2 - ϵ M_ max_2.[Upper bounding P^ - P_2]. We have P^ - P_2 ≤_2 P_2 ^ - _2 ≤ϵM_ max_2. We used the facts that _2 = 1 (see (<ref>)) and that ∈ in the last inequality.Using the inequalities (<ref>) and (<ref>) in (<ref>), we arrive at P^* - P_2 ≥√( (1-δ)) α⃗^* - _2 - M_ maxβ⃗^*_2 - 2ϵ M_ max_2 - P_2.[Finishing the proof]. Remark that (<ref>) implies P(^* - )_2^2≤P( - )_2^2 +^, and(^*)^^* ≤P( - )_2^2 +^. Using (<ref>), (<ref>), (<ref>) in (<ref>), we obtain g(_+1) β⃗^*_2^2 ≤ P_2^2+g(_) _2^2 + ϵ^2g(_) _2^2, which implies (<ref>) in Theorem <ref>. It remains to prove (<ref>) to finish the proof.Using (<ref>), (<ref>) and (<ref>) in (<ref>), we obtain √( (1-δ))α⃗^* - _2 ≤ 2 P_2 + M_ maxβ⃗^*_2 + (√( g(_)) + ϵ√( g(_)) + 2 ϵ M_ max) _2. Using (<ref>) to bound β⃗^*_2 on the right hand side, we have √( (1-δ))α⃗^* - _2 ≤ 2 P_2 + M_ max/√( g(_+1))P_2 + M_ max√(g(_)/g(_+1))_2+ ϵ M_ max√(g(_)/g(_+1))_2 + (√( g(_)) + ϵ√( g(_)) + 2 ϵ M_ max) _2 = ( 2 +M_ max/√( g(_+1)))P_2 + ( M_ max√(g(_)/g(_+1)) + √( g(_))) _2+ ϵ( 2 M_ max + M_ max√(g(_)/g(_+1)) + √( g(_))) _2. We only rearranged the terms in the last step. This proves (<ref>) and terminates the proof.
http://arxiv.org/abs/1705.02202v1
{ "authors": [ "Gilles Puy", "Patrick Pérez" ], "categories": [ "cs.SI", "cs.IT", "math.IT" ], "primary_category": "cs.SI", "published": "20170226215655", "title": "Structured sampling and fast reconstruction of smooth graph signals" }
On the intersection graph of ideals of a commutative ring Keywords: Intersection graph, perfect graph, clique number, chromatic number, diameter, girth.   2010 Mathematics Subject Classification: 05C15, 05C17, 05C69, 13A99, 13C99.F. HeydariDepartment of Mathematics, Karaj Branch, Islamic Azad University, Karaj, Iranf-heydari@kiau.ac.ir Let R be a commutative ring and M be an R-module, and let I(R)^* be the set of all non-trivial ideals of R. The M-intersection graph of ideals of R, denoted by G_M(R), is a graph with the vertex set I(R)^*, and two distinct vertices I and J are adjacent if and only if IM∩ JM≠ 0. For every multiplication R-module M, the diameter and the girth of G_M(R) are determined. Among other results, we prove that if M is a faithful R-module and the clique number of G_M(R) is finite, then R is a semilocal ring. We denote the ℤ_n-intersection graph of ideals of the ring ℤ_m by G_n(ℤ_m), where n,m≥ 2 are integers and ℤ_n is a ℤ_m-module. We determine the values of n and m for which G_n(ℤ_m) is perfect. Furthermore, we derive a sufficient condition for G_n(ℤ_m) to be weakly perfect.§ INTRODUCTIONLet R be a commutative ring, and I(R)^* be the set of all non-trivial ideals of R. There are many papers on assigning a graph to a ring R, for instance see [1–4]. Also the intersection graphs of some algebraic structures such as groups, rings and modules have been studied by several authors, see <cit.>.In <cit.>, the intersection graph of ideals of R, denoted by G(R), was introduced as the graph with vertices I(R)^* in which, for distinct I,J∈ I(R)^*, the vertices I and J are adjacent if and only if I∩ J≠ 0. Also in <cit.>, the intersection graph of submodules of an R-module M, denoted by G(M), is defined to be the graph whose vertices are the non-trivial submodules of M and two distinct vertices are adjacent if and only if they have non-zero intersection.In this paper, we generalize G(R) to G_M(R), the M-intersection graph of ideals of R, where M is an R-module.Throughout the paper, all rings are commutative with non-zero identity and all modules are unitary. A module is called a uniform module if the intersection of any two non-zero submodules is non-zero. An R-module M is said to be a multiplication module if every submodule of M is of the form IM, for some ideal I of R. The annihilator of M is denoted by ann(M). The module M is called a faithful R-module if ann(M)=0. By a non-trivial submodule of M, we mean a non-zero proper submodule of M. Also, J(R) denotes the Jacobson radical of R and Nil(R) denotes the ideal of all nilpotent elements of R. By Max(R), we denote the set of all maximal ideals of R.A ring having only finitely many maximal ideals is said to be a semilocal ring. As usual, ℤ and ℤ_n will denote the integers and the integers modulo n, respectively.A graph in which any two distinct vertices are adjacent is called a complete graph. We denote the complete graph on n vertices by K_n. A null graph is a graph containing no edges. Let G be a graph. The complement of G is denoted by G̅.
The set of vertices and the set of edges of G are denoted by V(G) and E(G), respectively. A subgraph H of G is said to be an induced subgraph of G if it has exactly the edges that appear in G over V(H). Also, a subgraph H of G is called a spanning subgraph if V(H)=V(G). Suppose that x,y∈ V(G). We denote by deg(x) the degree of a vertex x in G. A regular graph is a graph where each vertex has the same degree. We recall that a walk between x and y is a sequence x=v_0 — v_1 — ⋯ — v_k=y of vertices of G such that for every i with 1≤ i ≤ k, the vertices v_i-1 and v_i are adjacent. A path between x and y is a walk between x and y without repeated vertices. We say that G is connected if there is a path between any two distinctvertices of G. For vertices x and y of G, let d(x,y) be the length of a shortest path from x to y (d(x,x)=0 and d(x,y)=∞ if there is no path between x and y). The diameter of G, diam(G), is the supremum of the set {d(x,y) : xand yare vertices of G}. The girth of G, denoted by gr(G), is the length of a shortest cycle in G (gr(G)=∞ if G contains no cycles). A clique in G is a set of pairwise adjacent vertices and the number of vertices in the largest clique of G, denoted by ω(G), is called the clique number of G. The chromatic number of G, χ(G), is the minimal number of colors which can be assigned to the vertices of G in such a way that every two adjacent vertices have different colors. A graph G is perfect if for every induced subgraph H of G, χ(H)=ω(H). Also, G is called weakly perfect if χ(G)=ω(G).In the next section, we introduce the M-intersection graph of ideals of R, denoted by G_M(R), where R is a commutative ring and M is a non-zero R-module. It is shown that for every multiplication R-module M, diam(G_M(R))∈{0,1,2,∞} and gr(G_M(R))∈{3,∞}. Among other results, we prove that if M is a faithful R-module and ω(G_M(R)) is finite, then |Max(R)|≤ω(G_M(R))+1 and J(R)=Nil(R). In the last section, we consider the ℤ_n-intersection graph of ideals of ℤ_m, denoted by G_n(ℤ_m), where n,m≥ 2 are integers and ℤ_n is a ℤ_m-module. We show that G_n(ℤ_m) is a perfect graph if and only if n has at most four distinct prime divisors. Furthermore, we derive a sufficient condition for G_n(ℤ_m) to be weakly perfect. As a corollary, it is shown that the intersection graph of ideals of ℤ_m is weakly perfect, for every integer m≥ 2.§ THE M-INTERSECTION GRAPH OF IDEALS OF R In this section, we introduce the M-intersection graph of ideals of R and study its basic properties. Definition. Let R be a commutative ring and M be a non-zero R-module. The M-intersection graph of ideals of R, denoted by G_M(R), is the graph with vertices I(R)^* and two distinct vertices I and J are adjacent if and only if IM∩ JM≠ 0.Clearly, if R is regarded as a module over itself, that is, M=R, then the M-intersection graph of ideals of R is exactly the same as the intersection graph of ideals of R. Also, if M and N are two isomorphic R-modules, then G_M(R) is the same as G_N(R). Let R=ℤ_12. Then we have the following graphs. 
[Figure: four graphs on the vertex set {2ℤ_12, 3ℤ_12, 4ℤ_12, 6ℤ_12}. G(ℤ_12) has the edges {2ℤ_12,3ℤ_12}, {3ℤ_12,6ℤ_12}, {2ℤ_12,6ℤ_12} and {2ℤ_12,4ℤ_12}; G_ℤ_2(ℤ_12) has no edges; G_ℤ_3(ℤ_12) has the single edge {2ℤ_12,4ℤ_12}; G_ℤ_4(ℤ_12) has the edges {2ℤ_12,3ℤ_12}, {3ℤ_12,6ℤ_12} and {2ℤ_12,6ℤ_12}.]Let n≥ 2 be an integer. If [m_1,m_2] is the least common multiple of two distinct integers m_1,m_2≥ 2, then m_1ℤℤ_n∩ m_2ℤℤ_n=m_1ℤ_n∩ m_2ℤ_n=[m_1,m_2]ℤ_n. Thus m_1ℤ and m_2ℤ are adjacent in G_ℤ_n(ℤ) if and only if n does not divide [m_1,m_2].Let p be a prime number and n,m be two positive integers. If p^n divides m, then mℤ is an isolated vertex of G_ℤ_p^n(ℤ). Therefore, since ℤ_p^n is a uniform ℤ-module, G_ℤ_p^n(ℤ) is a disjoint union of an infinite complete graph and its complement. Also, ℤ_p^∞ (the quasi-cyclic p-group) is a uniform ℤ-module and ann(ℤ_p^∞)=0. Hence G_ℤ_p^∞(ℤ) is an infinite complete graph. Obviously, if M is a faithful multiplication R-module, then G_M(R) is a complete graph if and only if M is a uniform R-module.Let R be a commutative ring and let M be a non-zero R-module.* If M is a faithful R-module, then G(R) is a spanning subgraph of G_M(R). To see this, suppose that I and J are adjacent vertices of G(R). Then I∩ J≠ 0 implies that (I∩ J)M≠ 0 and so IM∩ JM≠ 0. Therefore I is adjacent to J in G_M(R). * If M is a multiplication R-module, then G(M) is an induced subgraph of G_M(R). Note that for each non-trivial submodule N of M, there is a non-trivial ideal I of R such that N=IM, and so we can assign N to I. Also, N=IM is adjacent to K=JM in G(M) if and only if IM∩ JM≠ 0, that is, if and only if I is adjacent to J in G_M(R). Let R be a commutative ring and let M be a faithful R-module. If G_M(R) is not connected, then M is a direct sum of two R-modules.Suppose that C_1 and C_2 are two distinct components of G_M(R). Let I∈ C_1 and J∈ C_2. Since M is a faithful R-module, IM∩ JM=0 implies that I⊈ J and J⊈ I. Now if I+J≠ R, then I — I+J — J is a path between I and J, a contradiction. Thus I+J= R and so M=IM⊕ JM. The next theorem shows that for every multiplication R-module M, the diameter of G_M(R) has 4 possibilities.Let R be a commutative ring and M be a multiplication R-module. Then diam(G_M(R))∈{0,1,2,∞}.Assume that G_M(R) is a connected graph with at least two vertices. So M is a faithful module. If there is a non-trivial ideal I of R such that IM=M, then I is adjacent to all other vertices. Hence diam(G_M(R))≤ 2. Otherwise, we claim that G(M) is connected.
Let N and K be two distinct vertices of G(M). Since M is a multiplication module, N=IM and K=JM, for some non-trivial ideals I and J of R. Suppose that I=I_1 — I_2 — ⋯ — I_n=J is a path between I and J in G_M(R). Therefore, N — I_2M — ⋯ — I_n-1M — K is a walk between N and K. Thus, we conclude that there is also a path between N and K in G(M). The claim is proved. So by <cit.>, diam(G(M))≤ 2. Now, suppose that I_1 and I_2 are two distinct vertices of G_M(R). If I_1M∩ I_2M=0, then I_1M and I_2M are two distinct vertices of G(M). Hence there exists a non-trivial submodule N of M which is adjacent to both I_1M and I_2M in G(M). Since M is a multiplication module, N=JM, for some non-trivial ideal J of R. Thus J is adjacent to both I_1 and I_2 in G_M(R). Therefore diam(G_M(R))≤ 2. Let R be a commutative ring and M be a multiplication R-module. If G_M(R) is a connected regular graph of finite degree, then G_M(R) is a complete graph.Suppose that G_M(R) is a connected regular graph of finite degree. If ann(M)≠ 0, then G_M(R)=K_1. So assume that ann(M)=0. We claim that M is an Artinian module. Suppose to the contrary that M is not an Artinian module. Then there is a descending chain I_1M⊃ I_2M⊃⋯⊃ I_nM⊃⋯ of submodules of M, where the I_i's are non-trivial ideals of R. This implies that deg(I_1) is infinite, a contradiction. The claim is proved. Therefore M has at least one minimal submodule. To complete the proof, it suffices to show that M contains a unique minimal submodule. Suppose, to the contrary, that N_1 and N_2 are two distinct minimal submodules of M. Hence N_1=I_1M and N_2=I_2M, where I_1 and I_2 are two non-trivial ideals of R. Since N_1∩ N_2=0, I_1 and I_2 are not adjacent. By Theorem <ref>, there is a vertex J which is adjacent to both I_1 and I_2. So both I_1M and I_2M are contained in JM. Thus each vertex adjacent to I_1 is adjacent to J too. This implies that deg(J) > deg(I_1), a contradiction. Also, the following theorem shows that for every multiplication R-module M, the girth of G_M(R) has 2 possibilities.Let R be a commutative ring and M be a multiplication R-module. Then gr(G_M(R))∈{3,∞}. Suppose that I_1 — I_2 — ⋯ — I_n — I_1 is a cycle of length n in G_M(R). If n=3, we are done. Thus assume that n≥4. Since I_1M∩ I_2M≠ 0 and M is a multiplication module, we have I_1M∩ I_2M=JM, where J is a non-zero ideal of R. If J is a proper ideal of R and J≠ I_1,I_2, then I_1 — J — I_2 — I_1 is a triangle in G_M(R). Otherwise, we conclude that I_1M⊆ I_2M or I_2M⊆ I_1M. Similarly, we can assume that I_iM⊆ I_i+1M or I_i+1M⊆ I_iM, for every i, 1<i<n. Without loss of generality, suppose that I_1M⊆ I_2M. Now, if I_2M⊆ I_3M, then I_1 — I_2 — I_3 — I_1 is a cycle of length 3 in G_M(R). Therefore assume that I_3M⊆ I_2M. Since I_3M⊆ I_4M or I_4M⊆ I_3M, in either case I_2 — I_3 — I_4 — I_2 is a triangle in G_M(R). Hence if G_M(R) contains a cycle, then gr(G_M(R))=3. Let R be a commutative ring and M be a non-zero R-module. If I is an isolated vertex of G_M(R), then the following hold: * I is a maximal ideal of R or I⊆ ann(M). * If I⊈ ann(M), then I=Ra, for every a∈ I∖ ann(M).(1) There is a maximal ideal 𝔪 of R such that I⊆𝔪. Assume that I≠𝔪. Then we have IM=IM∩𝔪M=0, since I is an isolated vertex. So I⊆ ann(M).(2) Suppose that a∈ I∖ ann(M) and I≠ Ra. Since I is an isolated vertex, we have RaM=IM∩ RaM=0 and so a∈ ann(M), a contradiction. Thus I=Ra.Let R be a commutative ring and M be a faithful R-module.
If G_M(R) is a null graph, then it has at most two vertices and R is isomorphic to one of the following rings: * F_1× F_2, where F_1 and F_2 are fields; * F[x]/(x^2), where F is a field; * L, where L is a coefficient ring of characteristic p^2, for some prime number p. By Lemma <ref>, every non-trivial ideal of R is maximal and so by <cit.>, R cannot have more than two different non-trivial ideals. Thus G_M(R) has at most two vertices. Also, by <cit.>, R is isomorphic to one of the mentioned rings.In the next theorem we show that if M is a faithful R-module and ω(G_M(R))<∞, then R is a semilocal ring.Let R be a commutative ring and M be a faithful R-module. If ω(G_M(R)) is finite then |Max(R)|≤ω(G_M(R))+1 and J(R)=Nil(R).First we prove that |Max(R)|≤ω(G_M(R))+1. Let ω=ω(G_M(R)). By contradiction, assume that 𝔪_1,…,𝔪_ω+2 are distinct maximal ideals of R. We know that 𝔪_1⋯𝔪_i≠0, for every i, 1≤ i≤ω+1. Otherwise, 𝔪_1⋯𝔪_j=0, for some j, 1≤ j≤ω+1. So 𝔪_1⋯𝔪_j⊆𝔪_j+1 and hence by the Prime Avoidance Theorem <cit.>, we have 𝔪_t⊆𝔪_j+1, for some t, 1≤ t≤ j, which is impossible. This implies that {𝔪_1,𝔪_1𝔪_2,…,𝔪_1⋯𝔪_ω+1} is a clique in G_M(R), a contradiction. Thus |Max(R)|≤ω+1.Now, we prove that J(R)=Nil(R). Suppose, to the contrary, that a∈ J(R)∖ Nil(R). Since Ra^iM∩ Ra^jM≠ 0 for all i<j and ω(G_M(R)) is finite, we conclude that Ra^t= Ra^s, for some integers t<s. Hence a^t(1-ra^s-t)=0, for some r∈ R. Since a∈ J(R), 1-ra^s-t is a unit. This yields that a^t=0, a contradiction. The proof is complete.§ THE ℤ_N-INTERSECTION GRAPH OF IDEALS OF ℤ_M Let n,m≥ 2 be two integers and ℤ_n be a ℤ_m-module. In this section we study the ℤ_n-intersection graph of ideals of the ring ℤ_m. Also, we generalize some results given in <cit.>. For abbreviation, we denote G_ℤ_n(ℤ_m) by G_n(ℤ_m). Clearly, ℤ_n is a ℤ_m-module if and only if n divides m. Throughout this section, without loss of generality, we assume that m=p_1^α_1⋯ p_s^α_s and n=p_1^β_1⋯ p_s^β_s, where p_i's are distinct primes, α_i's are positive integers, β_i's are non-negative integers, and 0≤β_i≤α_i for i=1,…,s. Let S={1,…,s} and S'={i∈ S : β_i≠ 0}. The cardinality of S' is denoted by s'. For two integers a and b, we write a|b (a∤ b) if a divides b (a does not divide b).First we have the following remarks.It is easy to see that I(ℤ_m)={dℤ_m : d divides m } and |I(ℤ_m)^*|=∏_i=1^s(α_i+1)-2. Let ℤ_n be a ℤ_m-module. If n|d, then dℤ_m is an isolated vertex of G_n(ℤ_m). Obviously, d_1ℤ_m and d_2ℤ_m are adjacent if and only if n∤ [d_1,d_2]. This implies that G_n(ℤ_m) is a subgraph of G(ℤ_m).Let ℤ_n be a ℤ_m-module and d=p_1^r_1⋯ p_s^r_s(≠ 1,m) be a divisor of m. We set D_d={i∈ S :r_i< β_i }. Clearly, D_d⊆ S'. Suppose that W is a clique of G_n(ℤ_m). Then Γ_W={D_d : dℤ_m∈ W } is an intersecting family of subsets of S'. (A family of sets is intersecting if any two of its sets have a non-empty intersection.) Also, if Γ is an intersecting family of subsets of S' and W_Γ={ dℤ_m : d≠ 1,m,d|m,D_d∈Γ} is non-empty, then W_Γ is a clique of G_n(ℤ_m). (If D is a non-empty subset of S' and Γ={D}, then we will denote W_Γ by W_D.) Thus we have ω(G_n(ℤ_m))=max { |W_Γ|:Γ is an intersecting family of subsets of S' }.Now, we provide a lower bound for the clique number of G_n(ℤ_m). Let ℤ_n be a ℤ_m-module. Then ω(G_n(ℤ_m))≥ max {β_j∏_i≠ j(α_i+1)-1: β_j≠ 0 }. Suppose that β_j≠ 0. With the notations of the previous remark, let Γ={ D⊆ S' : j∈ D }. Then Γ is an intersecting family of subsets of S' and so W_Γ is a clique of G_n(ℤ_m). Clearly, |W_Γ|=β_j∏_i≠ j(α_i+1)-1.
Therefore ω(G_n(ℤ_m))≥β_j∏_i≠ j(α_i+1)-1 and hence the result holds.Clearly, if n=p_1^β_1 (β_1>1), then equality holds in the previous theorem. Also, if n has only two distinct prime divisors, that is, s'=2, then again equality holds. So the lower bound is sharp. Let m=n=p_1^2p_2^2p_3^2, where p_1, p_2, p_3 are distinct primes. Thus S'=S={1,2,3} and G_n(ℤ_m)=G(ℤ_m). It is easy to see that |W_{1}|=|W_{2}|=|W_{3}|=2 and |W_{1,2}|=|W_{1,3}|=|W_{2,3}|=4. Also, |W_{1,2,3}|=7. Let Γ_j={ D⊆ S' : j∈ D }, for j=1,2,3. Hence |W_Γ_j|=17, for j=1,2,3. If Γ={{1,2}, {1,3}, {2,3}, {1,2,3}}, then |W_Γ|=19. Therefore ω(G(ℤ_m))=19.By the strong perfect graph theorem, we determine the values of n and m for which G_n(ℤ_m) is a perfect graph.(The Strong Perfect Graph Theorem <cit.>) A finite graph G is perfect if and only if neither G nor G̅ contains an induced odd cycle of length at least 5.Let ℤ_n be a ℤ_m-module. Then G_n(ℤ_m) is perfect if and only if n has at most four distinct prime divisors.First suppose that s'≥5 and n=p_1^β_1⋯ p_s'^β_s', where p_i's are distinct primes and β_i's are positive integers. Let D_1={p_1,p_5}, D_2={p_1,p_2}, D_3={p_2,p_3}, D_4={p_3,p_4}, and D_5={p_4,p_5}. Now, assume that d_iℤ_m∈ W_D_i, for i=1,…,5. Hence d_1ℤ_m — d_2ℤ_m — d_3ℤ_m — d_4ℤ_m — d_5ℤ_m — d_1ℤ_m is an induced cycle of length 5 in G_n(ℤ_m). So by Theorem <ref>, G_n(ℤ_m) is not a perfect graph.Conversely, suppose that G_n(ℤ_m) is not a perfect graph. Then by Theorem <ref>, we have the following cases:Case 1. d_1ℤ_m — d_2ℤ_m — d_3ℤ_m — d_4ℤ_m — d_5ℤ_m — d_1ℤ_m is an induced cycle of length 5 in G_n(ℤ_m). Let D_i=D_d_i, for i=1,…,5. So D_5∩ D_1≠∅ and D_i∩ D_i+1≠∅, for i=1,…,4. Let p_5∈ D_5∩ D_1 and p_i∈ D_i∩ D_i+1, for i=1,…,4. Clearly, p_1,…,p_5 are distinct and thus s'≥ 5.Case 2. d_1ℤ_m — d_2ℤ_m — d_3ℤ_m — d_4ℤ_m — d_5ℤ_m — d_6ℤ_m is an induced path of length 5 in G_n(ℤ_m) (such a path exists inside any induced odd cycle of length at least 7). Let D_i=D_d_i, for i=1,…,6. So D_i∩ D_i+1≠∅, for i=1,…,5. Let p_i∈ D_i∩ D_i+1, for i=1,…,5. Clearly, p_1,…,p_5 are distinct and hence s'≥ 5.Case 3. There is an induced cycle of length 5 in G̅_n(ℤ_m). So G_n(ℤ_m) contains an induced cycle of length 5 and by Case 1, we are done.Case 4. d_1ℤ_m — d_2ℤ_m — d_3ℤ_m — d_4ℤ_m — d_5ℤ_m — d_6ℤ_m is an induced path of length 5 in G̅_n(ℤ_m). Since D_d_1∩ D_d_3≠∅, D_d_1∩ D_d_4≠∅ and D_d_3∩ D_d_4=∅, we may assume that {p_1,p_2}⊆ D_d_1, where p_1∈ D_d_3 and p_2∈ D_d_4, for some distinct p_1,p_2∈ S'. Similarly, we find that {p_3,p_4}⊆ D_d_2, for some distinct p_3,p_4∈ S'∖{p_1,p_2} and also |D_d_3|≥ 2. Now, since D_d_3∩ D_d_2=∅ and p_2∉ D_d_3, we deduce that s'≥ 5. The graph G(ℤ_m) is perfect if and only if m has at most four distinct prime divisors.In the next theorem, we derive a sufficient condition for G_n(ℤ_m) to be weakly perfect.Let ℤ_n be a ℤ_m-module. If α_i≤ 2β_i-1 for each i∈ S', then G_n(ℤ_m) is weakly perfect.Let D be a non-empty subset of S' and D̅=S'∖ D. As we mentioned in Remark <ref>, if W_D is non-empty, then W_D is a clique of G_n(ℤ_m). Also, the vertices of W_S' (if W_S'≠∅) are adjacent to all non-isolated vertices. Suppose that D_1 and D_2 are two non-empty subsets of S' and D_1⊆ D_2. Since α_i≤ 2β_i-1 for each i∈ S', we have ∏_i∈ D_2∖ D_1(α_i-β_i+1)≤∏_i∈ D_2∖ D_1β_i. This implies that ∏_i∈ D_1β_i∏_i∉ D_1(α_i-β_i+1)≤∏_i∈ D_2β_i∏_i∉ D_2(α_i-β_i+1) and hence |W_D_1|≤ |W_D_2|. Let Γ be an intersecting family of subsets of S' with ω(G_n(ℤ_m))=|W_Γ|; enlarging Γ if necessary while keeping it intersecting (which cannot decrease |W_Γ|), we may assume that Γ is a maximal intersecting family. Let D⊆ S'. We show that D∈Γ or D̅∈Γ. Assume that D∉Γ. So there is D_1∈Γ such that D∩ D_1=∅. Thus D_1⊆D̅ and hence D̅∈Γ. We claim that |W_D̅|≤ |W_D|, for each D∈Γ.
Suppose to the contrary that D∈Γ and |W_D̅|>|W_D|. If A∈Γ and A⊆ D, then D̅⊆A̅. So we have |W_A|≤|W_D|<|W_D̅|≤|W_A̅|. Let Φ=Γ∪{A̅ : A∈Γ, A⊆ D}∖{A∈Γ : A⊆ D}. Then Φ is an intersecting family of subsets of S' and |W_Γ|<|W_Φ|, a contradiction. The claim is proved.Now, we show that G_n(ℤ_m) has a proper |W_Γ|-vertex coloring. First we color all vertices of W_Γ with different colors. Next we color each family W_D of vertices out of W_Γ with colors of vertices of W_D̅. Note that if D∉Γ, then D̅∈Γ and |W_D|≤ |W_D̅|. Suppose that d_1ℤ_m and d_2ℤ_m are two adjacent vertices of G_n(ℤ_m). Thus D_d_1∩ D_d_2≠∅. Without loss of generality, one can assume D_d_1≠ D_d_2. So we deduce that D̅_d_1≠ D_d_2 and D_d_1≠D̅_d_2. Therefore, d_1ℤ_m and d_2ℤ_m have different colors. Thus χ(G_n(ℤ_m))≤ |W_Γ| and hence ω(G_n(ℤ_m))=χ(G_n(ℤ_m))=|W_Γ|.As an immediate consequence of the previous theorem, we have the next result. The graph G(ℤ_m) is weakly perfect, for every integer m≥ 2.In the case that α_i=2β_i-1 for each i∈ S', we determine the exact value of χ(G_n(ℤ_m)). It is exactly the lower bound obtained in Theorem <ref>. Let ℤ_n be a ℤ_m-module. If α_i=2β_i-1 for each i∈ S', then ω(G_n(ℤ_m))=χ(G_n(ℤ_m))=2^{s'-1}∏_i∈ S'β_i∏_i∈ S∖ S'(α_i+1)-1. Let D≠∅ be a proper subset of S'. Then |W_D|=∏_i∈ Dβ_i∏_i∉ D(α_i-β_i+1)=∏_i∈ S'β_i∏_i∈ S∖ S'(α_i+1) and hence |W_D|=|W_D̅|. Also, the vertices of W_S' (if W_S'≠∅) are adjacent to all non-isolated vertices and |W_S'|=∏_i∈ S'β_i∏_i∈ S∖ S'(α_i+1)-1. Clearly if Γ is an intersecting family of subsets of S', then |Γ|≤ 2^{s'-1}. Moreover, if β_j≠ 0 and Γ_j={ D⊆ S' : j∈ D }, then |Γ_j|=2^{s'-1}. Thus by Theorem <ref>, ω(G_n(ℤ_m))=χ(G_n(ℤ_m))=|W_Γ_j|=2^{s'-1}∏_i∈ S'β_i∏_i∈ S∖ S'(α_i+1)-1. Let m=p_1⋯ p_s, where p_i's are distinct primes. Then ω(G(ℤ_m))=χ(G(ℤ_m))=2^{s-1}-1. We close this article with the following problem.Problem. Let ℤ_n be a ℤ_m-module. Is it true that G_n(ℤ_m) is always a weakly perfect graph?akbhey S. Akbari, F. Heydari, The regular graph of a noncommutative ring, Bull. Aust. Math. Soc., 89 (2014), 132–140.criteria S. Akbari, S. Khojasteh, Commutative rings whose cozero-divisor graphs are unicyclic or of bounded degree, Comm. Algebra, 42 (2014), 1594–1605. akbtaval S. Akbari, H. A. Tavallaee, S. Khalashi Ghezelahmad, Intersection graph of submodules of a module, J. Algebra Appl., 11 (2012), Article No. 1250019.and D. F. Anderson, A. Badawi, The total graph of a commutative ring, J. Algebra, 320 (2008), 2706–2719.att M. F. Atiyah, I. G. Macdonald, Introduction to Commutative Algebra, Addison-Wesley Publishing Company, 1969.india I. Chakrabarty, S. Ghosh, T. K. Mukherjee, M. K. Sen, Intersection graphs of ideals of rings, Discrete Math., 309 (2009), 5381–5392.strong M. Chudnovsky, N. Robertson, P. Seymour, R. Thomas, The strong perfect graph theorem, Ann. Math., 164 (2006), 51–229.Cs B. Csákány, G. Pollák, The graph of subgroups of a finite group, Czechoslovak Math. J., 19 (1969), 241–247.nik R. Nikandish, M. J. Nikmehr, The intersection graph of ideals of ℤ_n is weakly perfect, Utilitas Mathematica, to appear.maxring1 F. I. Perticani, Commutative rings in which every proper ideal is maximal, Fund. Math., 71 (1971), 193–198.maxring2 J. Reineke, Commutative rings in which every proper ideal is maximal, Fund. Math., 97 (1977), 229–231.
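The ℤ_12 example pictured earlier, and the adjacency criterion n∤[d_1,d_2] behind it, are easy to check computationally. The following short Python script (ours, for verification only; a brute-force clique search is included for tiny cases) reproduces the four graphs of the figure.

```python
from itertools import combinations
from math import gcd

def intersection_graph(m, n):
    """Vertices: non-trivial ideals d*Z_m (d | m, d != 1, m).
    Edge d1~d2 iff n does not divide lcm(d1, d2)."""
    verts = [d for d in range(2, m) if m % d == 0]
    lcm = lambda a, b: a * b // gcd(a, b)
    edges = {(d1, d2) for d1, d2 in combinations(verts, 2)
             if lcm(d1, d2) % n != 0}
    return verts, edges

def clique_number(verts, edges):
    """Brute force over vertex subsets; fine only for tiny examples."""
    best = 1 if verts else 0
    for r in range(2, len(verts) + 1):
        for sub in combinations(verts, r):
            if all(p in edges for p in combinations(sub, 2)):
                best = max(best, r)
    return best

for n in (12, 2, 3, 4):   # G(Z_12) and G_{Z_n}(Z_12) for n = 2, 3, 4
    v, e = intersection_graph(12, n)
    print(n, sorted(e), clique_number(v, e))
# n=12: edges {(2,3),(2,4),(2,6),(3,6)}; n=2: no edges;
# n=3: {(2,4)}; n=4: {(2,3),(2,6),(3,6)}, matching the figure.
```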
http://arxiv.org/abs/1702.08525v1
{ "authors": [ "F. Heydari" ], "categories": [ "math.AC" ], "primary_category": "math.AC", "published": "20170227205727", "title": "On the intersection graph of ideals of a commutative ring" }
Convolutional Gated Recurrent Neural Network Incorporating Spatial Features for Audio Tagging Yong Xu      Qiuqiang Kong      Qiang Huang      Wenwu Wang      Mark D. Plumbley Center for Vision, Speech and Signal Processing University of Surrey, Guildford, UK Email: {yong.xu, q.kong, q.huang, w.wang, m.plumbley}@surrey.ac.uk December 30, 2023 ================================================================================================================================================================================================================================================= Environmental audio tagging is a newly proposed task to predict the presence or absence of a specific audio event in a chunk. Deep neural network (DNN) based methods have been successfully adopted for predicting the audio tags in the domestic audio scene. In this paper, we propose to use a convolutional neural network (CNN) to extract robust features from mel-filter banks (MFBs), spectrograms or even raw waveforms for audio tagging. Gated recurrent unit (GRU) based recurrent neural networks (RNNs) are then cascaded to model the long-term temporal structure of the audio signal. To complement the input information, an auxiliary CNN is designed to learn on the spatial features of stereo recordings. We evaluate our proposed methods on Task 4 (audio tagging) of the Detection and Classification of Acoustic Scenes and Events 2016 (DCASE 2016) challenge. Compared with our recent DNN-based method, the proposed structure can reduce the equal error rate (EER) from 0.13 to 0.11 on the development set. The spatial features can further reduce the EER to 0.10. The performance of the end-to-end learning on raw waveforms is also comparable. Finally, on the evaluation set, we get the state-of-the-art performance with 0.12 EER while the performance of the best existing system is 0.15 EER. § INTRODUCTIONAudio tagging (AT) aims at putting one or several tags on a sound clip. The tags are the sound events that occur in the audio clip, for example, "speech", "television", "percussion", "bird singing", and so on. Audio tagging has many applications in areas such as information retrieval <cit.>, sound classification <cit.> and recommendation systems <cit.>. Many frequency domain audio features such as mel-frequency cepstrum coefficients (MFCCs) <cit.>, mel filter bank features (MFBs) <cit.> and spectrograms <cit.> have been used for speech recognition related tasks <cit.> for many years. However, it is unclear how these features perform on non-speech audio processing tasks. Recently MFCCs and MFBs were compared on the audio tagging task <cit.>, and the MFBs gave better performance in the framework of deep neural networks. The spectrogram has been suggested to be better than the MFBs in the sound event detection task <cit.>, but has not yet been investigated in the audio tagging task.Besides frequency domain audio features, processing sound from raw time domain waveforms has attracted a lot of attention recently <cit.>. However, most of these works are for speech recognition related tasks; there are few works investigating raw waveforms for environmental audio analysis. In common signal processing pipelines, the short time Fourier transform (STFT) is typically adopted to transform raw waveforms into frequency domain features using a set of Fourier bases. Recent research <cit.> suggests that the Fourier basis sets may not be optimal and better basis sets can be learned from raw waveforms directly using a large set of audio data.
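For reference, the fixed-basis features mentioned above can be computed in a few lines. The sketch below (Python/NumPy; a standard textbook mel filter bank construction, not necessarily the exact extraction settings of this paper) produces the magnitude spectrogram and 40-dimensional log-MFB features from a 16 kHz waveform.

```python
import numpy as np

def stft_mag(x, win=512, hop=256):
    """Magnitude spectrogram via a Hann-windowed short-time Fourier transform."""
    frames = [x[i:i + win] * np.hanning(win)
              for i in range(0, len(x) - win + 1, hop)]
    return np.abs(np.fft.rfft(np.stack(frames), axis=1))   # (T, win//2 + 1)

def mel_filterbank(n_mels=40, n_fft=512, sr=16000):
    """Triangular mel filters mapping an rfft magnitude spectrum to MFBs."""
    hz2mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    mel_pts = np.linspace(hz2mel(0), hz2mel(sr / 2), n_mels + 2)
    hz_pts = 700.0 * (10.0 ** (mel_pts / 2595.0) - 1.0)
    bins = np.floor((n_fft + 1) * hz_pts / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        for k in range(l, c):
            fb[m - 1, k] = (k - l) / max(c - l, 1)   # rising slope
        for k in range(c, r):
            fb[m - 1, k] = (r - k) / max(r - c, 1)   # falling slope
    return fb

# 40-dimensional log-MFB features from a 16 kHz waveform x:
# mfb = np.log(stft_mag(x) @ mel_filterbank().T + 1e-8)
```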
To learn the bases automatically, a convolutional neural network (CNN) is applied on the raw waveforms, which is similar to CNN processing on the pixels of an image <cit.>. Processing raw waveforms has seen many promising results on speech recognition <cit.> and on generating speech and music <cit.>, with less research in non-speech sound processing.Most audio tagging systems <cit.> use mono channel recordings, or simply average the multiple channels as the input signal. However, using this kind of merging strategy disregards the spatial information of the stereo audio. This is likely to decrease recognition accuracy because the intensity and phase of the sound received from different channels are different. For example, kitchen sound and television sound from different directions will have different intensities on different channels, depending on the direction of the sources. Multi-channel signals contain spatial information which could be used to help to distinguish different sound sources. Spatial features have been demonstrated to improve results in scene classification <cit.> and sound event detection <cit.>. However, there is little work using multi-channel information for the audio tagging task. Our main contributions in this paper include three parts. First, we show experimental results on different features, including MFBs and spectrograms as well as raw waveforms, on the audio tagging task of the DCASE 2016 challenge. Second, we propose a convolutional gated recurrent neural network (CGRNN), which is the combination of the CNN and the gated recurrent unit (GRU), to process non-speech sounds. Third, the spatial features are incorporated in the hidden layer to utilize the location information. The work is organized as follows: in Section <ref>, the proposed CGRNN is presented for audio tagging. In Section <ref>, the spatial features will be illustrated and incorporated into the proposed method. The experimental setup and results are shown in Section <ref> and Section <ref>. Section <ref> summarizes the work and discusses future work. § CONVOLUTIONAL GATED RECURRENT NETWORK FOR AUDIO TAGGING Neural networks have several types of structures: the most common one is the deep feed-forward neural network. Another popular structure is the convolutional neural network (CNN), which is widely used in image classification <cit.>. CNNs can extract robust features from pixel-level values for images <cit.> or raw waveforms for speech signals <cit.>. The recurrent neural network (RNN) is a third structure, often used for sequence modeling such as language modeling <cit.> and speech recognition <cit.>. In this section, we will introduce the convolutional neural network and the recurrent neural network with gated recurrent units. §.§ One-dimensional convolutional neural networkAudio or speech signals are one dimensional. Fig. <ref> shows the structure of a one-dimensional CNN which consists of one convolutional layer and one max-pooling layer. N filters with a fixed size F are convolved with the one dimensional signal to get outputs p_i^t, i=0,⋯,N-1. Given that the dimension of the input features is M, the activation h of the convolutional layer has (M-F+1) values.
The max-pooling size is also (M-F+1), which means each filter will give one output value. This is similar to speech recognition work <cit.> where a CNN has been used to extract features from the raw waveform signal. The fact that each filter produces a single value can also be interpreted as a global pooling layer, which is a structural regularizer that explicitly enforces feature maps to be confidence maps of meaningful feature channels <cit.>. So N activations are obtained as the robust features from the basic features. As for the input feature size M, a short time window, e.g., 32 ms, is fed into the CNN each time. The long-term pattern will be learned by the following recurrent neural network. As for the filter size or kernel size, a large receptive field is set considering that only one convolutional layer is designed in this work. For example, F=400 and M=512 are set in <cit.>. If the input features are raw waveforms, each filter of the CNN is in effect learned as a finite impulse response (FIR) filter <cit.>. If spectrograms or mel-filter banks are fed into the CNN, the filtering operates in the frequency domain <cit.> to reduce frequency variations, such as the same audio pattern occurring at different pitches. §.§ Gated recurrent unit based RNNRecurrent neural networks have recently shown promising results in speech recognition <cit.>. Fig. <ref> shows the basic idea of the RNN. The current activation h_t is determined by the current input x_t and the previous activation h_t-1. An RNN, with its capability to learn long-term patterns, is superior to a feed-forward DNN in this respect, because a feed-forward DNN treats the input contextual features at each time step as independent. The hidden activations of the RNN are formulated as:h_t=φ(W^hx_t+R^hh_t-1+b^h)However, a simple recurrent neural network with the recurrent connection only on the hidden layer is difficult to train due to the well-known vanishing gradient or exploding gradient problems <cit.>. The long short-term memory (LSTM) structure <cit.> was proposed to overcome this problem by introducing an input gate, forget gate, output gate and cell state to control the information stream through time. The fundamental idea of the LSTM is the memory cell, which maintains its state through time <cit.>. As an alternative structure to the LSTM, the gated recurrent unit (GRU) was proposed in <cit.>. The GRU was demonstrated to be better than the LSTM in some tasks <cit.>, and is formulated as follows <cit.>: r_t=δ(W^rx_t+R^rh_t-1+b^r)z_t=δ(W^zx_t+R^zh_t-1+b^z)h̃_t=φ(W^hx_t+r_t⊙(R^hh_t-1)+b^h) h_t=z_t⊙h_t-1+(1-z_t)⊙h̃_t where h_t, r_t and z_t are the hidden activations, reset gate values and update gate values at frame t, respectively. The weights applied to the input and recurrent hidden units are denoted as W^* and R^*, respectively. The biases are represented by b^*. The functions δ(·) and φ(·) are the sigmoid and hyperbolic tangent activation functions. Compared to the LSTM, there is no separate memory cell in the GRU. The GRU also does not have an output gate, and combines the input and forget gates into an update gate z_t to balance between the previous activation h_t-1 and the candidate activation h̃_t, as shown in Eq. (<ref>). The reset gate r_t can decide whether or not to forget the previous activation (shown in Eq. (<ref>)). ⊙ in Eqs. (<ref>) and (<ref>) represents element-wise multiplication. §.§ Convolutional Gated Recurrent Network for audio taggingFig. <ref> shows the framework of a convolutional gated recurrent neural network for audio tagging. The CNN is regarded as the feature extractor along the short window (e.g., 32 ms) from the basic features, as sketched below.
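A minimal PyTorch sketch of this CNN-feature-extractor-plus-bidirectional-GRU pipeline follows. It matches the hyperparameters stated later in the paper (128 filters, three GRU layers, a 500-unit ReLU layer, 7 sigmoid outputs); pooling the GRU output by taking its final frame is our simplification, as the paper does not pin down that detail here.

```python
import torch
import torch.nn as nn

class CGRNN(nn.Module):
    """CNN per frame (global max pool) + bidirectional GRU + tag posteriors.
    Input x: (batch, frames, feat_dim) windows of MFB/spectrogram/raw samples."""
    def __init__(self, feat_dim, n_filters=128, kernel=30, n_tags=7):
        super().__init__()
        # One conv layer; global max-pooling gives one value per filter.
        self.conv = nn.Conv1d(1, n_filters, kernel_size=kernel)
        self.gru = nn.GRU(n_filters, 128, num_layers=3,
                          batch_first=True, bidirectional=True)
        self.fc = nn.Sequential(nn.Linear(2 * 128, 500), nn.ReLU(),
                                nn.Linear(500, n_tags), nn.Sigmoid())

    def forward(self, x):
        b, t, d = x.shape
        h = self.conv(x.reshape(b * t, 1, d))       # (b*t, filters, d-kernel+1)
        h = h.max(dim=2).values.reshape(b, t, -1)   # global max pool per frame
        h, _ = self.gru(h)                          # long-term temporal modelling
        return self.fc(h[:, -1, :])                 # chunk-level tag posteriors

# Training objective, as in Eq. below, with Adam:
# loss = nn.functional.binary_cross_entropy(model(x), targets)
```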
Then the robust features extracted are fed into the GRU-RNN to learn the long-term audio patterns. For the audio tagging task, there is a lot of background noise, and acoustic events may occur repeatedly and randomly along the whole chunk (without knowing the specific frame locations). The CNN can help to extract robust features against the background noise through the max-pooling operation, especially for raw waveforms. Since the label of the audio tagging task is at the chunk level rather than the frame level, a large number of context frames are fed into the whole framework. The GRU-RNN can select related information from the long-term context for each audio event. To also utilize future information, a bi-directional GRU-RNN is designed in this work. Finally, the output of the GRU-RNN is mapped to the posteriors of the target audio events through one feed-forward layer with a sigmoid output activation function. This framework is flexible enough to be applied to any kind of features, especially raw waveforms. Raw waveforms have lots of values, which leads to a high-dimensionality problem. However, the proposed CNN can learn on short windows, like the short-time Fourier transform (STFT) process, so FFT-like bases or even mel-like filters can be learned from raw waveforms. Finally, a one-layer feed-forward DNN takes the final sequence of GRU activations to predict the posteriors of the tags. Binary cross-entropy is used as the loss function in our work, since it was demonstrated to be better than the mean squared error in <cit.> for labels with zero or one values. The loss can be defined as:E=-∑_n=1^N[T_nlogT̂_n+(1-T_n)log(1-T̂_n)] T̂_n=(1+exp(-O))^-1where E is the binary cross-entropy, and T̂_n and T_n denote the estimated and reference tag vectors at sample index n, respectively. The DNN linear output is defined as O before the sigmoid activation function is applied. Adam <cit.> is used as the stochastic optimization method. § SPATIAL FEATURES INCORPORATED FOR AUDIO TAGGINGSpatial features can often offer additional cues to help to solve signal processing problems. Many spatial features can be used for audio tagging, such as interaural phase differences or interaural time differences (IPD or ITD) <cit.> and interaural level differences (ILD) <cit.>. The recordings of the audio tagging task of the DCASE 2016 challenge are made in home scenes. They contain audio events such as TV, child speech and adult speech. The spatial features potentially give additional information to analyze the content of the audio, e.g., recognizing the TV audio event by knowing the specific direction of the TV sound. The IPD and ILD are defined as:ILD(t,k)=20log_10|X_left(t,k)/X_right(t,k)| IPD(t,k)=∠(X_left(t,k)/X_right(t,k))where X_left(t,k) and X_right(t,k) denote the left channel and right channel complex spectra of the stereo audio. The operator |·| takes the absolute value (magnitude) of a complex number, and ∠(·) finds the phase angle. In this work, we also define interaural magnitude differences (IMD), which are similar to the ILD. The IMD is defined in the linear domain while the ILD is defined in the logarithmic domain.IMD(t,k)=|X_left(t,k)|-|X_right(t,k)|Fig. <ref> shows the structure of incorporating the spatial features (IMD/ILD/IPD, etc.) using an additional CNN. Then the activations learned from the basic features and the activations learned from the spatial features are concatenated to be fed into the GRU-RNN plotted in Fig. <ref>.The audio files of the audio tagging task of the DCASE 2016 challenge are recorded in a domestic home environment.
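For concreteness, the three interaural features defined above can be computed from the two channels' complex STFTs in a few lines (Python/NumPy sketch; the small eps safeguard against division by zero is our addition):

```python
import numpy as np

def spatial_features(X_left, X_right, eps=1e-8):
    """ILD, IPD and IMD from the complex STFTs of the two stereo channels."""
    ratio = X_left / (X_right + eps)
    ild = 20.0 * np.log10(np.abs(ratio) + eps)   # interaural level difference
    ipd = np.angle(ratio)                        # interaural phase difference
    imd = np.abs(X_left) - np.abs(X_right)       # interaural magnitude difference
    return ild, ipd, imd
```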
Such recordings exhibit severe reverberation, high-level background noise and multiple acoustic sources. These factors might influence the effectiveness of the IPD and ILD. Fig. <ref> shows the spectrogram, IMD, ILD and IPD of one recording from the audio tagging task of the DCASE 2016 challenge. The IMD appears to have some meaningful patterns, while the ILD and the IPD seem to be random, which would make the training of the classifier difficult. From our empirical experiments, IPD and ILD do not appear to help to improve the classifier performance while IMD is beneficial. Similar results were reported in <cit.>, where IPD was found not to be helpful for sound event detection in home scenes but helpful for event detection in residential areas. This may be because residential areas are open areas with less reverberation than indoor home environments. Hence we will use the IMD as our spatial feature in this work. The filter size of the CNNs learned on the IMD is set to be the same as the corresponding configuration for the spectrogram.§ DATA SET AND EXPERIMENTAL SETUP§.§ DCASE 2016 audio tagging challenge We conducted our experiments based on the DCASE 2016 audio tagging challenge <cit.>. This audio tagging task consists of the five-fold development set and the evaluation set, which are built based on the CHiME-home dataset <cit.>. The audio recordings were made in a domestic environment <cit.>. The audio data are provided as 4-second chunks at a 48 kHz sampling rate in stereo mode. We downsampled them to a 16 kHz sampling rate.For each chunk, three annotators gave three labels, namely multi-label annotations. Discrepancies among annotators are then reduced by conducting a majority vote. The annotations are based on a set of seven audio events as presented in Table <ref>. A detailed description of the annotation procedure is provided in <cit.>. §.§ Experimental setup In the experiments below, we follow the standard specification of the DCASE 2016 audio tagging task <cit.>. On the development set, we use the official five folds for cross-validation. Table <ref> shows the number of chunks in the training and test sets used for each fold. The number of chunks in the final evaluation configuration is also listed.The parameters of the networks are tuned heuristically. All of the CNNs have 128 filters or feature maps. Following <cit.>, the filter sizes for MFBs, spectrograms and raw waveforms are 30, 200, and 400, respectively. These parameters can form a large receptive field for each type of basic feature, considering that only one convolutional layer is designed in this work. The CNN layer is followed by three RNN layers with 128 GRU blocks. One feed-forward layer with 500 ReLU units is finally connected to the 7 sigmoid output units. We pre-process each audio chunk by segmenting it using a 32 ms sliding window with a 16 ms hop size, and converting each segment into 40-dimensional MFBs, 257-dimensional spectrograms or 512-dimensional raw waveforms. For performance evaluation, we use the equal error rate (EER) as the main metric, which is also suggested by the DCASE 2016 audio tagging challenge. The EER is defined as the point of equal false negative (FN) rate and false positive (FP) rate <cit.>. The source code for this paper can be downloaded from Github[<https://github.com/yongxuUSTC/cnn_rnn_spatial_audio_tagging>]. §.§ Compared methodsWe compared our methods with the state-of-the-art systems. Lidy-CQT-CNN <cit.> and Cakir-MFCC-CNN <cit.> won the first and the second prize of the DCASE 2016 audio tagging challenge <cit.>.
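As a reference for the EER metric used throughout the evaluation, here is a short Python sketch (our own implementation; the crossing point of the two rates is approximated by averaging them at the closest threshold):

```python
import numpy as np

def equal_error_rate(scores, labels):
    """EER: sweep the decision threshold until FN rate equals FP rate."""
    order = np.argsort(scores)[::-1]           # descending score
    labels = np.asarray(labels, dtype=float)[order]
    tp = np.cumsum(labels)                     # true positives above threshold
    fp = np.cumsum(1.0 - labels)
    fnr = 1.0 - tp / max(labels.sum(), 1.0)    # false negative rate
    fpr = fp / max((1.0 - labels).sum(), 1.0)  # false positive rate
    i = np.argmin(np.abs(fnr - fpr))
    return (fnr[i] + fpr[i]) / 2.0
```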
Both systems used convolutional neural networks (CNNs) as the classifier. We also compare to our previous method <cit.>, which demonstrated state-of-the-art performance using a de-noising auto-encoder (DAE) to learn robust features.§ EXPERIMENTAL RESULTS AND ANALYSIS In this section, the effectiveness of the IMD will first be evaluated on the development set of Task 4 of the DCASE 2016 challenge among the different features, i.e., spectrograms, MFBs and raw waveforms. Then the final evaluation will be presented by comparing with the state-of-the-art methods on the evaluation set of Task 4 of the DCASE 2016 challenge.§.§ The effectiveness of the IMDTable <ref> shows the EER comparisons on seven labels among the spectrogram, raw waveform and MFB systems with or without the IMD information, evaluated on the development set of the DCASE 2016 audio tagging challenge. First, we can compare the proposed convolutional gated recurrent neural networks on spectrograms, raw waveforms and MFBs. Spectrograms are better than the MFBs, perhaps because the spectrogram retains more detailed frequency information than the MFB. For example, spectrograms are much better than MFBs on child speech (denoted as `c') and female speech (denoted as `f'), where a lot of high frequency information exists. The raw waveforms are worse than the spectrograms and the MFBs. One possible reason is that the learned FIR filters are not stable when the whole training set is small (about 3.5 hours of audio in this work). The same explanation was given in <cit.> for the speech recognition task. Ref. <cit.> shows that raw waveforms can achieve recognition accuracy comparable with mel-spectra on 2000 hours of Google voice search data.With the help of the IMD spatial features, the EERs are improved compared to all of the corresponding basic features alone. The raw waveforms with IMD can even obtain results comparable with the spectrograms and the MFBs. The MFB-IMD combination is slightly better than Spec-IMD, which may be because the IMD is calculated from the left and right spectrograms. The IMD shares some common information with the spectrograms, as can be seen from Fig. <ref>. However, the IMD is more complementary to the MFBs and the raw waveforms. The previous best performance on the development set of the DCASE 2016 audio tagging challenge was obtained in our recent work using a denoising auto-encoder <cit.> with 0.126 EER, but here we get better performance with 0.10 EER. §.§ Overall evaluationsTable <ref> presents the EER comparisons on seven labels among Lidy-CQT-CNN <cit.>, Cakir-MFCC-CNN <cit.>, our previous DAE-DNN <cit.>, and the proposed spectrogram, raw waveform and MFB systems with the IMD information, evaluated on the final evaluation set of the DCASE 2016 audio tagging challenge. The de-noising auto-encoder <cit.> was our recent work, which outperforms the winning system of the DCASE 2016 audio tagging challenge, namely Lidy-CQT-CNN <cit.>. Our proposed convolutional gated recurrent neural network incorporating the IMD features gives further improved performance. The MFB-IMD system obtains the best performance with 0.123 EER, which is the state-of-the-art performance on the evaluation set of the DCASE 2016 audio tagging challenge.§ CONCLUSION In this paper, we propose a convolutional gated recurrent neural network (CGRNN) to learn on mel-filter banks (MFBs), spectrograms and even raw waveforms.
The spatial features, namely the interaural magnitude differences (IMDs), are incorporated into the framework and are demonstrated to be effective in further improving the performance. The spectrogram gives better performance than the MFBs without the spatial features. However, the MFBs with the IMDs obtain the minimal EER, namely 0.102, on the development set of the DCASE 2016 audio tagging challenge. Raw waveforms give comparable performance on the development set. Finally, on the evaluation set of the DCASE 2016 audio tagging challenge, our proposed MFB-IMD system obtains the state-of-the-art performance with 0.123 EER. It remains interesting to explore further in future work why the MFB-IMD system is better than the Spec-IMD system. In addition, we will also investigate the proposed framework for modeling raw waveforms on larger training datasets to learn more robust filters. § ACKNOWLEDGMENTThis work was supported by the Engineering and Physical Sciences Research Council (EPSRC) of the UK under the grant EP/N014111/1. Qiuqiang Kong is partially supported by the China Scholarship Council (CSC).
http://arxiv.org/abs/1702.07787v1
{ "authors": [ "Yong Xu", "Qiuqiang Kong", "Qiang Huang", "Wenwu Wang", "Mark D. Plumbley" ], "categories": [ "cs.SD", "cs.LG", "cs.NE" ], "primary_category": "cs.SD", "published": "20170224222729", "title": "Convolutional Gated Recurrent Neural Network Incorporating Spatial Features for Audio Tagging" }
^1 IRIF, CNRS, Université Paris Diderot, Paris, France ^2 Lab-STICC, Université Bretagne Sud Learning with Errors is one of the fundamental problems in computational learning theory and has in recent years become the cornerstone of post-quantum cryptography. In this work, we study the quantum sample complexity of Learning with Errors and show that there exists an efficient quantum learning algorithm (with polynomial sample and time complexity) for the Learning with Errors problem where the error distribution is the one used in cryptography. While our quantum learning algorithm does not break the LWE-based encryption schemes proposed in the cryptography literature, it does have some interesting implications for cryptography: first, when building an LWE-based scheme, one needs to be careful about the access to the public-key generation algorithm that is given to the adversary; second, our algorithm shows a possible way for attacking LWE-based encryption by using classical samples to approximate the quantum sample state, since then using our quantum learning algorithm would solve LWE. Finally, we extend our results and show quantum learning algorithms for three related problems: Learning Parity with Noise, Learning with Rounding and Short Integer Solution.§ INTRODUCTION The large amount of data arising in the real world, for example through scientific observations, large-scale experiments, internet traffic, social media, etc., makes it necessary to be able to predict some general properties or behaviors of the data from a limited number of samples of the data. In this context, Computational Learning Theory provides rigorous models for learning and studies the necessary and sufficient resources, for example, the number of samples or the running time of the learning algorithm. In his seminal work, Valiant <cit.> introduced the model of PAC learning, and since then this model has been extensively studied and has given rise to numerous extensions.In another revolutionary direction, Quantum Computing takes advantage of the quantum nature of small-scale systems as a computational resource. In this field, the main question is to understand what problems can be solved more efficiently on a quantum computer than on classical computers. In the intersection of the two fields, we have Quantum Learning Theory, where we ask if quantum learning algorithms can be more efficient than classical ones. One of course needs to be careful about defining quantum learning and, more precisely, what kind of access to the data a quantum learning algorithm has. On one hand, we can just provide classical samples to the quantum learning algorithm, which can then use its quantum power in processing these classical data. In the more general scenario, we allow the quantum learning algorithm to receive quantum samples of the data, for a natural notion of a quantum sample as a superposition that corresponds to the classical sample distribution. More precisely, in classical learning, the learning algorithm is provided with samples of (x, f(x)), where x is drawn from some unknown distribution D and f is the function we wish to learn. The goal of the learner in this case is to output a function g such that with high probability (with respect to the samples received), f and g are close, i.e., the probability that f(x) ≠ g(x) is small when x is drawn from the same distribution D. The extension of this model to the quantum setting is that the samples are now given in the form of a quantum state ∑_x√(D(x))|x⟩|f(x)⟩.In this work we focus on one of the fundamental problems in learning theory, Learning with Errors (LWE). In LWE, one is given samples of the form (a, a· s + e mod q), where s ∈ℤ_q^n is fixed, a ∈ℤ_q^n is drawn uniformly at random and e ∈ℤ_q is an 'error' term drawn from some distribution χ. The goal is to output s, while minimizing the number of samples used and the computation time.First, LWE is the natural generalisation of the well-studied Learning Parity with Noise problem (LPN), which is the case q=2. Moreover, a lot of attention was drawn to this problem when Regev <cit.> reduced some (expected to be) hard problems involving lattices to LWE. With this reduction, LWE has become the cornerstone of current post-quantum cryptographic schemes. Several cryptographic primitive proposals such as Fully Homomorphic Encryption <cit.>, Oblivious Transfer <cit.>, Identity based encryption <cit.>, and other schemes are based on the hardness of LWE (for a more complete list see Ref. <cit.> and Ref. <cit.>).Classically, Blum et al. <cit.> proposed the first sub-exponential algorithm for this problem, where both sample and time complexities are 2^O(n/logn). Then, Arora and Ge <cit.> improved the time complexity for LWE with a learning algorithm that runs in 2^Õ(n^2ε) time, for some ε < 1/2, and it uses at least Ω(q^2logq) samples. For LPN, Lyubashevsky <cit.> has proposed an algorithm with sample complexity n^1+ε, at the cost of increasing the computation time to O(2^n/loglogn).§ THE QUANTUM LEARNING MODEL In this work, we use the model of learning under the uniform distribution, where the learner receives samples according to the uniform distribution and outputs the exact function with high probability. In the quantum setting, the learning algorithm is given quantum samples, namely a uniform superposition of the inputs and function values, ∑_x ∈ X1/√(|X|)|x⟩|f(x)⟩. In this work, we are interested in noisy samples, which can be modeled by setting f(x) = g(x) + e(x,r), where g and e are deterministic functions, x ∈ X and r ∈ R is the randomness necessary to generate the noise. For defining the quantum sample, we start with the superposition1/√(|R|)∑_r ∈ R|r⟩(1/√(|X|)∑_x ∈ X|x⟩|g(x) + e(x,r)⟩),and then the register corresponding to the randomness is traced out. It means that with probability 1/|R| the quantum sample is1/√(|X|)∑_x ∈ X|x⟩|g(x) + e(x,r)⟩,for each possible value r ∈ R.We consider the noise model defined in Bshouty and Jackson <cit.>, where independent noise is added for each element in the superposition, in other words, r = (r_1,...,r_|X|) and e(x,r) = e'(r_x). This model is a natural generalisation for quantum samples with noise since it can be seen as a superposition of the classical samples. In contrast, Cross et al. <cit.> proposed a noise function that is independent of x.Although our noise model might require exponentially more resources to implement quantum samples, we show that this does not make the problem intractable. Also, this is the kind of state we would get after solving the index erasure problem.
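Both the sample state just defined and the Fourier-sampling learner described in the next section can be simulated classically for toy sizes. The sketch below (Python/NumPy; all function names are ours; exponential in n, so illustration only, and the recovery step assumes q prime and, for readability, is stated for noiseless samples) makes these objects concrete.

```python
import numpy as np

def lwe_sample_state(s, q, chi, rng):
    """Amplitude vector of one quantum LWE sample,
    (1/sqrt(q^n)) * sum_a |a>|a.s + e_a mod q>, for one draw of the iid errors."""
    n, dim = len(s), q ** len(s)
    amp = np.zeros((dim, q), dtype=complex)
    for idx in range(dim):
        a = np.unravel_index(idx, (q,) * n)          # a in Z_q^n
        amp[idx, (int(np.dot(a, s)) + chi(rng)) % q] = dim ** -0.5
    return amp.reshape(-1)

def qft_learn(state, n, q, rng):
    """Apply the QFT over Z_q to each of the n+1 qudits and measure.
    For noiseless samples and prime q, a nonzero last qudit b comes with
    y = -b*s mod q in the first n registers, so s = -y * b^{-1} mod q."""
    F = np.exp(2j * np.pi * np.outer(np.arange(q), np.arange(q)) / q) / np.sqrt(q)
    psi = state.reshape((q,) * (n + 1))
    for ax in range(n + 1):
        psi = np.moveaxis(np.tensordot(F, psi, axes=([1], [ax])), 0, ax)
    p = np.abs(psi.reshape(-1)) ** 2
    z = np.unravel_index(rng.choice(p.size, p=p / p.sum()), (q,) * (n + 1))
    y, b = np.array(z[:n]), int(z[n])
    return None if b == 0 else (-y * pow(b, -1, q)) % q  # None: ask for a new sample

# rng = np.random.default_rng(0); q, s = 5, [2, 3]
# state = lwe_sample_state(s, q, lambda r: 0, rng)   # noiseless toy errors
# print(qft_learn(state, len(s), q, rng))            # -> [2 3] (or None; retry)
```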
Note that one thing the quantum learner can do with this state is simply measure it in the computational basis and get a classical sample from the distribution D. Hence, a quantum sample is at least as powerful as a classical sample. The main question is whether the quantum learner can make better use of these quantum samples and provide an advantage in the number of samples and/or running time compared to a classical learner.

In this work we focus on one of the fundamental problems in learning theory, Learning with Errors (LWE). In LWE, one is given samples of the form (a, a · s + e mod q), where s ∈ ℤ_q^n is fixed, a ∈ ℤ_q^n is drawn uniformly at random and e ∈ ℤ_q is an 'error' term drawn from some distribution χ. The goal is to output s, while minimizing the number of samples used and the computation time.

First, LWE is the natural generalisation of the well-studied Learning Parity with Noise problem (LPN), which corresponds to the case q = 2. Moreover, a lot of attention was drawn to this problem when Regev <cit.> reduced some (expected to be) hard problems involving lattices to LWE. With this reduction, LWE has become the cornerstone of current post-quantum cryptographic schemes. Several cryptographic primitive proposals, such as Fully Homomorphic Encryption <cit.>, Oblivious Transfer <cit.>, Identity based encryption <cit.>, and other schemes, are based on the hardness of LWE (for a more complete list see Ref. <cit.> and Ref. <cit.>).

Classically, Blum et al. <cit.> proposed the first sub-exponential algorithm for this problem, where both sample and time complexities are 2^O(n/log n). Then, Arora and Ge <cit.> improved the time complexity for LWE with a learning algorithm that runs in 2^Õ(n^2ε) time, for some ε < 1/2, and it uses at least Ω(q^2 log q) samples. For LPN, Lyubashevsky <cit.> has proposed an algorithm with sample complexity n^1+ε at the cost of increasing the computation time to O(2^n/log log n).

§ THE QUANTUM LEARNING MODEL

In this work, we use the model of learning under the uniform distribution, where the learner receives samples according to the uniform distribution and outputs the exact function with high probability. In the quantum setting, the learning algorithm is given quantum samples, namely a uniform superposition of the inputs and function values, ∑_x ∈ X 1/√(|X|) |x⟩|f(x)⟩. In this work, we are interested in noisy samples, which can be modeled by setting f(x) = g(x) + e(x,r), where g and e are deterministic functions, x ∈ X and r ∈ R is the randomness necessary to generate the noise. For defining the quantum sample, we start with the superposition

1/√(|R|) ∑_r ∈ R |r⟩ ( 1/√(|X|) ∑_x ∈ X |x⟩|g(x) + e(x,r)⟩ ),

and then the register corresponding to the randomness is traced out. This means that with probability 1/|R| the quantum sample is

1/√(|X|) ∑_x ∈ X |x⟩|g(x) + e(x,r)⟩,

for each possible value r ∈ R.

We consider the noise model defined by Bshouty and Jackson <cit.>, where independent noise is added for each element in the superposition; in other words, r = (r_1, ..., r_|X|) and e(x,r) = e'(r_x). This model is a natural generalisation for quantum samples with noise since it can be seen as a superposition of the classical samples. In contrast, Cross et al. <cit.> proposed a noise function that is independent of x. Although our noise model might require exponentially more resources to implement quantum samples, we show that this does not make the problem intractable. Also, this is the kind of state we would get after solving the index erasure problem.
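To make this noise model concrete, the following is a small numerical sketch (our illustration, not code from the paper) that builds the post-trace-out state for one fixed draw of the randomness, instantiated for LWE: every basis element |a⟩ carries its own independently drawn error e_a. The dimensions n = 2, q = 7 and the bounded toy distribution chi below are assumptions chosen only for illustration.

```python
import numpy as np

def lwe_quantum_sample(s, q, chi, rng):
    """State vector of (1/sqrt(q^n)) sum_a |a>|a.s + e_a mod q>, with an
    independent error e_a ~ chi drawn for every a in Z_q^n (Bshouty-Jackson
    style noise: one fresh error per element of the superposition)."""
    n = len(s)
    dim_a = q ** n
    state = np.zeros((dim_a, q), dtype=complex)
    for idx in range(dim_a):
        a = np.array(np.unravel_index(idx, (q,) * n))   # enumerate a in Z_q^n
        state[idx, int((a @ s + chi(rng)) % q)] = 1.0
    return (state / np.sqrt(dim_a)).reshape(-1)

rng = np.random.default_rng(0)
chi = lambda rng: int(rng.integers(-1, 2))   # toy bounded noise on {-1, 0, 1}
psi = lwe_quantum_sample(s=np.array([2, 3]), q=7, chi=chi, rng=rng)
print(np.isclose(np.linalg.norm(psi), 1.0))  # True: a valid quantum state

# measuring in the computational basis recovers one classical LWE sample:
outcome = rng.choice(len(psi), p=np.abs(psi) ** 2)
idx3 = np.unravel_index(outcome, (7, 7, 7))
print(idx3[:2], idx3[2])                     # (a_1, a_2) and a.s + e_a mod 7
```

Measuring this state in the computational basis returns a uniformly random a together with a noisy inner product, which is exactly the classical-sample guarantee mentioned at the start of this section.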
§ OUR CONTRIBUTIONS

In this work we study quantum algorithms for solving LWE with quantum samples. Let us be more explicit about the definition of a quantum sample for the LWE problem. We assume that the quantum learning algorithm receives samples of the form

1/√(q^n) ∑_a ∈ ℤ_q^n |a⟩|a · s + e_a mod q⟩,

where the e_a are iid random variables from some distribution χ over ℤ_q. As expected, the performance of the learning algorithm, both in the classical and the quantum case, is sensitive to the noise model adopted, i.e., to the distribution χ. When LWE is used in cryptographic schemes, the distribution χ has support on a small interval around 0, either uniform or a discrete Gaussian. We prove that for such distributions, there exists an efficient quantum learner for LWE.

Main Result [informal]. For error distributions χ used in cryptographic schemes, and for any η > 0, there exists a quantum learning algorithm that solves LWE with probability 1 - η using O(n log(1/η)) samples and running time poly(n, log(1/η)).

Another interesting feature of our quantum learner is that it is conceptually a very simple algorithm based on one of the basic quantum operations, the Quantum Fourier Transform. Such algorithms have even started to be implemented, of course for very small input sizes and for the binary case <cit.>. Nevertheless, as far as quantum algorithms are concerned, our learner is quite feasible from an implementation point of view.

The approach to solve the problem is a generalisation of the Bernstein-Vazirani algorithm <cit.>: we start with a quantum sample, apply a Quantum Fourier Transform over ℤ_q on each qudit, and then measure in the computational basis. Our analysis shows that, when the last qudit is not 0, which happens with high probability, the value of the remaining registers gives s with constant probability. We can then repeat this process so that our algorithm outputs s with high probability.

We also use the same technique in quantum learning algorithms for three related problems. First, we generalise the result proposed by Cross et al. <cit.> and Ristè et al. <cit.> for the LPN problem. The main difference with their work is that we start with a quantum sample, i.e. a state where the noise is independent for each element in the superposition. Second, we show how to solve the Learning with Rounding problem, which can be seen as a derandomized version of LWE. Finally, we also propose a quantum learning algorithm for another relevant problem in cryptography, the Short Integer Solution problem.

§.§ Related work

We now review some results on quantum algorithms for learning problems. For a more extended introduction, see the survey by Arunachalam and de Wolf <cit.>. The first approach to solving learning problems with quantum samples was proposed by Bshouty and Jackson <cit.>, where they prove that DNFs can be learned efficiently, even when the samples are noisy. No such efficient learners are known classically. Despite not presenting it as a learning problem, Bernstein and Vazirani <cit.> show how to learn parity using a single quantum sample, while classically we need a linear number of samples. Some years later, Servedio and Gortler <cit.> showed that the classical and quantum sample/query complexities of learning problems are polynomially related, but they showed that for time complexity there exist exponential separations between classical and quantum learning (assuming standard computational hardness assumptions). Then, Ambainis et al. <cit.>, Atici and Servedio <cit.>, and Hunziker et al.
<cit.> provided general upper bounds on the query complexity of learning problems that depend on the size of the concept class being learned. On specific problems, Atici and Servedio <cit.> and Belovs <cit.> provided quantum algorithms for learning juntas, and Cross et al. <cit.> proposed and implemented quantum algorithms for LPN in a different noise model. Recently, Arunachalam and de Wolf <cit.> proved optimal bounds for the quantum sample complexity of the Quantum PAC model.

§.§ Relation to LWE-based cryptography

As we have mentioned, LWE is used in cryptography for many different tasks. Let us briefly describe how one can build an encryption scheme based on LWE <cit.>. The key generation algorithm produces a secret key s ∈ ℤ_q^n, while the public key consists of a sequence of classical LWE samples (a_1, a_1 · s + e_1 mod q), ..., (a_m, a_m · s + e_m mod q), where the error comes from a distribution with support in a small interval around 0. For the encryption of a bit b, the party picks a subset S of [m] uniformly at random and outputs

(∑_i ∈ S a_i mod q, b⌈q/2⌉ + ∑_i ∈ S (a_i · s + e_i) mod q).

For the decryption, knowing s allows one to find b. The security analysis of the encryption scheme postulates that if an adversary can break the encryption efficiently, then he is also able to solve the LWE problem efficiently.

The algorithm we present here does not break the above LWE-based encryption scheme. Nevertheless, it has interesting implications for cryptography. First, our algorithm shows a possible way for attacking LWE-based encryption: use classical samples to approximate the quantum sample state, and then use our algorithm to solve LWE. One potential way for this would be to start with m classical samples and create the following superposition

1/√(2^m) ∑_S ⊆ [m] |S⟩|∑_i ∈ S a_i mod q⟩|∑_i ∈ S (a_i · s + e_i) mod q⟩.

This operation is in fact efficient. Then, in order to approximate the quantum sample state, one would need to 'forget' the first register that contains the index information about which subset of the m classical samples we took. In the most general case, such an operation of forgetting the index of the states in a quantum superposition, known as index erasure (see Aharonov and Ta-Shma <cit.> and Ambainis et al. <cit.>), is exponentially hard, and a number of problems, such as Graph Non-isomorphism, would have an efficient quantum algorithm if we could do it efficiently. Nevertheless, one may try to use the extra structure of the LWE problem to find sub-exponential algorithms for this case.

A second concern that our algorithm raises is that when building an LWE-based scheme, one needs to be careful about the access to the public-key generation algorithm that is given to the adversary. It is well known that, for example, even in the classical case, if the adversary can ask classical queries to the LWE oracle, then he can easily break the scheme: by asking the same query many times one can basically average out the noise and find the secret s. However, if we just assume that the public key is given as a box to which an agent has passive access, in the sense that he can request a random sample and receive one, then the encryption scheme is secure classically as long as LWE is difficult. However, imagine that the random sample from LWE is provided by a device that creates a superposition

1/√(q^n) ∑_a ∈ ℤ_q^n |a⟩|a · s + e_a mod q⟩

and then measures it. Then a quantum adversary that has access to this quantum state can break the scheme.
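For concreteness, here is a toy classical sketch (ours, not the authors' code) of the Regev-style scheme described above. The parameter values and the error range are assumptions chosen so that the example always decrypts correctly; they are far from cryptographically meaningful sizes.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, q = 8, 40, 257                      # toy parameters, not secure choices
s = rng.integers(0, q, n)                 # secret key
A = rng.integers(0, q, (m, n))
e = rng.integers(-1, 2, m)                # errors in {-1,0,1}: |sum| <= m < q/4
pk = (A, (A @ s + e) % q)                 # public key: m classical LWE samples

def encrypt(pk, bit, rng):
    A, b_vec = pk
    S = rng.random(len(b_vec)) < 0.5      # uniformly random subset of [m]
    return A[S].sum(axis=0) % q, (bit * (q // 2) + b_vec[S].sum()) % q

def decrypt(sk, ct):
    c1, c2 = ct
    v = (c2 - c1 @ sk) % q                # = bit*floor(q/2) + small noise mod q
    return int(min(v, q - v) > q // 4)    # closer to q/2 than to 0 -> bit 1

print([decrypt(s, encrypt(pk, b, rng)) for b in (0, 1, 1, 0)])  # [0, 1, 1, 0]
```

The point of the sketch is that the attacker never needs the individual errors: decryption works because the accumulated noise stays well below q/4, which is exactly the structure our quantum algorithm exploits when given superposed samples.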
Again, our claim is by no means that our algorithm breaks the proposed LWE-based encryption schemes, but rather that LWE-based schemes which are secure classically (assuming the hardness of LWE) may stop being secure against quantum adversaries if the access to the public key generation algorithm becomes also quantum. A similar situation has also appeared in symmetric key cryptography with the so-called superposition attacks <cit.>. There, the attacker has the ability to query the encryption oracle in superposition, and in this way she can in fact break many schemes that are assumed to be secure classically. While in the case of symmetric cryptography the attacker must have quantum access to the encryption oracle in order to break the system, our results show that in the case of LWE-based public-key encryption, the attacker must have quantum access to the public key generation algorithm.

§ ALGORITHM FOR LWE

In this section we present the extension of the Bernstein-Vazirani algorithm to higher order fields and analyse its behaviour on LWE samples. We now present the Field Bernstein-Vazirani algorithm <cit.>; its main component is the Quantum Fourier Transform over ℤ_q, acting on each qudit as

QFT|j⟩ = 1/√(q) ∑_k=0^q-1 ω^jk |k⟩.

----------------------------------------
Field Bernstein-Vazirani algorithm
Input: |ψ⟩ ∈ (ℂ^q)^⊗(n+1)
Output: s̃ ∈ ℤ_q^n ∪ {⊥}
Apply QFT^⊗(n+1) on |ψ⟩.
Measure in the computational basis.
Let |j⟩|j^*⟩ be the output.
If j^* ≠ 0, return -(j^*)^-1 j mod q; else, return ⊥.
----------------------------------------

As a warm-up, we show the behaviour of Field Bernstein-Vazirani for learning linear functions without noise in <Ref>, and then in <Ref> we analyse it for LWE samples.

§.§ Quantum algorithm for learning a linear function without error

If the input |ψ⟩ is a noiseless quantum sample of a linear function, namely

|ψ⟩ = 1/√(q^n) ∑_a ∈ ℤ_q^n |a⟩|a · s mod q⟩,

then Field Bernstein-Vazirani outputs the correct value with probability (q-1)/q: after applying the QFT on each qudit of <ref>, we get the state

1/q^(n+1/2) ∑_a,j ∈ ℤ_q^n ∑_j^* ∈ ℤ_q ω^(a · (j + j^*s)) |j⟩|j^*⟩.

It is not hard to see that the probability that j = -j^*s mod q and j^* ≠ 0 is

‖ ∑_j^* ∈ ℤ_q^* 1/q^(n+1/2) ∑_a ∈ ℤ_q^n ω^0 |-j^*s mod q⟩|j^*⟩ ‖^2 = (q-1)/q.

Therefore, if j^* ≠ 0, we can retrieve s by outputting -(j^*)^-1 j (all operations mod q).

§.§ Analysis of the algorithm for noisy samples

In this section we show that the Field Bernstein-Vazirani algorithm works even if the input is noisy. Instead of the superposition of all elements in ℤ_q^n, we prove our result here for a more general case where the quantum sample has the form

|ψ⟩ = 1/√(v) ∑_a ∈ V |a⟩|a · s + e_a mod q⟩,

where v ∈ [q^n] is a fixed value, V is a random subset of ℤ_q^n of size v and e_a is a random noise. In this case, for every quantum sample, a new subset V of size v is picked independently at random.

Lemma. Fix v ∈ [q^n]. Let V ⊆ ℤ_q^n be a random subset of ℤ_q^n such that |V| = v, and let |ψ⟩ = 1/√(v) ∑_a ∈ V |a⟩|a · s + e_a mod q⟩, where the e_a are random variables with absolute value at most k. Then Field Bernstein-Vazirani(|ψ⟩) outputs s with probability at least v/(20kq^n).

Proof. If we apply the QFT on the state |ψ⟩, we have

1/√(q^(n+1) v) ∑_a ∈ V ∑_j ∈ ℤ_q^n, j^* ∈ ℤ_q ω^(e_a j^* + a · (j + j^*s)) |j⟩|j^*⟩.

From the last equation, we have that the probability that j = -j^*s mod q and j^* ≠ 0 is

‖ 1/√(q^(n+1) v) ∑_a ∈ V ∑_j^* ∈ ℤ_q^* ω^(e_a j^*) |-j^*s mod q⟩|j^*⟩ ‖^2
= 1/(q^(n+1) v) ∑_j^* ∈ ℤ_q^* [ ( ∑_a ∈ V ℜ(ω^(e_a j^*)) )^2 + ( ∑_a ∈ V ℑ(ω^(e_a j^*)) )^2 ]
≥ 1/(q^(n+1) v) ∑_j^* ∈ ℤ_q^* ( ∑_a ∈ V ℜ(ω^(e_a j^*)) )^2
≥ γ v cos(2πγ)^2/(kq^n),

where γ ∈ (0, 1/4), and ℜ(z) and ℑ(z) are the real and imaginary parts of z, respectively.
For the first inequality, we have removed some positive quantities, and the last inequality follows from the fact that ℜ(ω^(e_a j^*)) ≥ cos(2πγ) for |j^*| ≤ γq/k and |e_a| ≤ k. The result follows by maximizing the quantity over all γ ∈ (0, 1/4).

We now propose an algorithm that tests a candidate solution.

----------------------------------------
Test Candidate
Input: s̃ ∈ ℤ_q^n, M ∈ ℕ^+
Output: Accept/reject
Repeat M times:
  Pick a sample |ψ⟩ = 1/√(v) ∑_a ∈ V |a⟩|a · s + e_a mod q⟩.
  Measure the sample in the computational basis.
  Let (a', a' · s + e_a') be the output.
  If |a' · s + e_a' - a' · s̃| > k, reject.
Accept.
----------------------------------------

Lemma. For s̃ = s, Test Candidate(s̃, M) accepts with probability 1, while for s̃ ≠ s, Test Candidate(s̃, M) accepts with probability at most ((2k+1)/q)^M.

Proof. Since |a' · s + e_a' - a' · s| = |e_a'| ≤ k by the noise distribution, it follows that the test passes with probability 1 when s̃ = s. For a value a' picked uniformly at random from ℤ_q^n, it follows that a' · (s - s̃) + e_a' mod q is uniformly distributed over ℤ_q if s̃ ≠ s. Therefore, the probability that it lies in the interval [-k, k] is (2k+1)/q. Since the probability is independent for every iteration, the probability that s̃ is accepted after M iterations is ((2k+1)/q)^M.

In <Ref>, we show how to use the previous algorithms to achieve the following theorem.

Theorem. For dimension n, let q be a prime in the interval [2^(n^γ), 2 · 2^(n^γ)). Let |ψ⟩ = 1/√(q^n) ∑_a ∈ ℤ_q^n |a⟩|a · s + e_a mod q⟩, where the e_a are random variables drawn from a noise distribution with noise magnitude at most k = poly(n). There is an algorithm that outputs s with probability 1 - η with sample complexity O(k log(1/η)) and running time poly(n, log(1/η)).

We then show how to extend the result to related problems in <Ref>.

§ OPEN PROBLEMS

§.§ Generalizing from linear functions

Learning linear functions can be seen as finding a hidden subgroup H = {a | a · s = 0} of ℤ_q^n. Efficient algorithms for the general Abelian Hidden Subgroup Problem are known <cit.><cit.>, and we leave as an open question whether these algorithms are also tolerant to noise.

§.§ LWE over rings

Due to technical reasons regarding the representation of polynomials in Ring-LWE instances (see <Ref> for more details), our LWE algorithm cannot be used to solve Ring-LWE with quantum samples, and we leave this question as an open problem.

§ ACKNOWLEDGMENTS

AG and IK thank Ronald de Wolf for helpful discussions. AG thanks also Lucas Boczkowski, Brieuc Guinard, François Le Gall and Alexandre Nolin for helpful discussions. Supported by ERC QCC and French Programme d'Investissement d'Avenir RISQ P141580.

§ NOTATION

For n ∈ ℕ, we define [n] := {1, ..., n}. For a complex number x = a + ib, a, b ∈ ℝ, we define its norm |x| by √(a^2 + b^2), its real part ℜ(x) = a and its imaginary part ℑ(x) = b. We denote by ω the q-th root of unity, where q will be clear from the context. For the field ℤ_q and an element a ∈ ℤ_q, we denote by |a| the unique value b ∈ [-(q-1)/2, (q-1)/2] such that b ≡ a mod q. We now recall the notation for quantum information and computation; for readers not familiar with these concepts we refer to Ref. <cit.>. Let {e_i} be the standard basis for the q-dimensional Hilbert space ℂ^q. We denote |i⟩ = e_i, and a q-dimensional qudit is a unit vector in this space, i.e. |ψ⟩ = ∑_i ∈ ℤ_q α_i |i⟩, for α_i ∈ ℂ and ∑_i ∈ ℤ_q |α_i|^2 = 1. We call the state a qubit when q = 2.
A k-qudit quantum state is a unit vector in the complex Hilbert space ℂ^(q^k), and we shorthand the basis states of this space |i_1⟩⊗...⊗|i_k⟩ as |i_1⟩...|i_k⟩.

§ AN EFFICIENT QUANTUM LEARNING ALGORITHM FOR LWE

In this section we show how to use <Ref> in order to solve LWE with quantum samples using the noise distributions proposed in Brakerski and Vaikuntanathan <cit.>, proving <Ref>. There, the field order q is sub-exponential in the dimension n, generally in [2^(n^γ), 2 · 2^(n^γ)) for some constant γ ∈ (0,1), while the noise distribution χ produces samples with magnitude at most polynomial in n (for instance linear).

----------------------------------------
LWE Algorithm(L, M)
Input: L, M ∈ ℕ^+
Output: s̃ ∈ ℤ_q^n ∪ {⊥}
Repeat L times:
  Pick a quantum sample |ψ⟩.
  Run Field Bernstein-Vazirani(|ψ⟩) to get output s̃.
  Run Test Candidate(s̃, M).
  If s̃ passes the test, return s̃.
Return ⊥.
----------------------------------------

Lemma. LWE Algorithm(L, M) outputs s with probability at least 1 - (1 - v/(20kq^n))^L - L(3k/q)^M.

Proof. LWE Algorithm(L, M) does not output s if either Test Candidate(s̃, M) accepts some s̃ ≠ s before an iteration where Field Bernstein-Vazirani outputs s, or LWE Algorithm outputs ⊥. We can upper bound the probability of this event by the probability that at least one of L independent calls to Test Candidate(s̃, M) accepts some s̃ ≠ s, or that L independent calls to Field Bernstein-Vazirani do not output s. From <Ref> and using the union bound, the probability that at least one of L independent calls to Test Candidate(s̃, M) accepts some s̃ ≠ s is at most L((2k+1)/q)^M ≤ L(3k/q)^M. From <Ref>, the probability that s is not the output of any of L independent calls to Field Bernstein-Vazirani is at most (1 - v/(20kq^n))^L. By the union bound, LWE Algorithm(L, M) does not output s with probability at most (1 - v/(20kq^n))^L + L(3k/q)^M.

<Ref> follows directly from <Ref> by picking v = q^n, L = 20k ln(1/η) and M = 1.

§ QUANTUM LEARNING COMPLEXITY OF RELATED PROBLEMS

In this section we present learning algorithms for problems that are related to LWE.

§.§ Learning parity with noise

We show here our result for the Learning Parity with Noise (LPN) problem, which is the LWE problem for q = 2. Here, the parity bit is flipped independently for each element in the superposition with probability η. This is the same noise model proposed by Bshouty and Jackson <cit.>. Note that Cross et al. <cit.> studied LPN with different noise models. In the first, all parities in the superposition are flipped at the same time with probability η. In the second one, each qubit passes through a depolarising channel. Our algorithm and analysis also work for both of the noise models proposed by Cross et al. <cit.>. The algorithm is the same as in the previous section, where now the QFT is over ℤ_2 (also called the Hadamard transform H).

Theorem. Let 1/√(2^n) ∑_a ∈ {0,1}^n |a⟩|a · s + e_a mod 2⟩ be a quantum sample, where the e_a are iid random variables with value 0 with probability 1-η and 1 with probability η. For every constant 0 < δ < 1, applying a Hadamard transform on all qubits and measuring them in the computational basis provides an outcome |j⟩|j^*⟩, where j ∈ {0,1}^n and j^* ∈ {0,1}, such that, with probability exponentially close to 1 over the noise, the probability that j = s and j^* = 1 is at least 1/2 (1-δ)^2 (1-2η)^2.

Proof. If we apply Hadamards on each qubit of the sample state, we have

1/2^(n+1/2) ∑_a ∈ {0,1}^n ∑_j ∈ {0,1}^n, j^* ∈ {0,1} (-1)^(e_a j^* + a · (j + j^*s)) |j⟩|j^*⟩.

We now calculate the probability that j^* = 1 and the first qubits are in the state |s⟩:

‖ 1/2^(n+1/2) ∑_a ∈ {0,1}^n (-1)^(e_a + a · (s + s)) |s⟩ ‖^2 = 1/2^(2n+1) ( ∑_a ∈ {0,1}^n (-1)^(e_a) )^2.

From the distribution of each e_a, we have that (-1)^(e_a) is 1 w.p. 1-η and -1 w.p. η, independently.
Therefore 𝔼[(-1)^(e_a)] = 1-2η, and using Hoeffding's bound we have that

Pr[ ∑_a ∈ {0,1}^n (-1)^(e_a) ≤ (1-δ)(1-2η)2^n ] < e^(-δ^2 (1-2η)^2 2^n/4).

Therefore, with probability exponentially close to 1, the probability that j = s is at least

1/2^(2n+1) ((1-δ)(1-2η)2^n)^2 = 1/2 (1-δ)^2 (1-2η)^2.

We can test a candidate solution s̃ along the lines of <Ref> and then repeat the process a linear number of times, in which case the algorithm can find the right s with probability exponentially close to 1.

§.§ LWE over rings

The Ring-LWE problem <cit.>, a variant of LWE over a ring of polynomials, has been proposed in order to improve the performance of cryptographic constructions using LWE, at the cost of needing stronger assumptions for proving its hardness. The Ring-LWE problem uses the structure of the ring ℛ_q = ℛ/qℛ for a prime q, where ℛ = ℤ[x]/(f(x)) and f(x) is a cyclotomic polynomial. As in LWE, a Ring-LWE sample is the pair (a, as + e mod q) for a fixed secret s ∈ ℛ_q, a random a ∈ ℛ_q, and e picked according to some error distribution χ.

Unfortunately, our algorithm cannot be used to solve Ring-LWE with the noise model proposed by Bshouty and Jackson <cit.>, due to technical issues in representing the polynomials. In order to use the quantum learning algorithm for LWE, we need to find an isomorphism ϕ from ℛ_q to ℤ_q^n, where n = φ(m) is the number of invertible elements modulo m. With this isomorphism, we can consider a sample (a, as+e) ∈ ℛ_q^2 as two vectors in ℤ_q^n, and a superposition of quantum states representing these vectors can be written as

|ψ⟩ = 1/√(q^n) ∑_a ∈ ℛ_q |ϕ(a)⟩|ϕ(as + e_a)⟩,

and applying the QFT to every register of this state results in

QFT^⊗ 2n |ψ⟩ = 1/√(q^3n) ∑_a ∈ ℛ_q ∑_x,y ∈ ℤ_q^n ω^(ϕ(a) · x) |x⟩ ⊗ ω^(ϕ(as + e_a) · y) |y⟩ = 1/√(q^3n) ∑_a ∈ ℛ_q ∑_x,y ∈ ℤ_q^n ω^(ϕ(a) · (x + yϕ(s)) + y · ϕ(e_a)) |x⟩|y⟩,

where the second equality holds because ϕ is a homomorphism.

We consider two ways of representing elements of ℛ_q as integer vectors. The first one consists of identifying a polynomial in ℛ_q with the vector containing its coefficients. However, this coefficient embedding is not a homomorphism to ℤ_q^n, and the following identity, used in <ref>, does not hold:

ϕ(a) · x + ϕ(a · s + e_a) · y = ϕ(a) · (x + yϕ(s)) + y · ϕ(e_a).

Therefore, this representation of polynomials cannot be used within our learning algorithm.

The second way of representing a polynomial is through the map ϕ(p(x)) = (p(ω_m), …, p(ω_m^(m-1))), where ω_m ∈ ℤ_q is a primitive m-th root of unity. This map is particularly interesting since multiplication in ℤ_q^n is done component-wise <cit.>, and therefore it can be used in implementations of Ring-LWE with efficient multiplication <cit.>. However, in these constructions, the error is sampled from a distribution over polynomials with small coefficients, and after applying the isomorphism, ϕ(e_a) can be arbitrarily large in ℤ_q^n, which cannot be handled by our algorithm if the error is independent for each element in the superposition.

Finally, we show how to solve Ring-LWE for the error model presented in Cross et al. <cit.>, namely, when the noise is the same for all elements in the superposition. Let ϕ be any isomorphism from ℛ_q to ℤ_q^n. We can map the original quantum sample using ϕ, resulting in

1/√(q^n) ∑_a ∈ ℛ_q |ϕ(a)⟩ ⊗ |ϕ(as + e)⟩,

and using the Field Bernstein-Vazirani algorithm on this state we have

QFT^⊗ 2n |ψ⟩ = 1/√(q^3n) ∑_a ∈ ℛ_q ∑_x,y ∈ ℤ_q^n ω^(ϕ(a) · x) |x⟩ ⊗ ω^(ϕ(as + e) · y) |y⟩ = 1/√(q^3n) ∑_y ∈ ℤ_q^n ω^(y · ϕ(e)) ∑_a ∈ ℛ_q ∑_x ∈ ℤ_q^n ω^(ϕ(a) · (x + yϕ(s))) |x⟩|y⟩.

By measuring the last register, the error becomes a global phase and we are able to retrieve s as shown in <Ref>.
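The following toy check (our own illustration, not the paper's code) shows why this second representation is attractive: under the evaluation map, ring multiplication becomes component-wise multiplication over ℤ_q. To keep the sketch short we use f(x) = x^m - 1 rather than a cyclotomic polynomial, so all powers of an m-th root of unity serve as evaluation points; this simplifying assumption departs from the Ring-LWE setting above.

```python
import numpy as np

q, m = 13, 4                               # toy sizes; w is an m-th root of 1 in Z_q
w = next(x for x in range(2, q)
         if pow(x, m, q) == 1 and all(pow(x, d, q) != 1 for d in range(1, m)))

def phi(p):
    """Evaluation map p(x) -> (p(w^0), p(w^1), ..., p(w^(m-1))) over Z_q."""
    return np.array([sum(int(c) * pow(w, t * i, q) for i, c in enumerate(p)) % q
                     for t in range(m)])

def ring_mul(p1, p2):
    """Multiply coefficient vectors mod x^m - 1, with coefficients mod q."""
    out = [0] * m
    for i, c1 in enumerate(p1):
        for j, c2 in enumerate(p2):
            out[(i + j) % m] = (out[(i + j) % m] + int(c1) * int(c2)) % q
    return np.array(out)

rng = np.random.default_rng(2)
p1, p2 = rng.integers(0, q, m), rng.integers(0, q, m)
# phi is a ring homomorphism: multiplication becomes component-wise
print(np.array_equal(phi(ring_mul(p1, p2)), phi(p1) * phi(p2) % q))  # True
```

The same check run on the coefficient embedding (simply keeping the coefficient vector) fails, which is the identity violation pointed out above.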
§.§ Learning with Rounding

LWE has been used in the construction of several cryptographic primitives. However, its usage is sometimes limited. For instance, in the implementation of pseudo-random functions, the output must use little or no randomness, which does not correspond to the inherent randomness in LWE's input. For this purpose, Banerjee, Peikert and Rosen <cit.> proposed a derandomized version of LWE called Learning with Rounding (LWR), which does not compromise hardness. LWR has been used in the construction of pseudo-random functions <cit.> and deterministic public key encryption <cit.>.

The main idea of LWR consists in replacing a · s + e_a by the 'rounding' of a · s with respect to some modulus p ≪ q, which can be seen as a 'deterministic noise'. More precisely, the rounding function is defined as follows:

⌊·⌉_p : ℤ_q → ℤ_p, with ⌊x⌉_p = ⌊(p/q)x⌉ mod p.

An LWR sample is then given by (a, ⌊a · s⌉_p) for some a sampled from the uniform distribution on ℤ_q^n.

Lemma. Let |ψ⟩ = 1/√(q^n) ∑_a ∈ ℤ_q^n |a⟩|⌊a · s⌉_p⟩ be a quantum LWR sample, and let |ϕ⟩ be the state obtained when we multiply the last register of |ψ⟩ by q/p. Then Field Bernstein-Vazirani(|ϕ⟩) outputs s with probability at least p/(12(q-1)).

Proof. For a fixed a, we have that

(q/p)⌊a · s⌉_p = a · s + ((q/p)⌊a · s⌉_p - a · s) mod q.

Since -q/(2p) ≤ (q/p)⌊a · s⌉_p - a · s ≤ q/(2p) mod q, the result follows from <Ref> for k = q/(2p).

§.§ Quantum samples for the SIS problem

We present in this section a learning algorithm for another relevant problem in cryptography, the Short Integer Solution problem. As the name indicates, the Short Integer Solution problem (SIS) consists in finding a short integer solution to a system of linear equations; we present now its formal definition.

Definition. Given a random matrix A ∈ ℤ_q^(m × n) and a random vector z ∈ ℤ_q^m, the SIS_n,m,q,β problem is to find a vector x ∈ ℤ_q^n such that Ax = z mod q with ‖x‖ < β.

As in the LWE case, the hardness of SIS is also proved through the reduction of (expected to be) hard lattice problems <cit.><cit.><cit.><cit.>. We remark that if we drop either the constraint of having an integer solution or having a short solution, the problem can be easily solved using Gaussian elimination. The SIS problem and its variants have been used to prove the security of constructions of signature schemes <cit.><cit.> and hash functions <cit.>. In these schemes, samples of the form (A, Av) are public, where v is a small random vector and A is a random matrix. Inspired by the LWE case, we can define a quantum sample for the SIS problem as

|ψ⟩ = 1/√(q^(nm)) ∑_A ∈ ℤ_q^(m × n) |A⟩|Av mod q⟩,

and we are interested in the sample complexity of finding the (fixed) short solution v. Using Field Bernstein-Vazirani brings the same problem as Gaussian elimination: there is no guarantee of finding a short solution instead of an arbitrary one. We notice that, tracing out m-1 rows of A and the corresponding positions of Av, we remain with

1/√(q^n) ∑_a ∈ ℤ_q^n |a⟩|a · v⟩,

and we show an algorithm that works even for this type of quantum sample. The algorithm consists of testing all possible values j ∈ {-k, ..., k} of -v_i. The test on j = -v_i passes with probability 1, while the test rejects with constant probability for j ≠ -v_i. By repeating the test L times, the probability of finding the correct value is amplified.

----------------------------------------
SIS Algorithm(L)
Input: L ∈ ℕ^+
Output: ṽ ∈ ℤ_q^n
For i ∈ [n] do:
  For j ∈ {-k, ..., k} do:
    For l ∈ [L]:
      Pick a quantum sample 1/√(q^n) ∑_a ∈ ℤ_q^n |a⟩|a · v⟩.
      Add j a_i to the last register.
      Apply the QFT on the i-th qudit of a and measure it.
      If the outcome is not |0⟩, test the next value of j.
    Set ṽ_i = -j and continue with the next value of i.
Output ṽ.
----------------------------------------

Lemma. Let v ∈ ℤ_q^n whose coefficients are all smaller in absolute value than some bound k. Given quantum samples of the form |ψ⟩ = 1/√(q^n) ∑_a ∈ ℤ_q^n |a⟩|a · v⟩, SIS Algorithm(L) outputs v with probability at least 1 - 2kn/q^L.

Proof. We start by doing the analysis of the SIS Algorithm for i = 1. After adding j a_1 to the last register of the quantum sample, we have

1/√(q^n) ∑_a ∈ ℤ_q^n |a⟩|a · v + a_1 j⟩ = 1/√(q^n) ∑_a_1 ∈ ℤ_q ∑_ā ∈ ℤ_q^(n-1) |a_1⟩|ā⟩|a_1(v_1 + j) + ā · v̄⟩,

where ā = (a_2, ..., a_n) and v̄ = (v_2, ..., v_n). If j = -v_1, then the previous state is the product state

1/√(q) ∑_a_1 ∈ ℤ_q |a_1⟩ ⊗ 1/√(q^(n-1)) ∑_ā ∈ ℤ_q^(n-1) |ā⟩|ā · v̄⟩,

and since QFT(1/√(q) ∑_a_1 ∈ ℤ_q |a_1⟩) = |0⟩, the test passes for all l ∈ [L]. On the other hand, if j ≠ -v_1, then the state is entangled, and the reduced density matrix of the first register is the maximally mixed state 1/q ∑_a_1 ∈ ℤ_q |a_1⟩⟨a_1|. In this case, after applying the QFT on the first register and measuring it, the outcome is |0⟩ with probability 1/q. Therefore, we set ṽ_1 = -j for a wrong value of j only if for all L independent samples the measurement outcome after the QFT is |0⟩, and this happens with probability 1/q^L. By the union bound, the probability that the test passes for any value j ≠ -v_1 is at most 2k/q^L.

Finally, the previous analysis holds for every i ∈ [n]. Since v ≠ ṽ iff there exists an i ∈ [n] such that ṽ_i ≠ v_i, we can use the union bound again to show that this happens with probability at most 2kn/q^L. By picking L = max{1, log(2kn/η)/log q}, the algorithm outputs the correct v with probability at least 1 - η.
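Since the only quantum ingredients of SIS Algorithm enter through the two outcome probabilities derived above (|0⟩ with certainty for the correct guess, and |0⟩ with probability 1/q otherwise), its behaviour can be simulated entirely classically. The following sketch (ours, with toy parameters) does exactly that; the values of q, k and v are illustrative assumptions.

```python
import numpy as np

def test_passes(v, i, j, q, rng):
    """One run of the inner test, simulated classically: for j = -v_i the i-th
    register is a uniform product state, so the QFT maps it to |0> with
    certainty; otherwise it is maximally mixed and |0> occurs w.p. 1/q."""
    if (j + v[i]) % q == 0:
        return True
    return rng.integers(q) == 0

def sis_algorithm(v, q, k, L, rng):
    v_tilde = []
    for i in range(len(v)):
        for j in range(-k, k + 1):
            if all(test_passes(v, i, j, q, rng) for _ in range(L)):
                v_tilde.append((-j) % q)     # accept: set v~_i = -j
                break
    return np.array(v_tilde)

rng = np.random.default_rng(3)
q, k = 101, 3
v = np.array([2, 100, 0, 99])                # short vector: |v_i| <= k mod q
print(sis_algorithm(v, q, k, L=2, rng=rng))  # [2, 100, 0, 99] w.h.p.
```

With these numbers the per-guess false-accept probability is 1/q^L ≈ 10^-4, so the union-bound failure probability 2kn/q^L of the lemma is already small for L = 2.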
http://arxiv.org/abs/1702.08255v2
{ "authors": [ "Alex B. Grilo", "Iordanis Kerenidis", "Timo Zijlstra" ], "categories": [ "quant-ph", "cs.CC" ], "primary_category": "quant-ph", "published": "20170227122107", "title": "Learning with Errors is easy with quantum samples" }
1) Quantum Optoelectronics Laboratory, School of Physical Science and Technology, Southwest Jiaotong University, Chengdu, 610031, China 2) National Institute of Standards and Technology, Boulder, CO 80305, USA [Contribution of the U.S. government, not subject to copyright] 3) State Key Laboratory of Optoelectronic Materials and Technologies, School of Physics, Sun Yat-Sen University, Guangzhou 510275, China

We demonstrate photon counting at 1550 nm wavelength using microwave kinetic inductance detectors (MKIDs) made from TiN/Ti/TiN trilayer films with superconducting transition temperature T_c ≈ 1.4 K. The detectors have a lumped-element design with a large interdigitated capacitor covered by aluminum and inductive photon absorbers whose volume ranges from 0.4 μm^3 to 20 μm^3. The energy resolution improves as the absorber volume is reduced. We achieved an energy resolution of 0.22 eV and resolved up to 7 photons per optical pulse, both greatly improved from previously reported results at 1550 nm wavelength using MKIDs. Further improvements are possible by optimizing the optical coupling to maximize photon absorption into the inductive absorber.

Counting Near Infrared Photons with Microwave Kinetic Inductance Detectors
J. Gao^2

Photon-number-resolving (PNR) detectors at near infrared wavelengths have important applications in a number of frontier fields, such as quantum secure communications <cit.>, linear optical quantum computing <cit.> and optical quantum metrology <cit.>. Compared to more conventional detectors at this wavelength, such as silicon-based detectors <cit.>, superconducting detectors have lower dark-count rate, higher sensitivity, and broadband response. They show great promise in serving as the basic building blocks for efficient PNR devices. For example, by spatial or temporal multiplexing of superconducting nanowire single-photon detectors (SNSPDs) <cit.>, photons can be counted at high speed, but the single-element nanowire has no intrinsic PNR and energy-resolving capabilities. Alternatively, single-element transition edge sensors (TESs) <cit.> have demonstrated high quantum efficiency and multi-photon discrimination at telecommunication wavelengths <cit.>. Recently, counting up to 29 photons and an intrinsic energy resolution ≈ 0.11 eV at 1550 nm wavelength have been achieved in Ti/Au TESs <cit.>.

Another type of superconducting detector possessing intrinsic photon-number-resolving and energy-resolving power is the microwave kinetic inductance detector (MKID) <cit.>. MKIDs are Cooper-pair-breaking detectors based on high-quality-factor (high-Q) superconducting resonators <cit.>. The absorption of a photon with energy higher than twice the gap energy (hν > 2Δ) can break Cooper pairs into quasiparticles, changing the surface impedance of the resonator and resulting in a lower resonance frequency f_r and higher internal dissipation (or lower quality factor Q_i). When applying a short optical pulse to the detector and probing the resonator with a microwave tone near the resonance frequency, one can obtain a pulse response in the complex forward transmission S_21, as shown in Fig. 1(a). This photon response can be measured using a homodyne detection scheme (Fig. 1(d)) and the signal can be decomposed into frequency and dissipation responses (Fig. 1(a),(b)) for pulse analysis. Compared to TESs, MKIDs are easy to fabricate and multiplex into large arrays.
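As a rough numerical illustration of this readout principle (ours, not from the paper), the sketch below evaluates the standard shunt-coupled resonator transmission model, as used e.g. in Gao's thesis cited later, at a fixed probe tone before and after a photon-induced shift of f_r and Q_i. The shift magnitudes are made-up values for illustration only.

```python
import numpy as np

def s21(f, fr, Qi, Qc):
    """Complex forward transmission of a shunt-coupled resonator."""
    Q = 1.0 / (1.0 / Qi + 1.0 / Qc)              # total (loaded) quality factor
    return 1.0 - (Q / Qc) / (1.0 + 2j * Q * (f - fr) / fr)

fr0, Qi0, Qc = 6.0e9, 1.0e5, 1.5e4               # values quoted in the text
f_probe = fr0                                    # fixed probe tone on resonance
quiescent = s21(f_probe, fr0, Qi0, Qc)
# after photon absorption: quasiparticles lower f_r and Q_i (toy magnitudes)
hit = s21(f_probe, fr0 * (1.0 - 2e-6), 0.9 * Qi0, Qc)
print(abs(hit - quiescent))                      # pulse amplitude in the IQ plane
```

Tracking this complex displacement over time, and projecting it onto the frequency and dissipation directions, gives the two pulse responses of Fig. 1(a),(b).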
A large array of MKIDs can be measured using a pair of coaxial cables, which greatly reduces the complexity of the instrument design. Previously, MKIDs with PNR capability have mostly been considered for astronomy applications at visible wavelengths <cit.>. Single-photon counting at telecommunication wavelengths (near infrared) with titanium-nitride (TiN) MKIDs was first demonstrated in Ref. <cit.>, where a full-width-at-half-maximum (FWHM) energy resolution Δ E ≈ 0.4 eV was achieved and up to 2-photon events were resolved. In this letter, we present an optimized MKID design based on TiN/Ti/TiN trilayer films and improved photon counting performance at 1550 nm wavelength: energy resolution Δ E ≈ 0.22 eV is obtained and up to 7-photon events can be resolved.

Our detectors are made from a 20 nm thick TiN/Ti/TiN trilayer film <cit.> (T_c ≈ 1.4 K) deposited on a high-resistivity Si substrate. Such TiN trilayer films were initially developed for feedhorn-coupled MKIDs, which have recently demonstrated photon-noise-limited sensitivity at submillimeter wavelengths <cit.>. As shown in Fig. 1(c), our detectors comprise a large IDC shunted by a meandered inductive strip. The latter serves as a sensitive photon absorber. The IDC area is ≈ 0.7 mm × 0.7 mm, with 5 μm finger/gap width. This large-area IDC is used to suppress the two-level system (TLS) noise in the substrate <cit.>. The IDC is covered with a 100 nm thick layer of aluminum (Al). Because of the low current density in the IDC and the much lower kinetic inductance of Al than TiN, the response from a photon hitting the IDC area is negligible. We designed 13 resonators on a 10 mm by 5 mm chip, with inductor strip width ranging from 1 μm to 20 μm, length from 10 μm to 100 μm and volume from 0.4 μm^3 to 20 μm^3, to systematically study the dependence of the detector performance on the absorber geometry. All the resonance frequencies are designed to be around 6 GHz and all the resonators are coupled to a common microstrip feedline with coupling quality factor Q_c ≈ 1.5 × 10^4.

The detectors are cooled in a dilution refrigerator to a base temperature of 40 mK. At this temperature, the internal quality factors of the resonators are measured to be around 10^5. A 1550 nm laser diode driven by a function generator at room temperature is used to generate optical pulses with a width of 200 ns at a repetition frequency of 120 Hz. The incident photons are then attenuated and guided into the device box mounted at the mixing chamber stage through a bare optical fiber. In this demonstration experiment, we did not optimize the optical coupling to the absorber and the light exiting the fiber flood-illuminates the entire chip instead of being focused only onto the absorber area. As a result, the optical efficiency is rather low, which we plan to improve in future experiments. As shown in Fig. 1(d), the standard homodyne scheme is used to read out the resonators. We probe the resonators at a microwave frequency that maximizes the frequency response δS_21/δf_r, and the microwave power is chosen to be 2 dB below the bifurcation power to avoid strong non-linear effects <cit.> in the resonator. For each optical pulse, the corresponding response of the detector is digitized at a sampling rate of 2.5 Ms/s. The raw data are converted to frequency and dissipation responses. Only the frequency response data are further analyzed, because the dissipation response is smaller compared to the frequency response and the dissipation pulse decay time is much faster (see Fig. 1(b))
due to the anomalous electrodynamic effect found previously in TiN films <cit.>. Note that we have used a rigorous non-linear fitting procedure to directly convert the pulse trajectory in the IQ plane to the fractional frequency shift, because the response in fractional-frequency-shift units is always linearly proportional to the change in the quasiparticle density, even when the pulse response is large (approaching the resonator linewidth) and the phase shift becomes nonlinear. We analyze the pulse data by using standard Wiener optimal filter procedures, and the filtered pulse height data are used to generate photon-counting statistics.

Fig. 2(a) shows a histogram of the optimally filtered pulse height data for 2 × 10^4 pulse events measured from the resonator with absorber width of 2 μm and volume of 1.92 μm^3. The first 3 peaks, which correspond to the events of 0, 1, and 2 photons being absorbed in the detector, are clearly observed. We fit the histogram to a model of a superposition of 4 Gaussian peaks with independent heights and widths, as shown by the red profile in Fig. 2(a). The FWHM energy resolution Δ E_n of the n-photon peak is related to the standard deviation σ_n of the n-th Gaussian peak by

Δ E_n = 2√(2 ln 2) · σ_n/(A_n - A_n-1) · hν, n = 1, 2, ...,

where hν = 0.80 eV is the energy of a single 1550 nm photon and A_n is the pulse height of the n-photon peak. The obtained FWHM energy resolutions for the 1-photon and 2-photon peaks are Δ E_1 = 0.34 eV and Δ E_2 = 0.42 eV, respectively. Here we claim a peak is resolved if Δ E/hν < 1. According to this criterion, this detector has the sensitivity to resolve the first 3 peaks (0-, 1- and 2-photon). According to the stochastic nature of the photon detection process, the n-photon events should obey Poisson statistics. Indeed, as shown in the inset of Fig. 2(a), the counts in the n-photon peak (proportional to the area of each Gaussian) normalized by the total counts match a Poisson distribution with λ = 0.61. Here λ is the mean photon number absorbed by the detector, suggesting that our detector detects an average of 0.61 photons per pulse event.

Fig. 2(b) shows the photon counting histogram at a higher input optical power, corresponding to a mean photon number λ = 1.95. The first 6 (0- to 5-photon) peaks are resolved with energy resolutions of Δ E_1 = 0.36 eV and Δ E_2 = 0.45 eV for the 1- and 2-photon peaks, respectively, both slightly increased from Fig. 2(a). Fig. 2(c) shows the histogram at an even higher optical power with a mean photon number of λ = 3.78, where the first 8 (0- to 7-photon) peaks are resolved. In the 3 histograms shown in Fig. 2, we see that the 1- and 2-photon peaks are clearly broadened as compared to the 0-photon peak, indicating that additional noise arises when photons are absorbed and that the energy resolution for the n-photon peak (n ≥ 1) is not dominated by the background noise of the detector in the dark environment. We speculate the broadening might be related to several factors, including the position-dependent response of the absorber, parasitic response from photons hitting the non-absorber area (e.g., IDC, substrate, feedline), and some unknown sources of photon-induced noise. We have simulated the current distribution using Sonnet (an electromagnetic simulation software) and the results show that the current is uniform throughout the inductor strip to within 0.4%. This is expected because the dimensions of the inductors (< 100 μm) are much smaller than the microwave wavelength (> 1 cm around 6 GHz).
Since the resonator frequency response is proportional to the local kinetic inductance change weighted by the square of the current distribution <cit.>, broadening of the photon peak should not be dominated by the non-uniform current distribution in the inductive absorber.

In Fig. 2(d), we plot the detected mean photon number as a function of the estimated total number of photons incident onto the absorber area, which is perfectly linear as expected. The incident photon number is estimated from the total optical power measured by a power meter and the solid angle covered by the absorber area at the distance from the absorber to the fiber tip. Due to the low photon absorption efficiency, our detector can absorb and detect only 1 photon for approximately every 10 incoming photons hitting the absorber.

In this work, we have 13 resonators with different absorber volumes, which allows us to compare the photon counting statistics. The main results are summarized in Fig. 3. Fig. 3(a) shows the 1-photon responsivity (fractional frequency shift δf_r/f_r induced by absorbing 1 photon) as a function of the absorber volume V. The measured responsivity is fitted well by a linear relation with 1/V. This is expected because δf_r/f_r ∝ δn_qp ∝ 1/V, where n_qp is the quasiparticle density. Fig. 3(b) shows the widths (i.e., the standard deviations σ_0 and σ_1 converted to δf_r/f_r, which is a measure of the frequency noise) of the 0-photon peak (black dots) and 1-photon peak (green dots) as a function of V. Both widths roughly fit onto a power law of V^-0.7, and the 1-photon peak is about ∼ 4.5 times wider than the 0-photon peak. Combining the responsivity data from Fig. 3(a) and the noise data from Fig. 3(b), we derive the 1-photon energy resolution Δ E_1 from Eqn. (1) as a function of V, which is plotted in Fig. 3(c). We see that Δ E_1 increases with V and scales as ≈ V^0.3. Our results suggest that the energy resolution improves as the absorber volume is reduced. The best Δ E_1 we obtained is 0.22 eV, corresponding to an energy-resolving power of R = hν/Δ E_1 = 3.7 at 1550 nm, which is achieved in the resonator with the smallest absorber volume of 0.4 μm^3 and also the narrowest inductor width of 1 μm. In Fig. 3(d), we plot the maximum number of photons that can be resolved by each detector, N_r, as a function of its absorber volume V. We see that N_r drops at both the smallest and largest V. N_r drops at large V because the energy resolution degrades as V is increased (Fig. 3(c)). N_r also drops at small V because the large responsivity and high photon number lead to 'saturation' of the detector, where the frequency shift of the pulse exceeds the resonator bandwidth and the signal-to-noise ratio is degraded. To increase the bandwidth for operation, we can design resonators with lower Q_c and/or higher resonance frequency f_r.

The best theoretical energy-resolving power that can be achieved by an MKID as a pair-breaking detector is given by R = (1/2.355)√(ηhν/(FΔ)), where η ≈ 0.57 is the conversion efficiency from photons to quasiparticles <cit.>, hν is the energy of the incident photons, Δ = 1.72 k_B T_c is the superconducting gap energy of the absorber material, and F is the Fano factor <cit.>. This predicts a theoretical R = 45 at 1550 nm (a typical value of F = 0.2 is assumed), which is an order of magnitude higher than the R = 3.7 achieved by our best detector.
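The quoted theoretical value is easy to reproduce from the stated numbers; the short sketch below (ours) simply plugs the parameters given in the text into the formula.

```python
import math

kB = 8.617e-5                     # Boltzmann constant in eV/K
eta, F, Tc = 0.57, 0.2, 1.4       # conversion efficiency, Fano factor, T_c in K
h_nu = 0.80                       # 1550 nm photon energy in eV
Delta = 1.72 * kB * Tc            # gap energy of the T_c = 1.4 K TiN absorber, eV
R = math.sqrt(eta * h_nu / (F * Delta)) / 2.355
print(round(R))                   # ~45, an order of magnitude above R = 3.7
```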
Coincidentally, the optical lumped-element MKIDs <cit.> made from 20-60 nm substoichiometric TiN films have a typical energy-resolving power R = 16 at 254 nm, which is also an order of magnitude below their Fano limit R = 150. While this suggests that TiN-based photon counting detectors have large room to improve, it is important to understand why they 'underperform' their theoretical prediction. In fact, η ≈ 0.57 is the ideal conversion efficiency when photons are absorbed in a bulk superconductor. Our film is only 20 nm thick and the high-energy phonons may quickly escape the film into the substrate before breaking more quasiparticles, leading to an efficiency η smaller than 0.57 and a smaller response. This phonon loss process may also fluctuate and cause additional noise, as observed in thin-film superconducting tunnel junction photon detectors <cit.>. In future experiments, we plan to further explore this phonon loss effect, as well as the V^0.3 energy resolution scaling, by testing different thicknesses of TiN films and by making the absorber on a suspended membrane.

Many aspects of our design and experimental setup can be improved. If the responsivity and noise trends still hold below 0.4 μm^3, we expect that better energy resolution can be achieved by using an absorber volume even smaller than 0.1 μm^3. Instead of using the T_c ≈ 1.4 K trilayer, a lower-T_c TiN film with a lower gap energy may further boost the responsivity. Suspending the absorber on a thin silicon membrane may increase the quasiparticle recombination time and the conversion efficiency, as suggested by the 'phonon recycling' scheme <cit.>. According to the optical measurements on thin TiN films by Valkonen <cit.>, we estimate that the reflectance and transmittance for our 20 nm TiN film are about 60% and 10%, respectively, indicating that approximately only 30% of photons are absorbed. The photon absorption efficiency can be greatly enhanced by adding an anti-reflection coating and embedding the absorber in an optical structure <cit.>. To efficiently collect every photon, the input light should be precisely confined onto the absorber active area, which can be realized using advanced alignment and coupling techniques, such as direct fiber coupling to the detector <cit.> or through a fusion-spliced microlens <cit.>.

In conclusion, we have demonstrated photon counting at 1550 nm using TiN/Ti/TiN trilayer MKIDs. Energy resolution as low as Δ E ≈ 0.22 eV is obtained and up to 7-photon events can be resolved. By studying devices with a variety of geometries, we have systematically investigated the dependence of the photon counting performance on the absorber volume. The energy resolution improves as the absorber volume is reduced. Further improvements in these detectors are possible by improving the detector design and optimizing the optical coupling to maximize the photon absorption into the absorber. With the energy resolution of our MKID photon counting detectors approaching the performance of TESs (currently a factor of two better), the multiplexing advantage of MKIDs may stand out in applications where a large array of detectors with high photon-resolving power is needed.

The MKID devices were fabricated in the NIST-Boulder microfabrication facility. This work was supported in part by the National Natural Science Foundation of China (Grant Nos. 61301031, U1330201). L. F. Wei thanks Profs. C. D. Xie and K. C. Peng for their encouragements and useful discussions.

[Hiskett] P. Hiskett, D. Rosenberg, C. Peterson, R. Hughes, S. Nam, A. Lita, A.
Miller, and J. Nordholt, New J. Phys. 8, 193 (2006).
[Knill] E. Knill, R. Laflamme, and G. J. Milburn, Nature 409, 46 (2001).
[Zwinkels] J. Zwinkels, E. Ikonen, N. Fox, G. Ulm, and M. Rastello, Metrologia 47, R15 (2010).
[Finger] G. Finger, I. Baker, D. Alvarez, D. Ives, L. Mehrgan, M. Meyer, J. Stegmeier and H. J. Weller, Proc. of SPIE 9148, 914817 (2014).
[Goltsman] G. Gol'tsman, O. Okunev, G. Chulkova, A. Lipatov, A. Semenov, K. Smirnov, B. Voronov, A. Dzardanov, C. Williams and R. Sobolewski, Appl. Phys. Lett. 79, 705 (2001).
[Divochiy] A. Divochiy, F. Marsili, D. Bitauld, A. Gaggero, R. Leoni, F. Mattioli, A. Korneev, V. Seleznev, N. Kaurova, O. Minaeva, G. Gol'tsman, K. Lagoudakis, M. Benkhaoul, F. Levy, and A. Fiore, Nature Photonics 2, 302 (2008).
[Dauler] E. Dauler, A. Kerman, B. Robinson, J. Yang, B. Voronov, G. Goltsman, S. Hamilton, and K. Berggren, Journal of Modern Optics 56, 364 (2009).
[Mattioli] F. Mattioli, Z. Zhou, A. Gaggero, R. Gaudio, R. Leoni, and A. Fiore, Optics Express 24, 9067 (2016).
[Irwin] K. D. Irwin, Appl. Phys. Lett. 66, 1998 (1995).
[Miller] A. J. Miller, S. W. Nam, J. M. Martinis, and A. V. Sergienko, Appl. Phys. Lett. 83, 791 (2003).
[Lita] A. Lita, A. Miller and S. Nam, Optics Express 16, 3032 (2008).
[Calkin] B. Calkins, P. Mennea, A. Lita, B. Metcalf, W. Kolthammer, A. Linares, J. Spring, P. Humphreys, R. Mirin, J. Gates, et al., Optics Express 21, 22657 (2013).
[Lolli] L. Lolli, E. Taralli, and M. Rajteri, J. Low Temp. Phys. 167, 803 (2012).
[Lolli2] G. Brida, L. Ciavarella, I. Degiovanni, M. Genovese, L. Lolli, M. Mingolla, F. Piacentini, M. Rajteri, E. Taralli, and M. Paris, New J. Phys. 14, 085001 (2012).
[Lolli3] L. Lolli, E. Taralli, C. Portesi, E. Monticone, and M. Rajteri, Appl. Phys. Lett. 103, 041107 (2013).
[Day] P. K. Day, H. G. LeDuc, B. A. Mazin, A. Vayonakis, and J. Zmuidzinas, Nature 425, 817 (2003).
[zmuidzinas] J. Zmuidzinas, Annual Review of Condensed Matter Physics 3, 169 (2012).
[yiwen] Y. Wang, P. Zhou, L. Wei, H. Li, B. Zhang, M. Zhang, Q. Wei, Y. Fang, and C. Cao, J. Appl. Phys. 114, 153109 (2013).
[Bumble] B. A. Mazin, B. Bumble, S. R. Meeker, K. O'Brien, S. McHugh, and E. Langman, Opt. Express 20, 1503 (2012).
[Jiansong] J. Gao, M. Vissers, M. Sandberg, F. Silva, S. Nam, D. Pappas, D. Wisbey, E. Langman, S. Meeker, B. Mazin, H. Leduc, J. Zmuidzinas, and K. Irwin, Appl. Phys. Lett. 101, 142602 (2012).
[Vissers] M. R. Vissers, J. Gao, M. Sandberg, S. M. Duff, D. S. Wisbey, K. D. Irwin, and D. P. Pappas, Appl. Phys. Lett. 102, 232603 (2013).
[hannes] J. Hubmayr, J. Beall, D. Becker, H.-M. Cho, M. Devlin, B. Dober, C. Groppi, G. C. Hilton, K. D. Irwin, D. Li, P. Mauskopf, D. P. Pappas, J. Van Lanen, M. Vissers, Y. Wang, L. F. Wei, and J. Gao, Appl. Phys. Lett. 106, 073505 (2015).
[gao2008] J. Gao, M. Daal, J. Martinis, A. Vayonakis, J. Zmuidzinas, B. Sadoulet, B. Mazin, P. Day, and H. Leduc, Appl. Phys. Lett. 92, 212504 (2008).
[PropTiN] J. Gao, M. R. Vissers, M. Sandberg, D. Li, H. M. Cho, C. Bockstiegel, B. A. Mazin, H. G. Leduc, S. Chaudhuri, D. P. Pappas, and K. D. Irwin, J. Low Temp. Phys. 176, 136 (2014).
[gaothesis] J. Gao, Ph.D. thesis, Caltech, 2008.
[STJ] D. D. E. Martin, P. Verhoeve, A. Peacock, A. G. Kozorezov, J. K. Wigmore, H. Rogalla, and R. Venn, Appl. Phys. Lett. 88, 123510 (2006).
[Fano] U. Fano, Phys. Rev. 72, 26 (1947).
[Marsden] D. Marsden, B. A. Mazin, B. Bumble, S. Meeker, K. O'Brien, S. McHugh, M. Strader, and E. Langman, Proc. SPIE 8453, 84530B (2012).
[Marsden2] B. A. Mazin, S. R. Meeker, M. J. Strader, P. Szypryt, D. Marsden, J. C. van Eyken, G. E. Duggan, A. B. Walter, G.
Ulbricht, M. Johnson, B. Bumble, K. O'Brien, and C. Stoughton, Publications of the Astronomical Society of the Pacific 125, 1348 (2013).
[Fyhrie] A. Fyhrie, C. McKenney, J. Glenn, H. G. LeDuc, J. Gao, P. Day, and J. Zmuidzinas, Proc. SPIE 9914, 99142B (2016).
[Ulbricht] G. Ulbricht, B. A. Mazin, P. Szypryt, A. B. Walter, C. Bockstiegel, and B. Bumble, Appl. Phys. Lett. 106, 251103 (2015).
[Valkonen] E. Valkonen, C. Ribbing, and J. Sundgren, Applied Optics 25, 3624 (1986).
[Lita2] A. Lita, B. Calkins, L. Pellouchoud, A. Miller, and S. Nam, Proc. of SPIE 7681, 76810D-1 (2010).
[Miller2] A. Miller, A. Lita, B. Calkins, I. Vayshenker, S. Gruber, and S. Nam, Optics Express 19, 9102 (2011).
[Dauler2] E. A. Dauler, M. E. Grein, A. J. Kerman, F. Marsili, S. Miki, S. W. Nam, M. D. Shaw, H. Terai, V. B. Verma, and T. Yamashita, Opt. Eng. 53, 081907 (2014).
http://arxiv.org/abs/1702.07993v3
{ "authors": [ "W. Guo", "X. Liu", "Y. Wang", "Q. Wei", "L. F. Wei", "J. Hubmayr", "J. Fowler", "J. Ullom", "L. Vale", "M. R. Vissers", "J. Gao" ], "categories": [ "physics.ins-det", "cond-mat.mes-hall" ], "primary_category": "physics.ins-det", "published": "20170226070800", "title": "Counting Near Infrared Photons with Microwave Kinetic Inductance Detectors" }
[address_wang] Jingyao Wang and Zhisheng Duan are with the Department of Mechanics and Engineering Science, Peking University, Beijing 100871, China (yayale.8@163.com, duanzs@pku.edu.cn). [address_mahmound] Mahmoud Ashour and Constantino Lagoa are with the Department of Electrical Engineering and Computer Science, Pennsylvania State University, University Park, PA 16802, USA (mma240@psu.edu, lagoa@engr.psu.edu). [address_necdet] Necdet Aybat is with the Department of Industrial Engineering, Pennsylvania State University, University Park, PA 16802, USA (nsa10@psu.edu). [address_hao] Hao Che is with the Department of Computer Science and Engineering, University of Texas at Arlington, Arlington, TX 76019, USA (hche@cse.uta.edu). [acknowledgement] This work was partially supported by NSF grants CNS-1329422, CMMI-1635106, FCC-1629625, NNSF of China grants 61673026 and the China Scholarship Council.

This paper considers the optimization-based traffic allocation problem among multiple end points in connectionless networks. The network utility function is modeled as a non-concave function, since this best describes the quality of service perceived by users with inelastic applications, such as video and audio streaming. However, the resulting non-convex optimization problem is challenging and requires new analysis and solution techniques. To overcome these challenges, we first propose a hierarchy of problems whose optimal values converge to the optimal value of the non-convex optimization problem as the number of moments tends to infinity. From this hierarchy of problems, we obtain a convex relaxation of the original non-convex optimization problem by considering truncated moment sequences. For solving the convex relaxation, we propose a fully distributed iterative algorithm, which enables each node to adjust its data allocation/rate adaptation among any given set of next hops solely based on information from the neighboring nodes. Moreover, the proposed traffic allocation algorithm converges to the optimal value of the convex relaxation at a O(1/K) rate, where K is the iteration counter, with a bounded optimality gap. At the end of this paper, we perform numerical simulations to demonstrate the soundness of the developed algorithm.

§ INTRODUCTION

Applications and services supported by modern communication networks have diverse requirements, e.g., high throughput and low latency. Traffic engineering (TE) has long been used to optimize the utilization of the limited network resources so that such requirements are fulfilled. This entails developing data rate allocation algorithms and congestion control protocols capable of maximizing a given network utility subject to network resource constraints <cit.>. Many problems of recent interest arising in diverse fields can be cast as an optimization problem, and network utility maximization (NUM) is no different. In large-scale networks, the size of the optimization problem rapidly increases as the number of nodes and links increases. This stimulates the necessity of developing decentralized control algorithms capable of decomposing the high-dimensional problem into separate moderate-size subproblems that can be solved independently and locally at various network nodes. The main idea behind such decentralized control algorithms is to distribute the computations required for the solution of the optimization problem among various nodes <cit.>-<cit.>.
This approach exploits local information available at each node. Nevertheless, information exchange among different nodes is inevitable since distinct data flows share the same network resources. Therefore, distributed optimization approaches not only aim at decomposing the problem, but also at minimizing the communication overhead.

In the benchmark work by Kelly et al. <cit.>, the optimization of the utility of a large-scale broadband network with limited bandwidth resources is considered. The authors propose two classes of rate control algorithms by casting the NUM problem in both primal and dual forms. In <cit.>, a family of decentralized sending-rate control laws is proposed to steer the traffic allocation to an optimal operating point while avoiding congestion. A non-linear control theoretic approach is employed in <cit.> to derive adaptation laws that enable each node to independently distribute its traffic optimally among any given set of next hops. More recently, reference <cit.> considers the NUM problem, derives its dual problem, and uses a distributed gradient-based approach for its solution. A similar approach appears in <cit.>. In spite of the existence of a relatively dense literature on NUM, most available results consider only the optimization of concave utility functions. However, it has been shown that the reward experienced by the users of real-time applications, such as video and audio streaming, cannot be accurately modeled using concave functions. Reference <cit.> shows that the video quality perceived by users on a mobile device is a non-decreasing and step-like function of the data rate, because users have almost the same quality of experience at 3 Mbps and 1 Mbps <cit.>. This observation motivates considering the optimization of non-concave network utility functions, which constitutes a main focus of this paper.

Non-concave NUM is a non-convex optimization problem; hence, it is difficult to solve. Nevertheless, there exist some attempts in the literature at deriving algorithms that provide near-optimal solutions. Reference <cit.> develops a centralized algorithm that solves the NUM problem with polynomial utilities. Reference <cit.> determines the conditions under which the standard distributed dual-based algorithm can still converge to the global optimal solution with non-concave utilities.

This paper develops a distributed iterative algorithm for the optimization of a generalized class of non-concave network utility functions that capture a wide variety of real-world applications. In particular, we focus on connectionless networks, where each node is required to distribute its traffic among a set of next hops without prior arrangement so that the network utility is maximized. We handle the challenge posed by the non-convexity of the optimization problem by developing a sequence of convex relaxations whose solutions converge to that of the original problem. We use results on polynomial optimization and moment sequences to derive the convex relaxations <cit.>. Furthermore, we propose an iterative primal-dual algorithm <cit.> that enables each node to distribute its traffic among the set of next hops. We emphasize the distributed nature of the algorithm, where each node uses its local information and need not communicate with any other nodes except its direct neighbors.

§ NOTATION

Throughout this paper, the traffic flows are assumed to be described by a fluid flow model, and the only resource constraint taken into account is link bandwidth.
In the remainder of this paper, call and flow will be used interchangeably. Let 𝒩 denote the set of nodes in the network, and ℒ⊂𝒩×𝒩 denote the set of links connecting particular pairs of nodes. We assume that each link l ∈ℒ has a finite capacity c_l>0. Moreover, let 𝒮≜{ s_1,s_2,…,s_n} and 𝒟≜{ d_1,d_2,…,d_n} denote respectively the set of source nodes and the set of destination nodes contained in 𝒩 such that 𝒮∩𝒟= ∅. The intended destination for each source node s_i is d_i for i∈ℐ≜{1,…,n}, i.e., without loss of generality, we assume that there is a one-to-one correspondence between 𝒮 and 𝒟, and ℐ denotes the set of different flow (call) types in the network. Given source node s∈𝒮, let ℒ_s denote the set of links connected to it. Let the sending data rate through link l∈ℒ_s be x_s,l^out, and all such sending data rates be 𝐱_s^out≜ [x_s,l^out]_l∈ℒ_s. Let the aggregate sending data rate of s∈𝒮 be denoted by r_s≜∑_l∈ℒ_s x_s,l^out. Also, let ℬ≜𝒩∖(𝒮∪𝒟)= {b_1,b_2,…,b_m} denote the set of forwarding nodes contained in 𝒩. Given b∈ℬ, let ℐ_b be the set of flows visiting node b, and ℒ_b⊆ℒ denote the set of links connected to it. Let ℒ_b,i^out⊆ℒ_b denote the set of outgoing links from b associated with calls (flows) of type i∈ℐ_b. Similarly, let ℒ_b,i^in⊂ℒ_b denote the set of incoming links to b associated with calls (flows) of type i ∈ℐ_b. Furthermore, given b∈ℬ, for each i∈ℐ_b and l∈ℒ_b,i^out, let x_i,b,l^out denote the data rate of call type i ∈ℐ_b, associated with s_i and d_i, forwarded from node b through link l∈ℒ_b,i^out. The above notation is exemplified in Fig. 1 for the case of allocating flows associated with two source nodes, s_1 and s_2, and two destination nodes, d_1 and d_2. Given b∈ℬ and l∈ℒ_b, let ℐ_b,l^in⊂ℐ be the set of call types forwarded to node b through link l, and ℐ_b,l^out⊆ℐ_b be the set of call types forwarded from node b through link l. Moreover, given node b∈ℬ and link l∈ℒ_b, let e_l(b) denote the node adjacent to b through link l. We summarize all the notation for the communication network in Table I for the convenience of the reader. Now, given node b∈ℬ, let the vector containing all flow rates departing from node b through link l∈ℒ_b be denoted by 𝐱_b,l^out≜ [x_i,b,l^out]_i∈ℐ_b,l^out∈R_+^|ℐ_b,l^out|, where |·| denotes the cardinality of a set. Given node b∈ℬ and l∈ℒ_b, let 1_b,l∈R^1× |ℐ_b,l^out| be the row vector with all elements equal to 1. In a similar way, let δ_b,l∈R^1× |ℐ_b,l^in| be the row vector with all elements equal to 1 if link l is bidirectional, and 0 otherwise. Also, let ‖·‖ denote the Euclidean norm. Given a convex set 𝒜, let 𝐼_𝒜(·) denote the indicator function of 𝒜, i.e., 𝐼_𝒜(ω)=0 for ω∈𝒜 and 𝐼_𝒜(ω)=+∞ otherwise, and let 𝑃_𝒜(ω)≜argmin{‖υ-ω‖: υ∈𝒜} denote the projection onto 𝒜. Given a closed convex set 𝒜, we define the distance function as d_𝒜(ω)≜‖𝑃_𝒜(ω)-ω‖. Also, 𝐈_n is the n× n identity matrix.
§ PROBLEM FORMULATION
Consider a communication network consisting of a set of source nodes 𝒮. Each source node s∈𝒮 has a local utility function U_s(r_s):R_+→R_+ of its sending data rate r_s.
For a fixed order ℓ >0, the utility function is defined as a general non-concave polynomial-like function of the form U_s(r_s)≜∑_j=0^ℓ p_s,j (r_s)^j/ℓ. This form of objective function is flexible enough to approximate a wide variety of functions arising in practical applications, such as the step functions of the video streaming case <cit.>. The objective of this paper is to design a data rate allocation algorithm for the communication network such that the utilization of resources is maximized, while satisfying the network resource constraints. The network resource constraints considered in this paper include link capacity constraints, Minimum Rate Guaranteed and Upper Bounded Rate Service (MRGUBS) requirements, and flow conservation constraints through nodes. More precisely, for any link l∈ℒ, the aggregated flows going through this link should not exceed the link capacity. For example, in Fig. 1, the bidirectional link l_3 is shared by flows belonging to two source nodes. The data rates x_1,b_2,l_3^out and x_2,b_3,l_3^out going through this link should satisfy x_1,b_2,l_3^out + x_2,b_3,l_3^out≤ c_l_3. For the unidirectional link l_2, node b_2 forwards data rate x_2,b_2,l_2^out through this link. Then, x_2,b_2,l_2^out is upper bounded by c_l_2. Recall that flows of type i∈ℐ are associated with the source/destination pair s_i/d_i. For a fixed link l∈ℒ_s_i, the corresponding data rate x_i,l^out is determined at source node s_i ∈𝒮, and multiple paths are available for transporting these flows. More precisely, each node on these paths divides incoming traffic among the available links, striving to conserve the flows belonging to each source node (i.e., aiming at no losses) and to avoid link congestion. In Fig. 1, node b_3 tries to satisfy x_1,b_2,l_3^out=x_1,b_3,l_4^out+x_1,b_3,l_7^out. Finally, the flows belonging to each source node s∈𝒮 are assumed to be of the MRGUBS category, i.e., for some 0< ξ_s<ζ_s and s∈𝒮, ξ_s≤ r_s≤ζ_s. Now, considering the above constraints and assumptions, we can formulate the problem of optimal traffic allocation as follows: maximize ∑_s∈𝒮 U_s(r_s), subject to the network capacity constraints [Note that the formulation in this paper allows for the existence of bidirectional links.] ∑_i ∈ℐ_b,l^out x_i,b,l^out+∑_i ∈ℐ_b,l^in x_i,e_l(b),l^out≤ c_l, l ∈ℒ_b, b ∈ℬ, the flow conservation constraints at each node ∑_l ∈ℒ_b,i^in x_i,e_l(b),l^out- ∑_l̃∈ℒ_b,i^out x_i,b,l̃^out=0, i ∈ℐ_b, b ∈ℬ, the non-negativity constraints on the forwarded data rates x_i,b,l^out≥ 0, i∈ℐ_b,l^out, l∈ℒ_b, b∈ℬ, and the MRGUBS requirements ( 𝐱_s^out, r_s)∈𝒳_s, s∈𝒮, where the set 𝒳_s is defined as 𝒳_s≜{( 𝐱_s^out, r_s)∈R_+^|ℒ_s|×R_+: ξ_s ≤ r_s≤ζ_s, r_s= ∑_l∈ℒ_s x_s,l^out}. Most literature in the context of NUM considers maximizing concave diminishing functions. However, modern communication networks are dominated by various inelastic applications, such as internet video and audio streaming. Users' satisfaction with these applications cannot be modeled with concave functions; it is better described by non-concave functions. For instance, the utility for voice applications is a sigmoidal function <cit.>. Thus, we consider the users' perceived quality of service and model the utility function as a general class of non-concave polynomial functions. Moreover, the challenges of attempting to solve the resulting traffic allocation problem (<ref>) are two-fold. First, the optimization problem clearly constitutes a non-convex problem since its objective function is non-concave.
Second, global information on fast timescale events, as required in the above formulation, is not generally available. The latter fact stimulates the necessity of developing a distributed algorithm that converges to the optimal data rate allocation of the non-convex NUM problem.
§ MAIN RESULTS
In this section, we present our approach to overcoming the challenges posed by the non-convexity of the optimization problem. In particular, we first present a convex relaxation of the non-convex NUM problem (<ref>). This convex relaxation is chosen from a hierarchy of optimization problems whose optimal value converges to the optimal value of problem (<ref>) as the number of moments tends to infinity. For solving the convex relaxation problem, we propose a distributed primal-dual algorithm (DPDA), which enables all nodes to update their data rate allocation solely using immediate local information. A salient feature of the proposed algorithm is that the iterate sequence converges to the optimal solution at a O(1/K) rate, where K is the iteration counter, with bounded suboptimality.
§.§ NUM convex relaxation
The non-convexity of the optimization problem (<ref>) makes the traffic allocation problem challenging to analyze and solve. However, the following proposition provides a hierarchy of optimization problems whose optimal value converges to the optimal value of the non-convex problem (<ref>). For solving the traffic allocation problem, we choose a convex problem from this hierarchy by truncating the moment sequence at a finite order. This proposition is one of the main results of this paper. The solution of the following optimization problem converges to the solution of the non-convex NUM problem (<ref>) with non-concave user utility functions of the form (<ref>) as the positive parameter α→∞. Moreover, problem (<ref>) is convex if α≤ℓ. maximize_𝐱 ∑_s ∈𝒮𝐩_s^T𝐦_s subject to m_s,0=1, s∈𝒮, 𝐌(0, α,𝐦_s)≽ 0, s∈𝒮, β_s𝐌(0, α-2,𝐦_s)- 𝐌(2, α,𝐦_s) ≽ 0, m_s,j≤ (r_s)^j/ℓ, j∈{1,…,α}, s ∈𝒮, x_s,l^out≤ c_l, l ∈ℒ_s, s∈𝒮, 1_b,l𝐱_b,l^out+δ_b,l𝐱_e_l(b),l^out≤ c_l, l ∈ℒ_b, b ∈ℬ, B 𝐱=0, ( 𝐱_s^out, r_s)∈𝒳_s, s∈𝒮, 𝐱_b,l^out≽ 0, l∈ℒ_b, b∈ℬ. The objective function is a linear function of the variables 𝐦_s=[m_s,j]_j ∈{0,…,α} with parameters 𝐩_s=[p_s,j]_j ∈{0,…,α}. The decision variable 𝐱 of problem (<ref>) is a vector consisting of the data rates x_s,l^out, r_s and 𝐦_s for each s∈𝒮, and the sending data rates 𝐱_b,l^out, b∈ℬ for each l∈ℒ. More precisely, the dimension of the vector 𝐱 is ∑_s∈𝒮 (|ℒ_s |+α+2)+∑_b∈ℬ∑_i∈ℐ_b|ℒ_b,i^out|. In the constraints, B∈R^(∑_b∈ℬ |ℐ_b|)×(∑_s∈𝒮 (|ℒ_s |+α+2)+∑_b∈ℬ∑_i∈ℐ_b|ℒ_b,i^out| ) denotes the edge-node-like incidence matrix, i.e., the entry B_(s,b,l),ω, corresponding to the flow-node-link triplet (s,b,l)∈𝒮×ℬ×ℒ and ω∈𝐱, is equal to 1 if the data rate ω of flows belonging to source node s is forwarded from node b through link l, -1 if the data rate ω is received at node b, and 0 otherwise. β_s is a known upper bound on the aggregate data rate of source s∈𝒮, and the moment matrices 𝐌∈R^(h+1)×(h+1) are of the form 𝐌(k,k+2h,𝐦_s) = [[ m_s,k m_s,k+1 … m_s,k+h; m_s,k+1 ⋱ ⋱ m_s,k+h+1; ⋮ ⋱ ⋱ ⋮; m_s,k+h … … m_s,k+2h ]]. The proof is shown in Appendix A. Hereafter, we use α = ℓ. It is worth mentioning that the result of Proposition 1 holds for even order ℓ; similar results can be derived for odd ℓ, but are omitted for brevity. The proposed problem (<ref>) constitutes a convex optimization problem, because it maximizes a sum of linear functions subject to convex constraints.
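To make the moment-matrix constraints of this relaxation concrete, the following minimal Python sketch (all numbers are illustrative assumptions, not values from the paper) builds the Hankel matrix 𝐌(k,k+2h,𝐦_s) for the moment sequence of a Dirac measure and numerically verifies the two linear matrix inequalities of the relaxation.

import numpy as np

def moment_matrix(k, h, m):
    # Hankel moment matrix M(k, k+2h, m) with (a, b) entry m_{k+a+b}
    return np.array([[m[k + a + b] for b in range(h + 1)] for a in range(h + 1)])

# Moment sequence of a Dirac measure at y, i.e., m_j = y**j, with ell = 6
ell, y, beta_s = 6, 0.8, 2.0   # beta_s: assumed upper bound parameter
m = np.array([y ** j for j in range(ell + 1)])

M_full = moment_matrix(0, ell // 2, m)       # M(0, ell, m_s)
M_low = moment_matrix(0, ell // 2 - 1, m)    # M(0, ell - 2, m_s)
M_shift = moment_matrix(2, ell // 2 - 1, m)  # M(2, ell, m_s)

# Both LMI constraints of the relaxation hold (up to numerical tolerance)
print(np.linalg.eigvalsh(M_full).min() >= -1e-9)
print(np.linalg.eigvalsh(beta_s * M_low - M_shift).min() >= -1e-9)

For a Dirac measure both matrices are rank-one and positive semidefinite, so both checks print True; a moment sequence violating either inequality corresponds to no admissible measure.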
Since problem (<ref>) is convex, it can be easily solved if global information is available. Nevertheless, the objective of this paper is to solve this problem in a distributed fashion that leverages per-hop information available at each node. Before moving on, we introduce some notation that renders the formulation of (<ref>) conveniently compact. For every s ∈𝒮, let the set 𝒜_s be defined as 𝒜_s={(𝐱_s^out, 𝐦_s,r_s)∈R_+^|ℒ_s|×R^ℓ+1×R_+: m_s,0=1, 𝐌(0, ℓ, 𝐦_s) ≽ 0, β_s𝐌(0, ℓ-2, 𝐦_s)-𝐌(2,ℓ, 𝐦_s) ≽ 0, x_s,l^out≤ c_l, l∈ℒ_s, m_s,j≤ (r_s)^j/ℓ, j∈{1,…,ℓ}, ( 𝐱_s^out, r_s)∈𝒳_s}.
§.§ Algorithm DPDA
The constraint set of the convex relaxation (<ref>) consists of local constraints (e.g., capacity constraints) and global constraints (e.g., flow conservation constraints at the nodes). The presence of global constraints makes it difficult to solve problem (<ref>) in a distributed fashion. However, the primal-dual method proposed by Chambolle and Pock in <cit.> for solving convex-concave saddle point problems makes it possible. This algorithm can be adapted to solve the multi-agent consensus optimization problem as discussed in <cit.>. We likewise use the distributed primal-dual algorithm in <cit.> to solve our traffic allocation problem (<ref>). We present the resulting iterative algorithm, i.e., DPDA, whose iterate sequence converges to the solution of (<ref>). The details of developing DPDA can be found in Appendix B. The suboptimality and feasibility of the DPDA iterate sequence can be bounded as in the following theorem. Consider the communication network and the convex optimization problem (<ref>). Let d_s>0, s∈𝒮 and d_i,b,l>0, i∈ℐ_b,l^out, l∈ℒ_b, b∈ℬ be given (sufficiently large) constants. Recall that the decision variable 𝐱 of problem (<ref>) is a vector consisting of the data rates 𝐱_s,l^out, r_s and 𝐦_s for each s∈𝒮, and the sending data rates 𝐱_b,l^out, b∈ℬ for each l∈ℒ. Also recall that the vector variables λ,θ are the dual variables associated with the capacity constraints and the flow conservation constraints at the nodes, respectively. Let (𝐱^⋆,λ^⋆,θ^⋆) be an arbitrary saddle point of the Lagrange function of problem (<ref>), and {𝐱^k}_k≥ 0 be the iterate sequence generated by Algorithm DPDA, initialized from an arbitrary 𝐱^0 and [λ_b,l^0]_l∈ℒ_b,b∈ℬ=0. Let the primal-dual step sizes [τ_s]_s∈𝒮, [τ_i,b,l]_i∈ℐ_b,l^out,l∈ℒ_b,b∈ℬ and γ be positive constants satisfying the following inequalities: 1/τ_s-γ(4+d_s ) ≥ 0, for all s∈𝒮, and 1/κ_b,l(1/τ_i,b,l-γ(4+d_i,b,l ) )≥ m_l+1, for all i∈ℐ_b,l^out, l∈ℒ_b, b∈𝒩, where m_l is the total number of sources using link l to transport flows. Denote the average of the iterates by 𝐱̅^K≜1/K∑_k=1^K 𝐱^k, where K≥ 1. Then, {𝐱̅^K} converges to the maximum of the utility function of problem (<ref>) subject to the resource allocation constraints. In particular, the averaged iterate sequence asymptotically becomes feasible, i.e., ‖θ^⋆‖‖B𝐱̅^K‖+∑_b∈ℬ∑_l∈ℒ_bλ^⋆_b,l h(𝐱̅_b,l^out,𝐱̅_e_l(b),l^out) ≤Θ_1/K, ∀ K≥ 1. It also asymptotically maximizes the utility function of problem (<ref>), i.e., | ∑_s∈𝒮𝐩_s^T(𝐦̅_s-𝐦_s^⋆)| ≤Θ_1/K, ∀ K≥ 1, where the notation h(𝐱̅_b,l^out,𝐱̅_e_l(b),l^out) and Θ_1 are defined in Appendix C. The proof is presented in Appendix C. Algorithm DPDA is a fully distributed traffic allocation algorithm. This can be verified by going through the implementation procedure. The step-size parameters are decided before implementing the algorithm.
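Since the step-size conditions of Theorem 2 involve only locally known quantities, each node can satisfy them by construction. The following sketch illustrates this for one source s and one triple (i,b,l); the values of γ, d_s, d_i,b,l, κ_b,l, and m_l are assumptions chosen for illustration.

# Local step-size selection; the 0.99 safety factor guards against rounding
gamma = 0.1            # global dual step size, fixed before the run
d_s, d_ibl = 2.0, 2.0  # the "sufficiently large" constants of Theorem 2
m_l = 3                # number of sources using link l (known at the link)
kappa_bl = 1.0         # dual step size for the link price lambda_{b,l}

# Source condition: 1/tau_s - gamma*(4 + d_s) >= 0
tau_s = 0.99 / (gamma * (4.0 + d_s))

# Link condition: (1/kappa_bl)*(1/tau_ibl - gamma*(4 + d_ibl)) >= m_l + 1
tau_ibl = 0.99 / (gamma * (4.0 + d_ibl) + kappa_bl * (m_l + 1))

assert 1.0 / tau_s - gamma * (4.0 + d_s) >= 0.0
assert (1.0 / kappa_bl) * (1.0 / tau_ibl - gamma * (4.0 + d_ibl)) >= m_l + 1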
Theorem <ref> shows that these parameters need only satisfy conditions (<ref>) and (<ref>), both of which are local conditions. Thus, choosing the parameters requires no global information. In the first step, the variables z_s,l, l∈ℒ_s, s∈𝒮 and z_i,b,l, i ∈ℐ_b,l^out, l ∈ℒ_b, b∈ℬ are local variables introduced for each source node and each forwarding node, respectively. It is worth noting that assigning the initial values of x_s,l^out, l∈ℒ_s, s∈𝒮 and x_i,b,l^out, i ∈ℐ_b,l^out, l ∈ℒ_b, b∈ℬ to these introduced variables is also a local operation. For the first iteration, i.e., K=1, in steps 3 and 4, DPDA enables all nodes to update their sending data rates in parallel. Each node uses only immediate information from its neighboring nodes to perform all computations. In step 5, the link price λ_b,l^k+1, l ∈ℒ_b, b∈ℬ is updated with the new local data rate allocation. This step can be performed at either endpoint of each link, using only local information. Step 6 updates the introduced local variables with the new local data rate allocation. The iterative procedure continues until the iterate sequence converges to the optimal solution. It follows from inequalities (<ref>) and (<ref>) that DPDA converges at the rate 𝑂(1/K), where K is the number of iterations. If problem (<ref>) has a unique solution, then the sequence of sample averages converges to that solution.
§ SIMULATION RESULTS
In this section, we present some simulation results which exemplify the behavior of the proposed algorithm, i.e., Algorithm DPDA. The simulations show that the final data rate allocation results in a value of the utility function barely distinguishable from the optimal one. We consider the network model shown in Fig. 2, where we also show all the links' bandwidths and the source-destination pairs. The network model allows for multiple paths for the flows belonging to each source node. We consider a total of 8 different combinations of source/destination nodes. Moreover, we list the prescribed next hops for all forwarding nodes b_i, i=1,…,8, in Table II. For example, the upper left cell means that node b_1 forwards the data of source s_1 to nodes b_2 and b_7. The objective throughout the simulation is to maximize the sum utility of the source nodes, where source s_i, i=1,…,8, has the utility function given by U_s_i(r_s_i) = 1.763(r_s_i)^1/6 -20.718(r_s_i)^2/6 +88.568(r_s_i)^3/6-169.102(r_s_i)^4/6 +145.167(r_s_i)^5/6-44.677(r_s_i)^6/6. U_s_i(r_s_i) is a step-like, non-concave, polynomial-like function. We choose to optimize a step-like non-concave function because it closely describes the video quality perceived by a user in a video streaming application <cit.>. Moreover, we obtain the resource constraint information from Fig. 2 and Table II, and impose the lower and upper bounds on the aggregate data rate of each user as ξ_s_i=0 and ζ_s_i=10, i=1,…,8, respectively. Given the network topology shown in Fig. 2, we choose the step-size parameters to satisfy the convergence conditions set forth by Theorem 2. All step-size parameters are chosen locally using local information. Fig. 3 shows the performance of Algorithm DPDA for these step-size parameters. It can be seen that the utility function converges to the optimal value, which is obtained by using a genetic algorithm while assuming the availability of global information.
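As a small illustration of the utility being optimized in these simulations, the following sketch evaluates U_{s_i} on a grid of rates in the MRGUBS range [0,10]; the coefficients are exactly those of the utility defined above, while the sample grid is arbitrary.

import numpy as np

# Coefficients p_{s,j} of the simulation utility, ordered j = 1, ..., 6
p = [1.763, -20.718, 88.568, -169.102, 145.167, -44.677]

def U(r):
    # U(r) = sum_j p_j * r**(j/6): the step-like utility used in Fig. 3
    return sum(p_j * r ** (j / 6.0) for j, p_j in enumerate(p, start=1))

for r in np.linspace(0.0, 10.0, 11):
    print(f"r = {r:5.2f}, U(r) = {U(r):8.4f}")

Plotting these values should exhibit the step-like shape that motivates the non-concave formulation.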
Although all the computations of DPDA are performed locally at each node, it attains almost the same network utility as that obtained by a centralized optimization algorithm. This implies that the iterate sequence of Algorithm DPDA can indeed converge to the optimal traffic allocation. Fig. 4 shows representative data rate trajectories for the MRGUBS flows belonging to source nodes s_3 and s_4. Both data rate sequences are generated by DPDA. It can be seen from Fig. 4 that the MRGUBS requirements are satisfied.
§ CONCLUSIONS AND DIRECTIONS FOR FUTURE RESEARCH
In this paper, we proposed a distributed traffic allocation algorithm, i.e., DPDA, to allow distributed optimal traffic engineering in a connectionless autonomous network. DPDA is fully distributed and converges at a 𝑂(1/K) rate, where K is the number of iterations. Moreover, numerical simulation results showed that the behavior of DPDA mimics the optimal traffic distribution. The results presented in this paper are just the first step towards the implementation of an optimal fast distributed algorithm for traffic engineering. There are many issues that need further consideration. In particular, effort should be put into testing the implementation in large-scale network settings.
§ APPENDIX A. PRELIMINARY RESULTS AND PROOF OF PROPOSITION 1
In this Appendix, we include some results from real analysis and the main steps of the proof of Proposition 1.
§.§ Preliminary results
In this subsection, we first recall some results from real analysis which are fundamental for the traffic allocation in connectionless networks. Let f be an arbitrary real-valued function, ℱ be a compact set, not necessarily convex, and μ be a probability measure. Then, inf_x{ f(x) : x ∈ℱ} = inf_μ{∫ f dμ : supp(μ) ⊂ℱ}, where supp(μ) denotes the support of the measure μ. For the sake of completeness, we briefly mention the main steps of the proof of this well-known fact. Let x^⋆∈ℱ be a minimizer of f such that f(x)≥ f(x^⋆) for every x ∈ℱ. Then, ∫ f dμ≥ f(x^⋆) holds for every probability measure μ with supp(μ) ⊂ℱ. That is to say, the following inequality holds: inf_x{ f(x) : x ∈ℱ}≤inf_μ{∫ f dμ : supp(μ) ⊂ℱ}. On the other hand, we have ∫ f dδ_x^⋆= f(x^⋆), where δ_x^⋆ is the Dirac measure of x^⋆ on the set ℱ. Since δ_x^⋆ is a particular probability measure with supp(δ_x^⋆) ⊂ℱ and ∫ f dδ_x^⋆= f(x^⋆), we have inf_x{ f(x) : x ∈ℱ}≥inf_μ{∫ f dμ : supp(μ) ⊂ℱ}. In conclusion, the result of Lemma <ref> is established by (<ref>) and (<ref>). We proceed with the following theorem <cit.> that provides necessary and sufficient conditions for the existence of Borel measures whose support is included in bounded symmetric intervals of the real line. Given a sequence 𝐭≜{t_j}_j=1^ℓ and a scalar ϵ>0, there exists a Borel measure μ(·) with support contained in 𝒴≐ [-ϵ,ϵ] such that μ(𝒴)=1 and t_j=E_μ[y^j]=∫_𝒴 y^jμ(dy) if and only if * when ℓ=2k+1 (odd case), the following holds: ϵ𝐌(0,2k,𝐭) ≽𝐌(1,2k+1,𝐭) and 𝐌(1,2k+1,𝐭) ≽ -ϵ𝐌(0,2k,𝐭); * when ℓ=2k (even case), the following holds: 𝐌(0,2k,𝐭) ≽ 0 and ϵ^2𝐌(0,2k-2,𝐭) ≽𝐌(2,2k,𝐭), where 𝐌(k,k+2h,𝐭) ∈ℝ^(h+1) × (h+1) is a Hankel matrix of the form 𝐌(k,k+2h,𝐭) = [[ t_k t_k+1 … t_k+h; t_k+1 ⋱ ⋱ t_k+h+1; ⋮ ⋱ ⋱ ⋮; t_k+h … … t_k+2h ]] and t_0=1. Direct application of Theorem III.2.3 and Theorem III.2.4 in <cit.>.
§.§ Proof of Proposition 1
We note that the problem can be converted into a polynomial optimization form with the change of variables y_s=(r_s)^(1/ℓ).
The equivalent problem is stated as follows: maximize_𝐱 ∑_s∈𝒮∑_j=0^ℓ p_s,j(y_s)^j subject to y_s≤ (r_s)^(1/ℓ), s ∈𝒮, ∑_i ∈ℐ_b,l^out x_i,b,l^out+∑_i ∈ℐ_b,l^in x_i,e_l(b),l^out≤ c_l, l ∈ℒ_b, b ∈ℬ, ∑_l ∈ℒ_b,i^out x_i,b,l^out- ∑_l̃∈ℒ_b x_i,e_l̃(b),l̃^out=0, i ∈ℐ_b, b ∈ℬ, ( 𝐱_s^out, r_s)∈𝒳_s, s ∈𝒮, x_i,b,l^out≥ 0, i∈ℐ_b,l^out, l∈ℒ_b, b∈ℬ. Note that the feasible set in (<ref>) is convex. However, the equivalent problem is still non-convex because of the non-concavity of the utility function. Then, instead of working with y_s, we optimize over moments of probability distributions in the space of y_s. More precisely, suppose y_s is a random variable and denote by m_s,j the j-th moment of y_s for some probability measure μ, i.e., m_s,j=E_μ[y_s^j]. Now, we transform problem (<ref>) into an optimization problem over the space of probability measures of y_s with support contained in the feasible set of (<ref>). * Based on Lemma 1, the objective function becomes ∫∑_s∈𝒮∑_j=0^ℓ p_s,j(y_s)^j dμ_i = ∑_s ∈𝒮𝐩_s^T𝐦_s. * The first three constraints in (<ref>) are justified by Theorem 1. * We use the set of constraints m_s,j≤ r_s^j/ℓ, j ∈{1,…,α} to approximate the constraint y_s≤ (r_s)^(1/ℓ). * The left-hand side of each constraint ∑_i ∈ℐ_b,l^out x_i,b,l^out+∑_i ∈ℐ_b,l^in x_i,e_l(b),l^out≤ c_l for l∈ℒ_b, b∈ℬ, is written as 1_b,l𝐱_b,l^out+δ_b,l𝐱_e_l(b),l^out. In a similar way, we rewrite the constraints ∑_l ∈ℒ_b,i^out x_i,b,l^out- ∑_l̃∈ℒ_b,i^out x_i,e_l̃(b),l̃^out=0, i ∈ℐ_b, b ∈ℬ in matrix form, i.e., B𝐱=0. In conclusion, Lemma 1, Theorem 1 and (<ref>) establish the result of Proposition 1.
§ APPENDIX B. DERIVATION OF DPDA
The constraint set of the convex relaxation (<ref>) consists of local constraints (e.g., capacity constraints) and global constraints (e.g., flow conservation constraints at the nodes). The presence of global constraints makes it difficult to solve problem (<ref>) in a distributed fashion. However, the primal-dual method proposed by Chambolle and Pock in <cit.> for solving convex-concave saddle point problems makes it possible. This algorithm can be adapted to solve the multi-agent consensus optimization problem as discussed in <cit.>. We likewise use the distributed primal-dual algorithm in <cit.> to solve our traffic allocation problem (<ref>). This Appendix develops the distributed algorithm that converges to the solution of (<ref>). The optimization problem (<ref>) can be compactly stated as maximize_𝐱 ∑_s ∈𝒮𝐩_s^T𝐦_s subject to 1_b,l𝐱_b,l^out+δ_b,l𝐱_e_l(b),l^out-c_l≤ 0, l∈ℒ_b, b ∈ℬ, B𝐱=0, (𝐱_s^out, 𝐦_s, r_s)∈𝒜_s, s∈𝒮, 𝐱_b,l^out≽ 0, l∈ℒ_b, b∈ℬ, where 𝒜_s is the set of local constraints for each source node s ∈𝒮, as defined in (<ref>). We introduce the convex-concave saddle-point form of the primal problem (<ref>), min_𝐱 max_λ,θ L(𝐱, λ, θ), where L(𝐱, λ, θ) is the Lagrangian function given by L(𝐱, λ, θ) = -∑_s ∈𝒮 (𝐩_s^T𝐦_s -I_𝒜_s(𝐱_s^out, 𝐦_s, r_s))+∑_b∈ℬ∑_l∈ℒ_b I_R_+^|ℐ_b,l^out|(𝐱_b,l^out)-∑_b∈ℬ∑_l∈ℒ_b I_R_+(λ_b,l) +∑_b∈ℬ∑_l∈ℒ_b⟨1_b,l𝐱_b,l^out+δ_b,l𝐱_e_l(b),l^out - c_l, λ_b,l⟩+⟨ B𝐱, θ⟩. Here θ∈R^∑_b∈ℬ |I_b| is the vector of dual variables associated with the flow conservation constraints at the nodes, B𝐱=0. Given l∈ℒ_b and b∈ℬ, the dual variable λ_b,l is introduced for the capacity inequality constraint 1_b,l𝐱_b,l^out+δ_b,l𝐱_e_l(b),l^out≤ c_l.
Moreover, λ=[λ_b,l]_l∈ℒ_b,b∈ℬ. Now, given the initial iterates 𝐱^0, λ^0, θ^0 and parameters γ>0, τ_s>0 for all s∈𝒮, τ_i,b,l>0, κ_b,l>0 for all i∈ℐ_b,l^out, l∈ℒ_b and b∈ℬ, we present the following primal-dual iterations to solve (<ref>): 𝐱^k+1← argmin_𝐱 { -∑_s ∈𝒮 (𝐩_s^T𝐦_s-I_𝒜_s(𝐱_s^out, 𝐦_s, r_s))+∑_b∈ℬ∑_l∈ℒ_b I_R_+^|ℐ_b,l^out|(𝐱_b,l^out) +∑_b∈ℬ∑_l∈ℒ_b⟨1_b,l𝐱_b,l^out+δ_b,l𝐱_e_l(b),l^out - c_l, λ_b,l^k⟩+⟨ B𝐱, θ^k⟩+∑_b∈ℬ∑_l∈ℒ_b∑_i∈ℐ_b,l^out 1/2τ_i,b,l(x_i,b,l^out-x_i,b,l^out,k)^2 + ∑_s ∈𝒮 1/2τ_s( ∑_l∈ℒ_s(x_s,l^out- x_s,l^out,k)^2 + ‖𝐦_s-𝐦_s^k‖_2^2 +(r_s-r_s^k)^2 ) }; λ_b,l^k+1← argmax_λ_b,l { -I_R_+(λ_b,l)+⟨1_b,l(2 𝐱_b,l^out,k+1-𝐱_b,l^out,k)+ δ_b,l(2 𝐱_e_l(b),l^out,k+1-𝐱_e_l(b),l^out,k) - c_l, λ_b,l⟩-1/2κ_b,l(λ_b,l-λ_b,l^k)^2 }, l∈ℒ_b, b∈ℬ; θ^k+1← argmax_θ {⟨ B(2𝐱^k+1-𝐱^k), θ⟩-1/2γ‖θ-θ^k‖_2^2 } =θ^k+γ B (2𝐱^k+1-𝐱^k). Although convergence to the optimal traffic allocation is guaranteed under the primal-dual method, this is still not a distributed algorithm. In fact, solving the optimization problem involved in the primal variable 𝐱^k+1 update rule requires global information about the network due to the presence of the term ⟨ B𝐱, θ^k⟩, which is associated with the flow conservation constraints at the nodes. Moreover, computing the term ⟨1_b,l𝐱_b,l^out+δ_b,l𝐱_e_l(b),l^out - c_l, λ_b,l^k⟩, l∈ℒ_b, b∈ℬ forces neighboring nodes to exchange information, because bidirectional links are allowed to exist in the model. This fact hinders us from directly implementing the primal-dual iterations. Nevertheless, we exploit the structure of the inner product ⟨ B𝐱, θ^k⟩ and note that this term is a summation of local linear functions of the local variables. In addition, the sending data rates of neighboring nodes are local information. These observations indicate that it is possible to develop an optimal decentralized traffic allocation algorithm. Using the recursion in the θ update rule in (<ref>), we can write θ^k+1 as a partial summation of the previous primal iterates 𝐱^k, i.e., θ^k=θ^0+γ∑_n=0^k-1 B (2𝐱^n+1-𝐱^n). Let θ^0 be γ B 𝐱^out,0, 𝐳^0 be 𝐱^0 and 𝐳^k ≜𝐱^k+∑_n=1^k𝐱^n for k≥ 1. Then we get ⟨ B𝐱, θ^k⟩ =γ⟨𝐱^out,B^T B𝐳^k⟩= γ∑_b∈𝒩∑_l ∈ℒ_b∑_i ∈ I_b,l^out x_i,b,l^out( ∑_l̃∈ℒ_b,i^out z_i,b,l̃^k-∑_l̅∈ℒ_b z_i,e_l̅(b),l̅^k). The quadratic optimization for updating λ_b,l^k+1 in (<ref>) reduces to the following projection: λ_b,l^k+1←𝑃_R_+(λ_b,l^k+ κ_b,l(1_b,l(2 𝐱_b,l^out,k+1-𝐱_b,l^out,k) +δ_b,l(2 𝐱_e_l(b),l^out,k+1-𝐱_e_l(b),l^out,k) - c_l)). Substituting (<ref>) and (<ref>) into (<ref>) yields the distributed traffic allocation algorithm shown in Algorithm 1.
§ APPENDIX C. PROOF OF THEOREM 2
In this section, we present the proof of Theorem 2. Due to space limitations, we only prove that if conditions (<ref>) and (<ref>) hold, the following inequality is true: Q(A,B)≜[[ D_τ -A^T -B^T; -A D_κ 0; -B 0 D_γ ]] ≽ 0, where D_κ≜diag([1/κ_b,l]_l∈ℒ_b, b∈ℬ), D_γ≜1/γ I_∑_b∈𝒩 |I_b|, and D_τ≜diag([v_sτ^T, v_bτ^T]^T ), with v_sτ≜ [1/τ_s 1_(|ℒ_s| +ℓ+2)× 1]_s∈𝒮 and v_bτ=[1/τ_i,b,l]_i∈ℐ_b,l^out, l∈ℒ_b, b∈ℬ. Moreover, A≜diag([A_l]_l∈ℒ_b, b∈ℬ), where A_l is a row vector with the same dimension as the vector variable 𝐱, and the i-th entry of A_l equals 1 if the data rate denoted by the i-th element is transported through link l, and 0 otherwise. By the Schur complement lemma, Q(A,B) ≽ 0 holds if and only if [[ D_τ -A^T; -A D_κ ]] -γ[[ B^TB 0; 0 0 ]]≽ 0. Moreover, since D_κ≽ 0, again using the Schur complement lemma, one can conclude that (<ref>) holds if and only if D_τ- γ B^TB- A^TD_κ^-1A ≽ 0.
Denote the matrix B^TB by Ω; we can write Ω as the sum of two matrices, i.e., Ω=diag([ω_i,b,l]_i ∈ℐ_b,l^out, l∈ℒ_b, b∈𝒩)+E, where ω_i,b,l=1 if node b∈𝒮⋃ℬ_e, where ℬ_e is the set of nodes that forward traffic to destination nodes, ω_i,b,l=2 if b∈ℬ⋂ℬ_e^c, where ℬ_e^c is the complement of ℬ_e, and 0 otherwise. Also, all the diagonal elements of the matrix E are equal to 0, and the non-diagonal element E_(i,b_1,l_1),(j,b_2,l_2), corresponding to data rates x_i,b_1,l_1^out∈𝐱 and x_j,b_2,l_2^out∈𝐱, equals 1 if both data rates belong to the same source node and they are forwarded from the same node, i.e., i=j∈ℐ and b_1=b_2∈𝒩, -1 if both data rates belong to the same source node and nodes b_1 and b_2 are neighboring, and 0 otherwise. By the Gershgorin circle theorem <cit.>, we have diag([ω_i,b,l]_i ∈ℐ_b,l^out, l∈ℒ_b, b∈𝒩)+ diag([d_i,b,l]_i ∈ℐ_b,l^out, l∈ℒ_b, b∈𝒩)-E ≽ 0, since d_i,b,l is chosen to be large enough. Therefore, Ω≼ 2diag([ω_i,b,l]_i ∈ℐ_b,l^out, l∈ℒ_b, b∈𝒩)+ diag([d_i,b,l]_i ∈ℐ_b,l^out, l∈ℒ_b, b∈𝒩). Moreover, Ω≼diag([4+d_i,b,l]_i ∈ℐ_b,l^out, l∈ℒ_b, b∈𝒩). Hence, it is sufficient to have D_τ- γdiag([4+d_i,b,l]_i ∈ℐ_b,l^out, l∈ℒ_b, b∈𝒩) - A^TD_κ^-1A ≽ 0, and this condition holds if the inequalities (<ref>) and (<ref>) in the statement of Theorem 2 are true. Let (𝐱^⋆,λ^⋆,θ^⋆) be an arbitrary saddle point of the Lagrange function of problem (<ref>), and {𝐱^k}_k≥ 0 be the iterate sequence generated by Algorithm DPDA, initialized from an arbitrary 𝐱^0 and [λ_b,l^0]_l∈ℒ_b,b∈ℬ=0. Denote the average of the iterates by 𝐱̅^K≜1/K∑_k=1^K 𝐱^k, where K≥ 1. Then, following the proof in <cit.>, {𝐱̅^K} converges to the maximum of the utility function of problem (<ref>) subject to the resource allocation constraints. In particular, the following error bounds hold for all K≥ 1: ‖θ^⋆‖‖B𝐱̅^K‖+∑_b∈ℬ∑_l∈ℒ_bλ^⋆_b,l h(𝐱̅_b,l^out,𝐱̅_e_l(b),l^out) ≤Θ_1/K, |∑_s∈𝒮𝐩_s^T(𝐦̅_s-𝐦_s^⋆)| ≤Θ_1/K, where h(𝐱̅_b,l^out,𝐱̅_e_l(b),l^out) denotes the distance function d_R_-(1_b,l𝐱̅_b,l^out + δ_b,l𝐱̅_e_l(b),l^out-c_l), and Θ_1≜2/γ‖θ^⋆‖^2-γ/2‖B𝐱̅^0‖^2 + ∑_b∈ℬ∑_l∈ℒ_b (∑_i∈ℐ_b,l^out 1/2τ_i,b,l(x_i,b,l^out,⋆-x_i,b,l^out,0)^2+ 1/2κ_b,l (λ^⋆_b,l)^2 ) + ∑_s∈𝒮 1/2τ_s(‖𝐦_s^⋆-𝐦_s^0‖^2+(r_s^⋆-r_s^0)^2+ ∑_l∈ℒ_s(x_s,l^out,⋆-x_s,l^out,0)^2 ).
§ REFERENCES
Kelly_JORS_1998 F. P. Kelly, A. K. Maulloo and D. K. H. Tan, Rate control for communication networks: shadow prices, proportional fairness and stability, Journal of the Operational Research Society, vol. 49, no. 3, pp. 237–252, 1998. lagoa_2004_N C. M. Lagoa, H. Che and B. A. Movsichoff, Adaptive control algorithm for decentralized optimal traffic engineering in the Internet, IEEE/ACM Transactions on Networking, vol. 12, no. 3, pp. 415–428, June 2004. lagoa_hop B. A. Movsichoff, C. M. Lagoa and H. Che, Decentralized optimal traffic engineering in connectionless networks, IEEE Journal on Selected Areas in Communications, vol. 23, no. 2, pp. 293–303, 2005. beck2013optimal A. Beck, A. Nedic, A. Ozdaglar and M. Teboulle, Optimal distributed gradient methods for network resource allocation problems, submitted for publication, 2013. nekouei E. Nekouei, G. Nair and T. Alpcan, Convergence analysis of quantized primal-dual algorithm in quadratic network utility maximization problems, IEEE Conference on Decision and Control (CDC), pp. 2655–2660, 2015. Yin_2015_SC X. Q. Yin, A. Jindal, V. Sekar and B. Sinopoli, A control-theoretic approach for dynamic adaptive video streaming over HTTP, SIGCOMM, London, United Kingdom, pp. 325–338, August 2015. Fazel_CDC_2005 M. Fazel and M.
Chiang, Network utility maximization with nonconcave utilities using sum-of-squares method, Proceedings of the 44th IEEE Conference on Decision and Control, pp. 1867–1874, 2005. Hande_Net_2007 P. Hande, S. Y. Zhang and M. Chiang, Distributed rate allocation for inelastic flows, IEEE/ACM Transactions on Networking, vol. 15, no. 6, pp. 1240–1253, 2007. lasserre J. B. Lasserre, Global optimization with polynomials and the problem of moments, SIAM Journal on Optimization, vol. 11, no. 3, pp. 796–817, 2001. laurent M. Laurent, Sums of squares, moment matrices and optimization over polynomials, in Emerging Applications of Algebraic Geometry, Springer, New York, pp. 157–270, 2009. Aybat_arxiv_2016 N. S. Aybat and E. F. Hamedani, A primal-dual method for conic constrained distributed optimization problems, in Advances in Neural Information Processing Systems 29, edited by D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon and R. Garnett, Curran Associates, Inc., pp. 5049–5057, 2016, http://papers.nips.cc/paper/6242-a-primal-dual-method-for-conic-constrained-distributed-optimization-problems.pd. ozay N. Ozay, C. M. Lagoa and M. Sznaier, Set membership identification of switched linear systems with known number of subsystems, Automatica, vol. 51, pp. 180–191, 2015. krein M. G. Krein and A. A. Nudelman, The Markov Moment Problem and Extremal Problems, Volume 50 of Translations of Mathematical Monographs, American Mathematical Society, Providence, Rhode Island, 1977. Chambolle_mathe_2015 A. Chambolle and T. Pock, On the ergodic convergence rates of a first-order primal-dual algorithm, Mathematical Programming, pp. 1–35, 2015. Golub:1996:MC:248979 G. H. Golub and C. F. Van Loan, Matrix Computations (3rd ed.), Johns Hopkins University Press, Baltimore, MD, USA, 1996.
Obligations with Physical Delivery in a Multi-Layered Financial Network[The author would like to thank Eric Schaanning for his help in calibrating the Eisenberg-Noe network model to the European banking dataset in Example <ref>.] Zachary Feinstein[Zachary Feinstein, ESE, Washington University, St. Louis, MO 63130, USA, zfeinstein@ese.wustl.edu.] Washington University in St. Louis December 30, 2023
This paper provides a general framework for modeling financial contagion in a system with obligations in multiple illiquid assets (e.g., currencies). In so doing, we develop a multi-layered financial network that extends the single network of <cit.>. In particular, we develop a financial contagion model with fire sales that allows institutions to both buy and sell assets to cover their liabilities in the different assets and to act as utility maximizers. We prove that, under standard assumptions and without market impacts, equilibrium portfolio holdings exist and are unique. However, with market impacts, we prove that equilibrium portfolio holdings and market prices exist which clear the multi-layered financial system. In general, though, these clearing solutions are not unique. We extend this result by considering the tâtonnement process to find the unique attained equilibrium. The attained equilibrium need not be continuous with respect to the initial shock; these points of discontinuity match those stresses in which a financial crisis becomes a systemic crisis. We further provide mathematical formulations for payment rules and utility functions satisfying the necessary conditions for these existence and uniqueness results. We demonstrate the value of our model through illustrative numerical case studies. In particular, we study a counterfactual scenario in which Greece re-institutes the drachma, on a dataset from the European Banking Authority. Key words: Systemic risk; financial contagion; fire sales; financial network; tâtonnement process
§ INTRODUCTION
As defined in <cit.>, "financial contagion occurs when the distress of one bank jeopardizes the health of other financial firms." Many recent works on the topic have focused on modeling aspects of the 2007-2009 financial crisis, as that event proved that systemic crises can have terrible costs. However, such contagious events have occurred at other times in the recent past, e.g., the 1997 Asian financial crisis. In that crisis, among others, currency fluctuations between the US dollar and, e.g., the Thai baht and Indonesian rupiah caused the debt-to-income ratios of firms to jump. This caused a positive feedback loop in the currency fluctuations, thus intensifying the contagion.
In fact, <cit.> and references therein showed that foreign currency obligations for banks statistically increase the chance of a banking crisis in a nation. However, in contrast to the financial contagion models of <cit.>, many historical financial crises involved obligations and incomes in multiple currencies (that must be fulfilled in the quoted currency) and illiquidity in the currency markets (see, e.g., <cit.>). That is, in a general sense, many historical crises exist as the outcome of a multi-layered financial network of obligations between financial institutions in multiple illiquid assets insofar as they exhibit three key components: (i) distinct networks of interbank obligations in each currency with (ii) intra-layer connections via payments made in the individual currencies and (iii) inter-layer interactions through asset transfers (and price impacts) between the different currencies and layers of the network. As such, this current paper will focus on an extension of <cit.> to allow for a multi-layered network of obligations, notably allowing for firms to transfer wealth between multiple (illiquid) assets or currencies causing price impacts to the exchange rates. <cit.> propose a network model for the spread of defaults in the financial system. In that proposed model, banks hold liquid assets which are used to pay off liabilities; unpaid liabilities may infect additional firms and cause them to default on some of their liabilities as well. That paper proves conditions for existence and uniqueness of the clearing payments and provides a method for computing the equilibrium clearing payments. This model has been extended to account for time dynamics in, e.g., <cit.>. Additionally, the basic clearing payment model of <cit.> has been relaxed to consider bankruptcy costs (e.g., <cit.>) and cross-holdings (e.g., <cit.>). Illiquid assets and fire sale dynamics have been included in the setting of such network models in, e.g., <cit.> for a single (representative) asset and <cit.> for multiple assets. Empirical studies of the aforementioned financial contagion models have been conducted in, e.g., <cit.>. One of the key contributions of these works is the conclusion that the local connections, via contractual liabilities, do not capture most financial contagion. This motivates our current study of the role of illiquid assets and currencies in financial contagion. Measures of systemic risk have been studied in, e.g., <cit.>. The main advance that we wish to study in these models is a more complete picture of how illiquid assets impact systemic risk; in particular, we are concerned with the implications of physical obligations in multiple currencies. Within this scope, prior work has focused on fire sales in which the various financial firms will liquidate their assets in case of a cash shortfall, thus driving down the asset value. Further, we will demonstrate that such modeling inherently produces systemic crises due to the market switching the attained equilibria and thus having jumps in the response of banks and the market.
However, to the best of our knowledge, none of the prior works in the <cit.> setting permit a multi-layered or interconnected financial network of obligations with liabilities in multiple assets and payments in physical assets rather than in some numéraire. We refer to <cit.> for a discussion of why multi-layered systems are important and can affect financial contagion; this is of particular importance due to the role that currency movements have in systemic risk (<cit.>). In such a model, firms no longer have obligations in the numéraire asset (cash) only, but in, e.g., multiple currencies or securities. We refer to <cit.> for other approaches to modeling interconnected financial networks. We will tackle this problem, and extend prior works further, by allowing solvent firms to invest via a utility maximization problem, thereby permitting such firms to purchase assets at the fire sale price. The systemic risk studied in this paper, insofar as it relates to currency crises, should be viewed as studying extreme events such as the abandonment of a currency peg. If symmetry exists to balance the buying and selling of a currency (due to the notion of "buy low and sell high" as depicted by the wealth maximizing utility in Example <ref>), the exchange rates will generally be very stable, as illustrated by the low volatility in the foreign exchange markets. However, when an asymmetry exists between initial holdings (in a local currency) and obligations (in a major currency, e.g., US dollars), and with atypical monetary policy, the multi-layered network can cause large fluctuations in exchange rates. It is this latter scenario we concern ourselves with in this work. Notably, we will prove only the existence of clearing solutions in markets with price impacts. This is comparable to, e.g., <cit.>, in which the clearing payments are not necessarily unique due to the introduction of fire sales. In order to determine the attained clearing solution, we introduce the use of the tâtonnement process which has previously been used to study financial contagion in <cit.>. Of particular interest, due to the nonuniqueness of the clearing solutions, the attained equilibria need not be continuous with respect to the initial shock. Specifically, this jump implies that a small perturbation in shock can greatly influence the attained clearing solution. This has far-reaching consequences for stress testing, as discrete stresses may miss this point of discontinuity and thus underestimate the true risk of financial contagion. The organization of this paper is as follows. First, in Section <ref>, we will introduce the mathematical and financial setting. In Section <ref>, we first develop the general modeling framework. We consider markets without price impacts (Section <ref>) followed by markets with price impacts (Section <ref>). We find conditions so that there exists a clearing solution (Section <ref>) and consider the tâtonnement process to find the attained market equilibrium (Section <ref>). These are the main results of this work. Section <ref> is used to provide specific mathematical examples with meaningful financial interpretations that fit the results of Sections <ref>.
Numerical case studies are provided in Section <ref>.In particular, beyond demonstrating the impact of differing choices of payment rules and utility parameters on the equilibrium response in toy models, we provide a numerical case study to consider the impacts on contagion of having a single currency split into two.This is meaningful as it has been threatened in recent years for the Greek economy; in studying the so-called Grexit event, we calibrate the financial system to 2011 stress testing data from the European Banking Authority. The proofs of the theoretical results are provided in the appendix. § THE STYLIZED BALANCE SHEET Consider a financial system with n financial institutions (e.g., banks, hedge funds, or pension plans) and a financial market with m illiquid assets (possibly currencies).We denote by y ∈^n × m_+ the realized portfolio holdings of the institutions and by q ∈^m_++ the prices of the assets in some numéraire.We assume throughout that the price of each asset is bounded away from zero.Throughout this paper we will use the notation a ∧ b:= ([ min(a_11,b_11) min(a_12,b_12)… min(a_1 d_2,b_1 d_2); min(a_21,b_21) min(a_22,b_22)… min(a_2 d_2,b_2 d_2);⋮⋮⋱⋮; min(a_d_1 1,b_d_1 1) min(a_d_1 2,b_d_1 2)… min(a_d_1d_2,b_d_1d_2) ]),a ∨ b:= ([ max(a_11,b_11) max(a_12,b_12)… max(a_1 d_2,b_1 d_2); max(a_21,b_21) max(a_22,b_22)… max(a_2 d_2,b_2 d_2);⋮⋮⋱⋮; max(a_d_1 1,b_d_1 1) max(a_d_1 2,b_d_1 2)… max(a_d_1d_2,b_d_1d_2) ]) where a,b ∈^d_1 × d_2 for some d_1,d_2 ∈.Additionally let a^+ :=a ∨ 0 and a^- := (-a) ∨ 0 where a ∈^d for some d ∈.As described in <cit.>, any financial agent i ∈{1,2,...,n} may be a creditor or obligor to other agents.However, in contrast to <cit.>, we consider these liabilities in multiple currencies that must be fulfilled in the physical assets rather than some numéraire.This distinction, while unimportant in frictionless markets, will be necessary in Section <ref> when considering the impact on prices caused by the transactions undertaken by the firms.Let L_ij^k ≥ 0 be the contractual obligation of firm i towards firm j in asset k.Further, we assume that no firm has an obligation to itself in any asset, i.e., L_ii^k = 0.The total liabilities of agent i in asset k are given byp̅_i^k := ∑_j = 1^n L_ij^k.We can define the vector p̅^k ∈^n_+ as the vector of total obligations of each firm in asset k. On the asset side of the balance sheet, each firm i = 1,2,...,n has an initial endowment of x_i^k ≥ 0 in each k = 1,2,...,m asset.We refer to Figure <ref> for a visual representation of the stylized book value of assets and liabilities for a representative firm with m = 2 assets and with market prices q ∈^m_++.Though firms may alter their borrowing based on market prices due to, e.g., new monetary policy in response to altered exchange rates, modifying the balance sheet in such a way is outside the scope of the current work.We refer to <cit.> for consideration of contingent liabilities in the single (m = 1) asset setting. 
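The following minimal sketch illustrates this balance sheet notation numerically; the system size, obligations, and endowments are hypothetical values chosen only to show how the total liabilities p̅ and the componentwise lattice operations are computed.

import numpy as np

n, m = 3, 2                        # three firms, two assets (hypothetical)
L = np.zeros((n, n, m))            # L[i, j, k]: obligation of firm i to firm j in asset k
L[0, 1, 0], L[1, 2, 0] = 1.0, 2.0  # obligations in asset 1
L[1, 0, 1], L[2, 0, 1] = 1.0, 0.5  # obligations in asset 2

p_bar = L.sum(axis=1)              # total liabilities: p_bar[i, k] = sum_j L[i, j, k]
x = np.array([[0.5, 2.0],          # initial endowments x[i, k] (hypothetical)
              [1.0, 1.0],
              [2.0, 0.0]])

# The componentwise lattice operations defined above
y = np.array([[0.8, 1.5], [1.5, 0.5], [2.0, 0.2]])  # some candidate holdings
wedge = np.minimum(p_bar, y)       # p_bar ∧ y
vee = np.maximum(p_bar, y)         # p_bar ∨ y
print(p_bar, wedge, vee, sep="\n\n")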
The relative liabilities of firm i to firm j in asset k, i.e., the fractional amount of the total liabilities of firm i towards firm j in asset k, are given by a_ij^k = L_ij^k/p̅_i^k if p̅_i^k > 0 and a_ij^k = 1/n if p̅_i^k = 0. We define the matrices A^k = (a_ij^k)_i,j = 1,2,...,n with the property (by construction) ∑_j = 1^n a_ij^k = 1 for any i and k. In the case that p̅_i^k = 0 we are able to choose a_ij^k arbitrarily; we let a_ij^k = 1/n in that case so that the summation is equal to 1. Any financial firm may default on its obligations in asset k if it does not hold a sufficient number of units of that asset. We assume, as per <cit.>, that in case of default the realized payments will be made in proportion to the size of the obligations, i.e., based on the relative liabilities matrix A^k and without prioritization of payments to any firm. That is, the realized value (in physical units) of firm i's interbank assets in asset k is given by ∑_j = 1^n a_ji^k [p̅_j^k ∧ y_j^k] when firm j ≠ i holds y_j^k units of asset k. Encoded in this equation is the notion that if firm j has more assets than liabilities in asset k then it will pay out in full (a_ji^k p̅_j^k = L_ji^k); otherwise it will pay out its holdings proportionally to what it owes. This realized balance sheet is depicted in Figure <ref> with m = 2 assets and with market prices q ∈^m_++. As we consider the setting in which all liabilities must be paid in physical assets, we need to consider an additional step to find the realized holdings for each bank in the system. For instance, Figure <ref> depicts a firm with positive mark-to-market capital, but a deficit in the first asset. Thus, as depicted in Figure <ref>, it would have to transfer some units of the second asset so as to cover this liability. As Figure <ref> considers the frictionless market, the realized capital for the firm before and after the transaction will remain constant, and as such this system is functionally equivalent to (a generalization of) the payment model from <cit.>. However, if price impacts were introduced (see Section <ref>) then more complicated firm behavior needs to be considered and a reduction to mark-to-market values is insufficient to describe the entire system. The details of the firm behavior through a utility maximization problem are provided in the next section.
§ THE MODEL
In this section we will first introduce the clearing framework for multi-currency obligations without price impacts. In this setting we provide results on existence and uniqueness which generalize those of <cit.>. In this case we are able to consider a fictitious default algorithm as was first considered in <cit.>. The framework without price impacts is of interest because it is mathematically tractable. Additionally, from a financial perspective it is of interest due to the generality of the payment schemes provided herein as well as allowing for clear heterogeneous shocks to the various institutions. Under such a setting the use of multi-layered networks is unnecessary, as an approach with appropriate prioritization of payments as in <cit.> can be taken instead on the marked-to-market wealth. However, with these results, we introduce price impacts due to the transfer of assets undertaken by the firms. These market impacts cannot be wholly described with only the marked-to-market wealths and thus require the use of vector-valued, i.e.
multi-layered, networks. We conclude this section by considering the resultant equilibrium exchange rates achieved after an initial shock to the asset values. This allows us to classify when the system of banks exacerbates, and when it mitigates, the effects of a financial crisis. We wish to compare this model with prior notions of fire sales in the <cit.> framework, e.g., <cit.>. In such works, all obligations are denominated in the same (cash) asset and illiquid assets are sold at a discount in order to cover these cash shortfalls. By taking such an approach, the monotonicity of the clearing mechanism is immediate and Tarski's fixed point theorem provides existence of clearing payments and prices. However, in this work, banks are given the freedom to both buy and sell assets so as to cover their obligations (in multiple assets) and to, for instance, purchase assets at a discount so as to increase their utility. The existence of clearing prices and portfolio holdings requires more thorough comparative static results (that are provided in the appendix), and ultimately does not result in a lattice of equilibrium solutions.
§.§ Financial contagion without market impacts
Fix the behavior of all firms but firm i, i.e., the amount of each asset that all firms but i hold is y_-i∈^(n-1)× m_+ (with firm j holding y_j ∈^m_+), and the relative prices are given by the vector q ∈^m_++. The amount of each asset that firm i has immediately available due to the payments from the other firms is given by (x_i^k + ∑_j = 1^n a_ji^k [p̅_j^k ∧ y_j^k])_k = 1,2,...,m. As described in <cit.>, and depicted in Figure <ref>, firms have available the sum of the endowment x_i^k and the realized interbank assets ∑_j = 1^n a_ji^k [p̅_j^k ∧ y_j^k]. Following the concept of limited liabilities (i.e., no firm pays more than it owes) and absolute priority (i.e., no firm accumulates positive equity until all debts are paid in full), the holdings of firm i are such that (∃ k^* ∈{1,2,...,m}: y_i^k^* > p̅_i^k^*) ⇒ y_i ≥p̅_i. We assume that additional regulatory rules apply to the multi-asset payments. That is, regulators may enforce, e.g., a prioritized payment (as in, e.g., <cit.>) or pro-rata payment (as in, e.g., <cit.>) between different assets or currencies. These rules are encoded in some monotonic, strictly concave, and supermodular payment utility function h_i. The payments made by firm i are given by P_i(y,q) = argmax_p_i ∈ [0,p̅_i] {h_i(p_i;y_-i,q) | ∑_k = 1^m q_k p_i^k ≤∑_k = 1^m q_k (x_i^k + ∑_j = 1^n a_ji^k [p̅_j^k ∧ y_j^k])}. The payment function P_i is defined, given the portfolio holdings of all other firms and the prevailing market price, so that the mark-to-market value of the payments does not exceed the available marked-to-market realized assets. Additionally, by constraining the payments between 0 and p̅_i, we enforce the limited liabilities assumption. The inclusion of the payment utility function h_i is to guarantee that the resultant payments will satisfy the desired regulatory environment (e.g., prioritization or proportionality). We refer to Section <ref> for constructions of the payment utility function under financially interesting regulations. This payment scheme is general enough to cover regulatory environments beyond the standard frameworks in the literature, i.e.,
proportional and prioritized payments, to include, e.g., the surplus repayment scheme described in Example <ref>. However, firm i may choose to trade more assets than required to make its payments; this additional trading will be done in order to optimize some utility function u_i. To guarantee absolute priority, the final number of assets that firm i holds must exceed, in each asset, the payment P_i(y,q). In this way the utility function is redundant, and unnecessary, for firms that are insolvent, as they must cover exactly their payments P_i(y,q). Further, we constrain the actions of each bank so that it can obtain its desired portfolio without loss of mark-to-market valuation from its marked-to-market assets. Thus the vector of asset holdings for firm i is given by the bilevel program y_i ∈ Y_i(y,q) = argmax_e_i ∈^m_+ {u_i(e_i;y_-i,q) | e_i ≥ P_i(y,q), ∑_k = 1^m q_k e_i^k ≤∑_k = 1^m q_k (x_i^k + ∑_j = 1^n a_ji^k [p̅_j^k ∧ y_j^k])}. By enforcing the non-negativity constraint, we encode a no-short selling assumption. Note that we allow firms to throw away wealth in determining their final portfolio holdings. While mathematically this is possible, all examples considered herein will guarantee that any value in Y_i(y,q) will have terminal (mark-to-market) wealth equal to the value of the firm's assets. With the given rules for repayment and firm behaviors, we are able to fully describe the clearing mechanism for asset holdings. Given an asset holding matrix y ∈^n × m_+ and pricing vector q ∈^m_++, the updated asset holdings are given by the clearing mechanism Y where (Y_i)_i = 1,2,...,n is given in (<ref>). Implicitly within this clearing mechanism, the regulatory agency has a role to play by specifying the payment utility function which determines the payment function P_i (defined in (<ref>)) for each firm i. We use the clearing mechanism to compute the realized holdings y(q) ∈^n × m_+ under the pricing vector q ∈^m_++. This is provided by a fixed point of the clearing mechanism, i.e., y(q) ∈ Y(y(q),q). We now consider conditions for the existence of maximal and minimal clearing solutions y(q), which is the general property satisfied in the Eisenberg-Noe model, under a crisis price of q. These results are then used to prove a sufficient condition for the uniqueness of the clearing solution by guaranteeing that the maximal and minimal solutions must coincide. Note that, due to the generality of the payment scheme, encoded by the payment utility functions, it is not possible to directly apply the results of <cit.> on the marked-to-market assets and liabilities for each firm; however, in the special case discussed in Remark <ref> below this approach could be taken. Additionally, due to the utility maximizing behavior of the regulators (through the payment utility function) and solvent banks (through the utility functions), we must consider comparative statics of the bilevel optimization problems for each firm to prove the result as provided in the appendix. Fix a price q ∈^m_++. Let the payment utility functions h_i: [0,p̅_i] ×^(n-1) × m_+ ×^m_++→ℝ be strictly increasing, strictly concave, and supermodular in their first argument.
Let the utility functions u_i: ^m_+ ×^(n-1) × m_+ ×^m_++→ℝ be concave and supermodular in their first argument. Additionally assume that Y_i(y,q) (defined in (<ref>)) is singleton-valued for any y ∈^n × m and for all agents i. * There exist greatest and least clearing holdings y^↑(q) ≥ y^↓(q) satisfying y = Y(y,q). * The positive equity of all firms is equal for every fixed point, i.e., (y_i^↑(q) - p̅_i)^+ = (y_i^↓(q) - p̅_i)^+ for every firm i. These results on the greatest and least clearing portfolio holdings generalize Theorem 1 of <cit.>. In fact, in the m = 1 asset case, all payment utility functions and utility functions satisfying the conditions of Lemma <ref> (for example, setting h_i(p_i;y_-i,q) := p_i^2 and u_i(e_i;y_-i,q) := e_i) recover exactly the Eisenberg-Noe payments and assets as given by Theorem 1 of <cit.>. In the following corollary, we introduce an additional node to the financial system. Denoted as node 0, this "firm" represents all institutions and persons not included in the system of n banks. This notion is developed in more detail in, e.g., <cit.>. In particular, we will assume that this "societal node" acts as a sink to the system, i.e., it has no obligations into the network. This is incorporated in the assumption that the societal node will never default on its obligations, as the initial endowments come from outside the original system. If a default of node 0 were desired, this could be included by stressing the initial endowments of the n firms. Consider the setting of Lemma <ref>. If L_i0^k > 0 and L_0i^k = 0 for every firm i and asset k, then the equilibrium holdings under price q ∈^m_++ are unique, i.e., y^*(q) := y^↑(q) = y^↓(q). Under specific choices of payment utility and utility functions h_i and u_i satisfying the conditions of Lemma <ref>, we can give weaker conditions for uniqueness. For instance, under proportional transfers (see Example <ref> with μ = 0) with minimal trading (see Example <ref>), if (q^* x,∑_k = 1^m q_k^* L^k) is a regular network in the setting of <cit.> (i.e., all firms have a directed path, possibly of length 0, to a firm with positive endowment, see Definition 5 of <cit.>) then the clearing holdings are unique. We will introduce a modified version of the fictitious default algorithm from <cit.> for the construction of the greatest portfolio holdings y^↑(q) under price q ∈^m_++. In particular, as with the prior fictitious default algorithms, this algorithm will converge after at most n iterations since the set of defaulting banks is monotonic. Though this algorithm converges within a finite number of iterations, it includes a fixed point problem in each iteration, as is also the case in, e.g., <cit.>. Consider the setting of Lemma <ref> such that, additionally, h_i(p_i;y_-i,q) = h_i(p_i;p̅_-i∧ y_-i,q) and u_i(e_i;y_-i,q) = u_i(e_i;p̅_-i∧ y_-i,q) for every firm i and every p_i ∈ [0,p̅_i], e_i ∈^m_+, y_-i∈^(n-1)× m_+, and q ∈ [q̲,q̄]. The greatest portfolio holdings y^↑(q) under price q ∈^m_++ can be found by the following algorithm in at most n iterations. Initialize α = 0, p^α = p̅, and D^α = ∅. Repeat until convergence: * Increment α = α+1; * For any firm i = 1,2,...,n and asset k = 1,2,...,m, define the portfolio holdings by y_ik^α = x_ik + ∑_j = 1^n a_ji^k p_jk^α-1; * Denote the set of insolvent banks by D^α := {i ∈{1,2,...,n} | q^⊤(y_i^α - p̅_i) < 0}; * If D^α = D^α-1 then exit loop; * Define the matrix Λ^α∈{0,1}^n × n so that Λ_ij^α = 1 if i = j ∈ D^α and 0 otherwise.
* Set p^α = p̂, where p̂ is the maximal solution to the fixed point problem p̂ = (I - Λ^α)p̅ + Λ^α P((I - Λ^α)p̅ + Λ^α p̂, q).
After terminating the loop, the clearing holdings can be computed by y = Y(p^α,q). The additional condition required for Algorithm <ref> on the payment utility and utility functions states that firm i determines how much it pays or holds based only on the payments of the other firms, p̅_-i ∧ y_-i, and not on the actualized holdings y_-i. We will finish our discussion of the equilibrium portfolio holdings without price impacts by considering a simple two-bank example for which the clearing solution can be computed analytically. We will refer back to this example at the end of the section on price impacts and attained equilibria as well. Consider the network with n = 2 banks and m = 2 assets depicted in Figure <ref>. That is, the first institution holds 2 units of the second asset and owes 1 in the first asset to the second institution; vice versa, the second institution holds 2 units of the first asset and owes 1 in the second asset to the first institution. Note that any choice of (h_i)_i = 1,2 satisfying the conditions of Lemma <ref> is equivalent in equilibrium since, for both banks, all obligations are in a single asset only. Consider a utility function u_i that minimizes the total amount of trading in the market (see Example <ref> below for more details). Without loss of generality we will let asset 1 denote the numéraire asset (i.e., q_1 = 1 throughout this example). As discussed in Remark <ref>, for any price q_2 > 0, this system has a unique clearing solution given by:
y_1^*(q) = (min(1,3q_2), (2 + min(1,3/q_2) - 1/q_2)^+)^⊤ and y_2^*(q) = ((2 + min(1,3q_2) - q_2)^+, min(1,3/q_2))^⊤.
§.§ Financial contagion with market impacts
The results on the clearing portfolio holdings without market impacts generalize the results of <cit.>. In fact, in the m = 1 asset case, all payment utility functions and utility functions satisfying the conditions of Lemma <ref> (for example, setting h_i(p_i;y_-i,q) := p_i^2 and u_i(e_i;y_-i,q) := e_i) recover exactly the Eisenberg-Noe payments and assets as given by Theorem 1 of <cit.> if there are no market impacts. However, market impacts (due to asset transfers undertaken by the firms) introduce further feedback effects on the firms and cannot be considered in the m = 1 scheme from <cit.>. In this section, we will first introduce the inverse demand functions, which we use to model the market impacts of firm behavior. We will then use this model of market impacts to consider the existence of clearing prices and portfolio holdings, thus generalizing the prior section. The price of the assets is given by a vector-valued inverse demand function F: ^m → [q̲,q̄] ⊆^m_+ for minimum and maximum prices q̲ = (1, q̲_2, …, q̲_m)^⊤ and q̄ = (1, q̄_2, …, q̄_m)^⊤, where the first asset is the numéraire. The inverse demand function maps the quantity of each asset to be sold into a price per unit in the numéraire. The liquidation value, in the numéraire, of the portfolio z ∈^m is thus given by z^⊤ F(z). We will impose the following assumption for the remainder of this paper. The inverse demand function F: ^m → [q̲,q̄] ⊆^m_++ is continuous and nonincreasing. For simplicity, the inverse demand function has the form F(z) := (1, f_2(z_2), …, f_m(z_m))^⊤ for every z ∈^m.
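To make the assumption just stated concrete, the following is a minimal sketch (in Python with NumPy, which the paper itself does not prescribe) of an inverse demand function of the permitted form F(z) = (1, f_2(z_2), …, f_m(z_m))^⊤, using the clipped-linear impact f_k(z) = q̲_k ∨ (1 - b_k z) ∧ q̄_k that reappears in the two-asset example below; the impact parameters b_k are illustrative.

```python
import numpy as np

def make_inverse_demand(b, q_min, q_max):
    """Inverse demand F(z) = (1, f_2(z_2), ..., f_m(z_m)) with clipped-linear
    components f_k(z) = q_min[k] v (1 - b[k] z) ^ q_max[k].  Setting b[0] = 0
    and q_min[0] = q_max[0] = 1 makes asset 1 the numeraire."""
    b, q_min, q_max = map(np.asarray, (b, q_min, q_max))
    def F(z):
        z = np.asarray(z, dtype=float)
        return np.clip(1.0 - b * z, q_min, q_max)  # componentwise, continuous, nonincreasing
    return F

# Illustrative parameters (matching the two-asset example below): impact b = 3/8 on asset 2 only.
F = make_inverse_demand(b=[0.0, 3/8], q_min=[1.0, 0.05], q_max=[1.0, 5.0])
print(F([0.0, 0.0]))   # unshocked prices (1, 1)
print(F([0.0, 4.0]))   # selling 4 units of asset 2 depresses its price to the floor
```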
In the construction of the inverse demand function, we choose to take the first asset to be the numéraire asset. However, this assumption need not be made for the results of this work. In fact, a fictitious numéraire asset can be chosen instead, in which case we would choose q̲_1 < q̄_1 and some function f_1(z_1) would be necessary as well. Further, the assumption that no cross impacts exist in the pricing can be eliminated without affecting the results of this work, so long as the inverse demand function is continuous and nonincreasing. Rather than introduce a numéraire, we can substitute the bid-ask matrix (see, e.g., <cit.>) with components π_k_1k_2(z) := F_k_2(z)/F_k_1(z) for the inverse demand function. Equivalently, from the bid-ask matrix Π: ^m → ^m × m_++, we can construct a set of inverse demand functions such that the first asset is the numéraire asset by defining F_k(z) := π_1k(z). Similarly, rather than introduce the inverse demand function, we could consider the demand curve for the nonbanking sector (as done in, e.g., <cit.>). We consider the inverse demand function since it simplifies the formulations of this paper, though it can be constructed from the demand curve of the nonbanking sector. Additionally, though not needed for the results of this paper, we will generally assume that the inverse demand function satisfies the condition that z ∈^m ↦ z^⊤ F(z) is a strictly increasing mapping. That is, the liquidation value of a portfolio is strictly increasing as the portfolio holdings get larger. Note that we consider this for portfolios with short positions, i.e., z_k < 0, as well. In the case when there are m = 2 assets, following Assumption <ref>, we will assume throughout that F_1 ≡ 1 and F_2(z) := f_2(z_2) for every z ∈^2 for some continuous and nonincreasing inverse demand function f_2: → [q̲_2,q̄_2]. That is, the first asset will act as the numéraire asset and the price of the second asset will depend only on the number of units being bought or sold in that asset. In prior works, e.g.
<cit.>, the inverse demand function is only defined as a function of non-negative units being sold; using a symmetric argument, we can define an inverse demand function on the entire real line from this half-line inverse demand function. Consider f̂: _+ → [q̲_2,1] to be a continuous and nonincreasing inverse demand function such that α(z_2) := z_2 f̂(z_2) is strictly increasing in z_2 ∈ _+. Then we can define the full inverse demand function in a symmetric way as
F_2(z) := f̂(z_2) if z_2 ≥ 0, and F_2(z) := 1/f̂(α^-1(-z_2)) if z_2 < 0.
The notion of symmetry is due to the fact that selling z_2 units of the second asset is equivalent to purchasing α(z_2) units of the first asset (i.e., selling -α(z_2) units). Thus, when purchasing |z_2| units of asset 2, for z_2 < 0, we can consider selling α^-1(-z_2) units of the first asset. Adding the assumption that the first asset has the same inverse demand function f̂ (when selling units of asset 1 denominated in the second asset), and changing the numéraire back to asset 1, results in the inverse demand function presented in (<ref>). With the model of price impacts, given by the inverse demand function F, we want to return again to the clearing model for firm portfolio holdings. Given an initial price q_0 ∈ [q̲,q̄], Section <ref> provides the firm behavior y^*(q_0). However, if these sales are actualized, this leads to an updated price q due to the market impact of the transfers undertaken by each firm. In particular, the updated prices are a function of the net difference between what is initially available to each firm and the final holdings. The initial shock q_0 ∈ [q̲,q̄] would be generated by actions from agents outside our system. That is, initially some quantity γ_0 ∈^m is transacted so that q_0 = F(γ_0). The clearing prices, subject to the initial shock q_0, are thus given by
q = F(γ_0 + ∑_i = 1^n (x_i^k + ∑_j = 1^n a_ji^k [p̅_j^k ∧ y_j^*k(q)] - y_i^*k(q))_k = 1,2,...,m)
= F(γ_0 + ∑_i = 1^n x_i + ∑_j = 1^n ([∑_i = 1^n a_ji^k] [p̅_j^k ∧ y_j^*k(q)])_k = 1,2,...,m - ∑_i = 1^n y_i^*(q))
= F(γ_0 + ∑_i = 1^n (x_i + [p̅_i ∧ y_i^*(q)] - y_i^*(q))).
With the feedback effects from the inverse demand function, there is the potential for greater contagion than under the initial shock propagating through the no-market-impact case of Section <ref> or the single asset setting of <cit.>. However, if firms choose to purchase a distressed asset due to the decrease in price, it is possible that a mitigating feedback loop is instituted that will ultimately cause fewer defaults and improved exchange rates compared to the initial shock q_0. For the ease of utilizing these results, we will now provide a summary of all assumptions for Corollary <ref>. These are exactly those from Corollary <ref>, Assumption <ref>, and assuming both types of utility functions are jointly continuous. Let the network be such that all firms have obligations to a societal node and no obligations from such a node in each asset, i.e., L_i0^k > 0 and L_0i^k = 0 for every firm i and asset k. Let the inverse demand function F: ^m → [q̲,q̄] satisfy Assumption <ref>, i.e., be continuous and nonincreasing. Let the payment utility functions h_i: [0,p̅_i] ×^(n-1) × m_+ ×^m_++→ be strictly increasing, strictly concave, and supermodular in its first argument and jointly continuous for every bank i.
Let the utility functions u_i: ^m_+ ×^(n-1) × m_+ ×^m_++→ be concave and supermodular in its first argument and jointly continuous for every bank i. Additionally assume that Y_i(y,q) (defined in (<ref>)) is singleton-valued for any y ∈^n × m and for all agents i. Let Assumption <ref> hold. Let γ_0 ∈^m be an initial set of transactions that result in a price shock q_0 = F(γ_0) ∈ [q̲,q̄]. There exists a fixed point price
q^* = F(γ_0 + ∑_i = 1^n (x_i + [p̅_i ∧ y_i^*(q^*)] - y_i^*(q^*)))
and resultant portfolio holdings y^*(q^*). In comparison to prior works in the Eisenberg-Noe framework, the existence of the clearing prices does not follow from a monotonicity argument with Tarski's fixed point theorem, but rather from Brouwer's fixed point theorem (as detailed in the appendix). Assumption <ref> can be weakened while still guaranteeing existence of joint clearing portfolio holdings and prices. In fact, there exist joint clearing holdings and prices so long as:
* the payment utility functions h_i are jointly continuous and both strictly increasing and strictly quasi-concave in their first argument,
* the utility functions u_i are jointly continuous and quasi-concave in their first argument, and
* the inverse demand function F satisfies Assumption <ref>.
This can be proven using an iterated application of the Berge Maximum Theorem (see, e.g., Theorem 17.31 in <cit.>) for P_i and Y_i, followed by an application of the Kakutani Fixed Point Theorem (see, e.g., Theorem 3.2.3 in <cit.>) to attain the existence of a fixed point (y^*,q^*). This statement is formalized and proven in Theorem 3.2 of an older preprint version of this text, available at <https://arxiv.org/pdf/1702.07936v2.pdf>. In fact, using the logic of that proof, we can allow for continuous admissible valuation functions (as defined in <cit.>) 𝕍_i^k(y_i^k/p̅_i^k) ∈ [0,1] with payments L_ij^k 𝕍_i^k(y_i^k/p̅_i^k) from firm i to j in asset k. In this paper, we exclusively consider the continuous admissible valuation function 𝕍^EN(z) = 1 ∧ z^+ for all firms and all assets. We refer to <cit.> for further discussions on admissible valuation functions and the relation to bank distress and bankruptcy costs. For simplicity, we focus on the stronger assumptions introduced in Assumption <ref> for the remainder of this paper. Note that we allow (and likely enforce) firms to both buy and sell assets during a fire sale; this is in contrast to earlier works such as <cit.>. Such an approach is necessary to consider the cross-currency obligations exhibited in many systemic crises (e.g., <cit.>). In such a setting, firms will transfer between the currencies or, more generally, assets in order to satisfy the different obligations. That is, for instance, a firm in the United States may need to sell US dollars for euros in order to fulfill European liabilities, while a European firm may enact the reverse transaction within the same international financial system. Further, we allow solvent firms to use their excess wealth in order to maximize their utility. By allowing this, the contagion effects of a fire sale could be partially mitigated if firms purchase an asset in a fire sale (or sell an asset being bought in excess). The extreme mitigation in which the system has no net changes in currency holdings (i.e., all sales by one firm are purchased by a separate firm in the system) follows the model of <cit.> with no illiquidity; this extreme mitigation occurs when there is a symmetry in endowments and obligations between the assets or currencies. In contrast, the fire sales can have a large
impact on the health of the various firms when there is an asymmetry in the system (e.g., between US dollars and Thai baht during the 1997 Asian financial crisis, as discussed in <cit.>).
§.§ Attained equilibrium in the m = 2 asset case
In general, uniqueness of the clearing holdings and prices is not guaranteed. We refer to Example <ref> below, which provides an illustration of multiple clearing solutions in the setting of Example <ref>, i.e., a simple two-asset network. However, though there may exist more than one clearing solution, the system can only attain a single equilibrium. This section focuses on the tâtonnement process by which an equilibrium is attained after the initial shock occurs in the m = 2 asset scenario. Consider an initial price shock q_0 ∈ [q̲,q̄] generated by asset transfers γ_0 ∈^m. Initially the firms would want to reach y^*(q_0), but as they implement these transactions they will impact the prices in a continuous way. In particular, the prices will update along the direction of the difference between the "desired" price and the current price, i.e., beginning from q_0,
dq_t = [F(γ_0 + ∑_i = 1^n (x_i + [p̅_i ∧ y_i^*(q_t)] - y_i^*(q_t))) - q_t] dt.
The attained clearing solution is exactly the asymptotic solution of this process, if it exists. This procedure is often called the tâtonnement process in the economics literature (see, e.g., <cit.>). We wish to emphasize that although the set of clearing solutions given a shock q_0 may not be a singleton, (assuming convergence) the tâtonnement process can only reach a single clearing solution. Importantly, because the set of clearing solutions is not unique, it will frequently be the case that the attained clearing solution (as a function of the initial shock q_0) is discontinuous. These points of discontinuity match exactly those stresses at which a financial crisis becomes a systemic crisis, that is, at which a marginal change in the initial shock can cause a radically different clearing solution to be attained. We refer the reader to Example <ref>, and in particular to Figure <ref>, for a demonstration of such an event. This discontinuity is fundamentally a result of the nonuniqueness of equilibria. As an initial stress grows too large, the system may no longer be able to sustain a financially stabilizing equilibrium as firms enter insolvency, and therefore a jump in equilibria occurs to a more extreme outcome. Such events are systemic as they are purely a result of firm insolvency and cannot be captured by looking solely at the aggregate of the financial system. Consider the setting of Assumption <ref> with only m = 2 assets. The tâtonnement process (<ref>) will converge to a clearing solution. This tâtonnement process, ultimately, provides the attained clearing solution, which includes market impacts from the individual firm behaviors. As mentioned previously, these market impacts can have either mitigating or exacerbating effects, which cannot be captured by the Eisenberg-Noe framework with exogenous shocks only. As we will investigate in numerical case studies in Section <ref>, the choice of regulatory framework and utility functions will affect the clearing solutions. As a rule of thumb, and as expected, small shocks may be "absorbed" by the system, while large shocks are likely to be exacerbated and may potentially drive the price to the upper or lower bound.
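The tâtonnement dynamics (<ref>) can be integrated with an explicit Euler step. The sketch below (Python/NumPy; not from the paper) treats the clearing response as a black box: `net_sales(q)` should return γ_0 + ∑_i (x_i + [p̅_i ∧ y_i^*(q)] - y_i^*(q)) for the model at hand, and the toy stand-in used here is purely hypothetical, included only so the snippet executes.

```python
import numpy as np

def tatonnement(F, net_sales, q0, dt=0.01, T=500.0, tol=1e-10):
    """Explicit Euler discretization of dq_t = [F(net_sales(q_t)) - q_t] dt.
    F: inverse demand; net_sales: price -> aggregate units hitting the market."""
    q = np.asarray(q0, dtype=float)
    for _ in range(int(T / dt)):
        target = F(net_sales(q))
        if np.max(np.abs(target - q)) < tol:   # a clearing price has been reached
            break
        q = q + dt * (target - q)
    return q

# Toy stand-in (hypothetical): firms sell more of asset 2 the further its price
# sits below the pre-crisis level of 1; linear clipped inverse demand as above.
F = lambda z: np.clip(np.array([1.0, 1.0]) - np.array([0.0, 0.375]) * z,
                      [1.0, 0.05], [1.0, 5.0])
net_sales = lambda q: np.array([0.0, 2.0 * max(0.0, 1.0 - q[1])])
print(tatonnement(F, net_sales, q0=[1.0, 0.8]))   # relaxes back toward (1, 1) here
```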
Consider the network with n = 2 banks and m = 2 assets depicted in Figure <ref>, with the parameters considered in Example <ref>. Consider linear price impacts on the second asset, i.e., F_2(z) = q̲_2 ∨ (1 - bz_2) ∧ q̄_2 for some lower bound q̲_2 < 1/3, upper bound q̄_2 > 3 on the prices, and price impact b ∈ (0,1). The set of clearing prices, as a function of the initial shock size q_0 ∈ [q̲,q̄], is given by
Q^*(q_0) =
{q^0(q_0)} if q̲_2 ≤ q_0,2 < -b + 2√(b),
{q^0(q_0), q^↓(q_0), q^↑(q_0)} if -b + 2√(b) ≤ q_0,2 ≤ 2b + 1/3,
{q^↑(q_0)} if 2b + 1/3 < q_0,2 < 3 - (2/3)b,
{q^1(q_0)} if 3 - (2/3)b ≤ q_0,2 ≤ q̄_2,
where the candidate clearing prices are provided by
q^↑(q_0) = (1, (1/2)[q_0,2 + b + √(q_0,2^2 + 2b(q_0,2 - 2) + b^2)])^⊤,
q^↓(q_0) = (1, (1/2)[q_0,2 + b - √(q_0,2^2 + 2b(q_0,2 - 2) + b^2)])^⊤,
q^0(q_0) = (1, [q_0,2 - 2b] ∨ q̲_2)^⊤,
q^1(q_0) = (1, (1/2)[q_0,2 + √(q_0,2^2 + 8b)] ∧ q̄_2)^⊤.
We wish to note that q^↑((1, 3 - (2/3)b)^⊤) = q^1((1, 3 - (2/3)b)^⊤), so only one clearing solution exists at q_0,2 = 3 - (2/3)b. Though multiple clearing prices exist for q_0,2 ∈ [-b + 2√(b), 2b + 1/3], a unique price is attained via the tâtonnement process. This selector from the set of all clearing prices can be determined, as a function of the initial shock q_0 ∈ [q̲,q̄], to be
q^*(q_0) =
q^0(q_0) if q̲_2 ≤ q_0,2 < -b + 2√(b),
q^↑(q_0) if -b + 2√(b) ≤ q_0,2 < 3 - (2/3)b,
q^1(q_0) if 3 - (2/3)b ≤ q_0,2 ≤ q̄_2.
Of particular note, q^* is discontinuous at q_0,2 = -b + 2√(b) in general. This provides the important insight that the system can, roughly, absorb a shock of size -b + 2√(b) (with some exacerbating tendencies), but any shock larger than that will cause a near complete collapse of the system. We refer the reader to Figure <ref>, which displays the set of clearing prices Q^* and the attained clearing price q^* as functions of the initial shock q_0, where q̲_2 = 0.05, q̄_2 = 5, and b = 3/8. Both the full region of responses and a consideration of only downward shocks in the second asset price are presented. Notably, at q_0,2 = -b + 2√(b) ≈ 0.85 the attained equilibrium price drops from roughly 0.6125 to 0.10. As such, this simple system can be viewed as being able to withstand a 15% drop in asset prices, but no more. (A numerical evaluation of these closed forms is given in the sketch after the list below.)
§ EXAMPLE PAYMENT UTILITY AND UTILITY FUNCTIONS
In this section we present two possible choices for the payment utility function h_i and three choices for the utility functions u_i, all of which satisfy Assumption <ref> and the additional conditions of Algorithm <ref>. For the payment utility function h_i we present quadratic formulations that correspond to:
* Surplus transfers: a firm only transfers from one asset to another if there is a surplus to exchange.
* Prioritization with proportional payments: a firm transfers all wealth to asset 1 first until that obligation is paid off in full, then attempts to fulfill obligations in the second asset, and so on through the μth asset, then attempts to fulfill all other obligations paying out in proportion to the total liabilities.
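As flagged above, the closed forms in Example <ref> can be checked numerically. The following sketch (Python/NumPy, illustrative only) evaluates the candidate prices and the attained selector q^* at the parameters quoted in the text (q̲_2 = 0.05, q̄_2 = 5, b = 3/8), reproducing the jump at q_0,2 = -b + 2√(b) ≈ 0.85.

```python
import numpy as np

b, q2_min, q2_max = 3/8, 0.05, 5.0

def q_up(s):   return 0.5 * (s + b + np.sqrt(s**2 + 2*b*(s - 2) + b**2))
def q_zero(s): return max(s - 2*b, q2_min)
def q_one(s):  return min(0.5 * (s + np.sqrt(s**2 + 8*b)), q2_max)

def q_star(s):
    """Attained clearing price (second component) as a function of the shock s = q_{0,2}."""
    if s < -b + 2*np.sqrt(b):
        return q_zero(s)
    elif s < 3 - (2/3)*b:
        return q_up(s)
    return q_one(s)

s_jump = -b + 2*np.sqrt(b)                       # ~0.8498, the systemic tipping point
eps = 1e-6
print(s_jump, q_star(s_jump + eps), q_star(s_jump - eps))
# prints ~0.85, ~0.6125 just above the jump, ~0.10 just below: a ~15% shock is the limit
```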
As special cases, this setting includes both asset prioritization (i.e., all assets are ordered and given a strict seniority structure for repayment) and proportional payments (i.e., the amount of obligations fulfilled follows the same proportion as the total liabilities). For the utility function u_i we present three options with clear meaning:
* Minimal trading: a utility function such that firms choose to see how markets respond in the immediate aftermath of the crisis in order to determine their investment response, i.e., firms seek to minimize the total amount of trading between assets (once the rules to find the payments P_i are taken into account).
* Asset maximizing: a utility function encoding a flight-to-quality, which seeks to maximize the total number of units of a specific asset at the expense of all other assets.
* Value maximizing: a utility function given by the value of the final portfolio holdings of the firm, since firms typically trade assets in order to maximize return on equity. In particular, this utility function attempts to maximize the total pre-crisis wealth of the firm.
§.§ Sample payment utility functions
Here we will consider the details of two possible, meaningful options for the choice of payment utility functions h_i in (<ref>). These are a surplus transfer rule and a rule with prioritization of the first μ assets and proportional payments for the remainder. Both of these sample payment utility functions satisfy Assumption <ref> and the conditions of Algorithm <ref>. Consider a regulatory framework in which a firm is only forced to transfer assets if there is a surplus that is not being used to cover obligations already. In an international financial system, such a regulatory framework would naturally exist if each (independent) regulatory body places priority on its own currency. Any institution operating in multiple nations would be forced to follow the local regulations with its locally held endowments. However, once a firm has satisfied all obligations in a currency, the regulatory requirements therein have been satisfied and it may exchange the surplus to any other currency still in deficit. One possibility to describe this framework is represented mathematically by the quadratic payment utility function h_i: [0,p̅_i] ×^(n-1)× m_+ × [q̲,q̄] → defined by:
h_i(p_i;y_-i,q) := -1/2 (c_i - p_i)^⊤ diag(q_1/(c_i^1 - e_i^1), q_2/(c_i^2 - e_i^2), …, q_m/(c_i^m - e_i^m)) (c_i - p_i),
c_i = p̅_i ∨ e_i + δ,
e_i^k = x_i^k + ∑_j = 1^n a_ji^k [p̅_j^k ∧ y_j^k] (k = 1,2,...,m).
The δ ∈^m_++ term that appears is to shift the center of the ellipsoidal level sets of h_i rightward and upward from the maximum between the amount owed p̅_i and the amount held (pre-transfers) through market clearing e_i. The δ is introduced solely to avoid a division by 0 in this representation of the surplus payment utility function. In fact, this payment utility function is chosen such that the level sets are ellipsoids with center above both p̅_i and e_i, and such that the gradient of h_i is q at e_i.
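A minimal sketch of the surplus payment utility above (Python/NumPy, with an illustrative δ; the maximization over the budget set in (<ref>) is not reproduced here):

```python
import numpy as np

def surplus_payment_utility(p, pbar, e, q, delta=1e-6):
    """h_i(p) = -1/2 (c - p)' diag(q_k / (c_k - e_k)) (c - p)
    with center c = (pbar v e) + delta; the gradient at p = e equals q."""
    p, pbar, e, q = map(np.asarray, (p, pbar, e, q))
    c = np.maximum(pbar, e) + delta
    d = q / (c - e)                       # positive diagonal weights
    return -0.5 * np.sum(d * (c - p)**2)

# Example: two assets, obligations pbar, pre-transfer holdings e, prices q.
pbar, e, q = [1.0, 1.0], [0.5, 2.0], [1.0, 0.8]
print(surplus_payment_utility([0.5, 1.0], pbar, e, q))
```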
Consider a regulatory framework in which a prioritization scheme is applied to the first μ ∈ {0,1,...,m} assets, and all other assets are treated in equal proportion after those first μ assets are paid in full. In the special case that μ = 0 this is a purely proportional payment regulation scheme, i.e., pro-rata. In the case that μ = m this is a purely prioritized payment regulation scheme, i.e., a seniority structure as in <cit.>. This may arise if asset 1 is the local currency, due to regulations favoring those payments. Financial institutions will pay off their balance in asset 1 (including by transferring funds from all other assets), and only after that obligation is fulfilled will they begin filling asset 2, and so on down the line until they pay off asset μ. Only after asset μ is paid off in full will the other obligations be paid, which will be done in proportion to the obligations for assets μ+1 through m. Mathematically we define the payment utility function h_i^μ: [0,p̅_i] ×^(n-1) × m_+ × [q̲,q̄] → by
h_i^μ(p_i;y_-i,q) := -1/2 (c_i - p_i)^⊤ diag(q_1/(c_i^1 - s_i^1), …, q_μ/(c_i^μ - s_i^μ), q_μ+1/(c_i^μ+1 - πp̅_i^μ+1), …, q_m/(c_i^m - πp̅_i^m)) (c_i - p_i),
c_i = p̅_i + δ,
s_i^k = p̅_i^k ∧ (∑_j = 1^m q_j e_i^j - ∑_j = 1^k-1 q_j s_i^j)^+/q_k (k = 1,2,...,μ),
π = ([(∑_k = 1^m q_k p̅_i^k) ∧ (∑_k = 1^m q_k e_i^k)] - ∑_k = 1^μ q_k s_i^k)/(∑_k = 1^m q_k p̅_i^k - ∑_k = 1^μ q_k s_i^k),
e_i^k = x_i^k + ∑_j = 1^n a_ji^k [p̅_j^k ∧ y_j^k] (k = 1,2,...,m).
As with the surplus payment utility function, we choose the quadratic form to create ellipsoidal level sets with center c_i (above p̅_i), and such that the gradient of h_i^μ is q at the point on the feasible line where all wealth is in asset 1 if less than p̅_i^1, or p̅_i^1 wealth is in asset 1, and so on until asset μ, with the remaining assets along the proportionality line.
§.§ Sample utility functions
Here we will consider the details of three possible, meaningful options for the choice of utility functions u_i in (<ref>): the utility function that leads to minimizing the total size of transfers, the utility function that prioritizes holding a specific asset, and the utility function given by the pre-fire sale value of the final wealth of the firm. All three of these sample utility functions satisfy Assumption <ref> and the conditions of Algorithm <ref>. Consider the case where firms wish to make the smallest possible trades in order to meet their obligations, and trade no more once that occurs. Such a setting may be appropriate when a firm is concerned about uncertainty in the rightful exchange rates. With such uncertainty a firm may choose to minimize its own impact and wait for the market response to take shape before responding. Essentially, this is a "wait and see" approach to investing during a crisis. Firms would then choose to rebalance over time as prices fluctuate after the crisis studied in this work. Due to the static nature of the model studied in this work, this allows us to capture the immediate aftermath of a crisis but not the long term effects that may be felt. We can define the utility function for (<ref>) by u_i(e_i;y_-i,q) := -‖(x_i^k + ∑_j = 1^n a_ji^k [p̅_j^k ∧ y_j^k])_k = 1,…,m - e_i‖_2^2. That is, the holdings for firm i after trading would be the closest feasible point (based on the Euclidean norm) to the initial network model before trading occurs. In particular, by definition of the norm, u_i is jointly continuous, strictly concave, and supermodular in its first component.
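Returning to the prioritized payment scheme above: the target point that h_i^μ is engineered to select, assuming the budget constraint binds, can be computed directly from the formulas for s_i^k and π. A sketch follows (Python/NumPy; the quadratic maximization itself is omitted):

```python
import numpy as np

def prioritized_target(pbar, e, q, mu):
    """Prioritized-then-proportional payment target: fill assets 1..mu in order
    of seniority (s_k), then split the remaining value pro rata (pi * pbar_k)."""
    pbar, e, q = map(np.asarray, (pbar, e, q))
    wealth = float(q @ e)                        # mark-to-market value available
    s = np.zeros(mu)
    spent = 0.0
    for k in range(mu):                          # seniority waterfall over assets 1..mu
        s[k] = min(pbar[k], max(0.0, wealth - spent) / q[k])
        spent += q[k] * s[k]
    total = float(q @ pbar)
    pi = (min(total, wealth) - spent) / (total - spent) if total > spent else 1.0
    p = pi * pbar
    p[:mu] = s
    return p

# Example: three assets, full priority to asset 1 only (mu = 1); mu = 0 is pure pro-rata.
print(prioritized_target(pbar=[1.0, 1.0, 1.0], e=[0.5, 0.5, 0.5], q=[1.0, 1.0, 1.0], mu=1))
# -> [1.0, 0.25, 0.25]: asset 1 paid in full, remaining value split proportionally
```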
Consider the case where a firm may wish to maximize their holdings in a specific asset k^* ∈{1,2,...,m} at the expense of all other assets. In a systemic crises, firms may choose to sell higher risk assets or currencies in order to purchase safer ones in a flight-to-quality.This has occurred in practice during currency crises such as the 1997 Asian financial crisis when firms bought US dollars due to the collapse of local currencies despite the market moving against them in such transactions. This can be modeled by a firm wishing to maximize the holdings in the safe asset at the expense of all others.We can define the utility function for (<ref>) byu_i(e_i;y_-i,q) := e_i^k^* - p̅_i^k^*. That is, firm i will solely seek to maximize their holdings in asset k^*, without consideration of any other assets.In particular, this is trivially jointly continuous, concave, and supermodular in its first component.Consider the case where firms wish to maximize their own net worth (in the numéraire) given the pre-fire sale prices. Such a setting is appropriate when a firm has the belief that the pre-fire sale prices are the “true” value of the assets.In such a view, any change from this price is due to the current crisis, but will rebound to the pre-fire sale prices after the crisis is over.Thus a firm would wish to purchase assets at a discount (or sell at a premium) in order to obtain a good deal. In this case we seek to maximize u_i(e_i;y_-i,q) := (e_i-p̅_i)^ F(0),which is jointly continuous, concave, and supermodular on the domain of interest.Additionally, under the condition that P_i is a singleton (as in the assumptions of Lemma <ref>), we can recover the resultant utility maximizer Y_i(y,q) is unique so long as q ≠λ F(0) for every λ∈_++.Compare this utility function to the welfare maximizing utility, i.e., when a regulator wishes to maximize the welfare of the aggregate system of financial institutions as measured by ∑_i = 1^n u_i(e_i;y_-i,q) over all firm holdings e_i simultaneously.With this utility function, since firms are purchasing asset k if q_k < F_k(0) and selling if q_k > F_k(0), the Nash equilibrium clearing prices will correspond with the welfare maximizing clearing prices since firm behavior will be identical under both considerations.§ NUMERICAL CASE STUDIES In this section we will consider two numerical implementations of the financial contagion framework considered in the prior sections.The first study is a toy implementation of the example payment utility and utility functions presented in Section <ref> to demonstrate how they affect the attained equilibrium prices.We will then implement a brief study of the European financial system, calibrated with data from the European Banking Authority, to compare the equilibrium solution under a single currency (as in <cit.>) to the counterfactual under which the Greek drachma were reinstated during an actualized Grexit event. 
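The case studies below repeatedly require clearing solutions y^*(q) for a fixed price. As a reference point, here is a minimal sketch (Python/NumPy) of the Picard iteration y ← Y(y,q) from the clearing mechanism, specialized, for brevity, to proportional payments (μ = 0) with no discretionary trading; the general bilevel mechanism would replace the simple payment rule used here.

```python
import numpy as np

def clearing_prorata(x, L, q, tol=1e-12, max_iter=10_000):
    """Greatest clearing payments under fixed prices q in the special case of
    pro-rata payments across all assets and no trading beyond what is received.
    x: (n, m) endowments; L: (n, n, m) nominal liabilities; q: (m,) prices."""
    n, m = x.shape
    pbar = L.sum(axis=1)                                    # (n, m) total obligations
    with np.errstate(invalid="ignore", divide="ignore"):
        a = np.where(pbar[:, None, :] > 0, L / pbar[:, None, :], 0.0)  # relative liabilities
    p = pbar.copy()                                         # start from full payment
    for _ in range(max_iter):
        recv = np.einsum("jik,jk->ik", a, p)                # payments received, per asset
        wealth = (x + recv) @ q                             # mark-to-market wealth, (n,)
        owed = pbar @ q
        theta = np.minimum(1.0, wealth / np.maximum(owed, 1e-12))
        p_new = theta[:, None] * pbar                       # insolvent firms pay pro rata
        if np.max(np.abs(p_new - p)) < tol:
            break
        p = p_new
    return p

# Tiny demo: the two-bank, two-asset network of the earlier example at q = (1, 1).
L = np.zeros((2, 2, 2)); L[0, 1, 0] = 1.0; L[1, 0, 1] = 1.0
x = np.array([[0.0, 2.0], [2.0, 0.0]])
print(clearing_prorata(x, L, q=np.array([1.0, 1.0])))       # both banks pay in full here
```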
For a first illustrative example, consider a network with two currencies.As assumed throughout this work, let the first currency act as the numéraire asset, i.e., F_1 ≡ 1.To model the market impacts, let the inverse demand function for the second currency be given byF_2(z)= f̂(z_2)ifz_2 ≥ 0 1/f̂(α^-1(-z_2)) ifz_2 < 0 for f̂(z) = 3tan^-1(-z) + 2π/2πwhere α(z_2) := z_2 f̂(z_2) is the number of units of the first currency being purchased when z_2 ∈ units of the second currency are being sold.See Remark <ref> for a discussion of the symmetry argument inherent in this choice of inverse demand function.We will consider a system with 20 firms and a societal node, as introduced in Corollary <ref>.As this is an illustrative example only, we will consider a single realization of a random financial network.In both currencies, independently, each pair of firms has a 25% probability of having a connection of size 1.Additionally, every firm owes 1 of both currencies to the external node, which owes nothing back into the system.All firms begin with i.i.d. random endowments uniformly chosen between 0 and 20, which is then split evenly between the two currencies. For demonstration purposes to show how the different regulation schemes and utility functions alter the equilibrium of the system as the initial shock q_0 varies, we consider the surplus, priority, and proportional regulation schemes (Examples <ref> and <ref> with μ = 2 and μ = 0 respectively) under the minimum trading utility function (Example <ref>) and value maximizing utility function (Example <ref>).In each scenario the societal node will follow the minimum trading utility function. Additionally, for illustration of the set of clearing solutions, we will also show all equilibrium prices without any initial shock, i.e. q_0 = F(0) = (1,1)^, for the different regulation schemes and utility functions. Figure <ref> displays the prices attained from the tâtonnement process q^*_2(q_0) given an initial price of q_0 = (1,q_0,2)^ both with and without market impacts.We note that the attained process need not be continuous in the initial price q_0.Note that only a single curve for the value maximizing utility is shown as all three payment utility schemes produce virtually indistinguishable curves under that utility.Further note that under the value maximizing utility the unique equilibrium price is given by the unshocked price F(0) = (1,1)^ for nearly any initial price q_0 and mitigates the shock for any initial price.Additionally, the value maximizing utility produces a continuous equilibrium response as a function of the shocked price q_0. We would also like to point out that the attained clearing prices need not be continuous (as also demonstrated in Example <ref>).It appears that all three regulatory environments jump equilibria at low values of q_0,2.Specifically, under the minimal trading utility function, the surplus payment utility function jumps equilibria at q_0,2^s ≈ 0.605 from q_2^*(q_0^s+ϵ) ≈ 2.695 to q_2^*(q_0^s-ϵ) ≈ 0.364.Similarly, the priority payment utility function jumps equilibria at q_0,2^μ=2≈ 1.413 from q_2^*(q_0^μ=2+ϵ) ≈ 2.956 to q_2^*(q_0^μ=2-ϵ) ≈ 0.285.Finally, the proportional payment utility function jumps equilibria at q_0,2^μ=0≈ 0.875 from q_2^*(q_0^μ=0+ϵ) ≈ 3.001 to q_2^*(q_0^μ=0-ϵ) ≈ 0.331. 
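For reference, the random network and the inverse demand used in this illustrative example can be generated along the following lines (Python/NumPy sketch; the seed, and hence the realization, is arbitrary, connections are drawn per ordered pair of firms, and α^-1 is inverted numerically by bisection):

```python
import numpy as np

rng = np.random.default_rng(0)                 # arbitrary seed: one realization
n, m = 20, 2

# Interbank liabilities: each ordered pair connected w.p. 0.25, size 1, per currency.
L = (rng.random((n, n, m)) < 0.25).astype(float)
for k in range(m):
    np.fill_diagonal(L[:, :, k], 0.0)          # no self-obligations
L_society = np.ones((n, m))                    # every firm owes 1 of each currency to node 0

# Endowments: U(0, 20) in total, split evenly between the two currencies.
total = rng.uniform(0.0, 20.0, size=n)
x = np.column_stack([total / 2, total / 2])

# Inverse demand for currency 2: f_hat on sales, symmetric extension on purchases.
f_hat = lambda z: (3 * np.arctan(-z) + 2 * np.pi) / (2 * np.pi)
alpha = lambda z: z * f_hat(z)                 # strictly increasing on [0, inf)

def alpha_inv(w, hi=1.0):
    while alpha(hi) < w:                       # bracket the root, then bisect
        hi *= 2.0
    lo = 0.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if alpha(mid) < w else (lo, mid)
    return 0.5 * (lo + hi)

def F2(z2):
    return f_hat(z2) if z2 >= 0 else 1.0 / f_hat(alpha_inv(-z2))

print(F2(5.0), F2(-5.0))                       # price when selling vs buying 5 units
```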
For demonstrative purposes, we display the set of all clearing prices and defaulting banks under the four scenarios in Table <ref> under no initial shock, i.e., q_0 = F(0).We would like to point out that the priority regulatory scheme, under the minimum trading utility function, results in a unique equilibrium in this setting, with the equilibrium price being near the lower bound q_2 = 1/4.This is due to the forced liquidation of currency 2 for currency 1, which creates a significant asymmetry in the trading not present in, e.g., the proportional regulation scheme.We further note that the surplus regulation scheme, though symmetric in construction, is asymmetric in equilibrium.This demonstrates the concept that the particular realization of the network plays a significant role in directing symmetry (i.e., higher liabilities or lower assets in one currency will skew the equilibrium results).Finally, the proportional regulation scheme results in multiple clearing solutions thus providing a counterexample to uniqueness of the joint clearing holdings and prices.Notably, choosing different regulatory and utility functions causes different firms to default in equilibrium.Let us now consider an example calibrated from data.We will calibrate a network model to the 2011 European banking dataset from EBA that has been used in prior studies (e.g., <cit.>) under the financial contagion framework of <cit.>.Though we utilize this dataset to have a more realistic network, the approach for calibration still requires heuristics, as such this example is still for demonstrative purposes only.Due to complications with the calibration methodology, we only consider 87 of the 90 institutions. DE029, LU45, and SI058 were not included in this analysis.As a stylized bank balance sheet, we will consider two categories of assets: interbank assets ∑_j = 1^n L_ji and endowments x_i.We will additionally consider three categories of liabilities: interbank liabilities ∑_j = 1^n L_ij, external liabilities L_i0, and capital c_i.First, we will briefly discuss how to calibrate the <cit.> model, i.e., when all values are denominated in the numéraire asset only.The EBA dataset provides information on the total assets T_i, capital c_i, and interbank liabilities ∑_j = 1^n L_ij.To determine the variables necessary for the <cit.> model we will assume, as in <cit.>, that the interbank liabilities equal interbank assets ∑_j = 1^n L_ij = ∑_j = 1^n L_ji. We will, however, modify this condition slightly as discussed in <cit.>; we will perturb the interbank assets a small amount to satisfy a technical condition of <cit.>. 
Additionally, we assume that all assets not a part of the interbank assets are endowments and all liabilities not capital or owed to other banks are owed to the societal node 0.Under these assumptions, given the provided values, we determine the remainder of the stylized balance sheet viax_i:= T_i - ∑_j = 1^n L_ij,L_i0 := T_i - ∑_j = 1^n L_ij - c_i,and p̅_i := L_i0 + ∑_j = 1^n L_ij.Under this calibration, the net worth of firm i is equal to its capital, i.e., c_i = T_i - p̅_i.In order to complete the <cit.> system, we need the full nominal liabilities matrix L.This, however, is not provided in the EBA dataset.Thus we will utilize the methodology of <cit.> in order to estimate one such matrix consistent with the asset and liability data discussed above.We consider a single realization of the nominal liabilities matrix L given the algorithm of <cit.> with parameters p = 0.5, thinning = 10^4, n_burn-in = 10^9, and λ = p n (n-1)/∑_i = 1^n ∑_j = 1^n L_ij≈ 0.00122. First, as a baseline model, we run the financial contagion model of <cit.> to determine the “factual” response in the scenario that Greece remains in the Eurozone and thus only a single currency is utilized.In this scenario, assuming no external stresses, we find that none of the 87 banks would default on its obligations; this comports with reality since none of the firms failed in late 2011.Additionally, as this model only considers a single asset, there are no fire sales evidenced either. Now, we wish to consider the counterfactual scenario in which Greece were not a member of the Eurozone and had its own currency, the drachma, once more (i.e., the Grexit scenario).In order to update the calibration to include both the euro and drachma, we need to consider alsothe total (non-sovereign) exposures that each bank has to Greece GE_i.Sovereign exposures to Greece were orders of magnitude smaller and thus their inclusion would not have significantly affected the final model.For notational simplicity let N = {1,2,...,87} be the set of all banks and G ⊆ N be the set of the six Greek banks in the EBA dataset.Then using the calibrated assets and liabilities to the <cit.> framework (henceforth denoted x^EN and L^EN) we update the assets and liabilities to bex_i^1:= x_i^EN - GE_i,x_i^2:= GE_i∀i ∈ N \ Gx_i^1:= 0,x_i^2:= x_i^EN ∀i ∈ GL_ij^1:= L_ij^EN,L_ij^2:= 0∀i ∈ N \ G∀ j ∈ N ∪{0}L_ij^1:= L_ij^EN,L_ij^2:= 0∀i ∈ N∀ j ∈ N \ GL_ij^1:= 0,L_ij^2:= L_ij^EN ∀i ∈ G∀ j ∈ G ∪{0}.where the first asset is the euro and the second is the drachma.That is, the assets of the non-Greek banks are denominated in drachmas for the amount that was exposed to Greece and the rest remains denominated in euros.In contrast, all endowments held by Greek banks are re-denominated in the drachma.Additionally, obligations from a Eurozone bank to the societal node is denominated in the euro and all obligations from a Greek bank to the societal node is denominated in drachmas.Finally, all interbank liabilities between two Greek banks is re-denominated in drachmas, otherwise all interbank liabilities remain in euros.In incorporating two assets we need to discuss the inverse demand function.Consider an inverse demand function of the form of that in Example <ref>.That is, let the first asset (euro) acts as the numéraire asset, i.e., F_1 ≡ 1, and let the inverse demand function for the second asset (drachma) be given byF_2(z)= f̂(z_2)ifz_2 ≥ 0 1/f̂(α^-1(-z_2)) ifz_2 < 0 for f̂(z) = 4tan^-1(-bz) + 3π/3πwhere b ≥ 0 is the market impact parameter and α(z_2) := z_2 f̂(z_2) is the number 
of units of euros being purchased when z_2 ∈ units of drachmas are being sold.See Remark <ref> for a discussion of the symmetry argument inherent in this choice of inverse demand function.We will first consider setting the market impacts to a fixed level, i.e. b = 10^-4, then considering the effects of changing the price impacts.Finally, we need to consider some payment utility and utility functions for this setting.We will consider that all firms will follow a priority regulation scheme (Example <ref> with μ = 2) in which they prioritize obligations in the local currency due to the preferences of the regulators.That is, Eurozone banks will prioritize payments in the euro and Greek banks will prioritize payments in the drachma.Additionally, we will assume that all firms (and the societal node) will follow the minimal trading utility function (Example <ref>).This follows from the presupposition that the initial exchange rate (without loss of generality set to F(0) = (1,1)^) would not be trusted by the various institutions due to fear of fire sales of the new drachma.Therefore, due to uncertainty in the “true” exchange rate, firms will be conservative and do as little trading as necessary. If instead we supposed that firms would want to maximize their assets in the euro as a flight-to-stability (Example <ref>) then we would see a total collapse of the drachma value but similar final results.Now, with the multiple currency network calibrated to the setting that Greece was forced out of the Eurozone, we can simulate this systemic event.For this case study we will only consider the setting without an initial shock, i.e. q_0 = F(0) and γ_0 = 0, and with price impacts given by b = 10^-4.Due to the choice of priority regulation scheme, the response to external stresses to the value of the drachma (i.e., with q_0,2 < 1) would only marginally impact the final equilibrium. Figure <ref> displays the updated prices F(∑_i = 1^n (x_i + [p̅_i ∧ y_i^*(q)] - y_i^*(q))) given an initial price of q = (1,q_2)^ after one iteration of the fixed point problem.We note that the resultant curve is continuous because of thecontinuity of the both the inverse demand function and the unique holdings y^*(q) (see the proof of Corollary <ref>).The marked point on the curve shows the equilibrium price of q^* = (1,0.44331), i.e., after clearing the value of the drachma would fall to 44.331% of its former value compared to the euro.At this equilibrium price, we found that three banks would fail, though none of these banks were situated in Greece.This is due to a large amount of Greek bank liabilities (interbank and to society) being drachma denominated and thus insulated from the fall in drachma value.Though the greater Greek economy would likely suffer under such a large currency move.However we found that both banks from Cyprus included in this dataset (Marfin Popular Bank and the Bank of Cyprus) and the Banco Comercial Portugues would fail due to large exposures to Greece.We would like to note that all three of these defaulting banks received a bailout or government intervention in either 2012 or 2013.While the Cypriot banks had, in relative terms, an order of magnitude more exposures to Greece than any other non-Greek bank, the Banco Comercial Portugues had the third highest relative exposure to Greece. 
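For reference, the balance-sheet calibration and the Grexit re-denomination described above can be sketched as follows (Python/NumPy; T, c, L_EN, GE, and the Greek index set are placeholders for the EBA-derived inputs, the matrix-estimation step of <cit.> is not reproduced, and the re-denomination of obligations to the societal node is omitted for brevity):

```python
import numpy as np

def calibrate_single_currency(T, c, L_EN):
    """Stylized balance sheets from total assets T, capital c, liabilities L_EN (n x n)."""
    interbank_out = L_EN.sum(axis=1)
    x = T - interbank_out                       # endowments: all non-interbank assets
    L_society = T - interbank_out - c           # external liabilities, owed to node 0
    pbar = L_society + interbank_out
    return x, L_society, pbar

def grexit_redenominate(x_EN, L_EN, GE, greek):
    """Split the one-currency calibration into euro (k=0) and drachma (k=1)."""
    n = len(x_EN)
    is_gr = np.zeros(n, dtype=bool); is_gr[list(greek)] = True
    x = np.zeros((n, 2))
    x[~is_gr, 0] = x_EN[~is_gr] - GE[~is_gr]    # non-Greek: Greek exposure in drachma
    x[~is_gr, 1] = GE[~is_gr]
    x[is_gr, 1] = x_EN[is_gr]                   # Greek endowments fully in drachma
    L = np.zeros((n, n, 2))
    L[:, :, 0] = L_EN                           # default: obligations stay in euro
    gg = np.ix_(is_gr.nonzero()[0], is_gr.nonzero()[0])
    L[:, :, 1][gg] = L_EN[gg]                   # Greek-to-Greek moves to the drachma
    L[:, :, 0][gg] = 0.0
    return x, L

# Toy placeholder inputs (not EBA data): 4 banks, banks 2 and 3 "Greek".
T = np.array([10.0, 8.0, 6.0, 5.0]); c = 0.1 * T
L_EN = np.array([[0,1,1,0],[1,0,0,1],[0,1,0,1],[1,0,1,0]], dtype=float)
x_EN, L0, pbar = calibrate_single_currency(T, c, L_EN)
x, L = grexit_redenominate(x_EN, L_EN, GE=np.array([1.0, 0.5, 0.0, 0.0]), greek=[2, 3])
print(x.round(2))
```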
Thus we are able to see that a crisis focused on Greece, with an endogenous stress to the rest of the Eurozone, is able to spread to institutions in the rest of the Eurozone.In particular, if our study had coupled the Grexit event with some exogenous stress to the different banks balance sheets, as would likely occur in such an event, we would find a larger number of defaults in both the Eurozone and Greece.We wish to finish this example by considering the effects of changing the price impact parameter b which previously was fixed at 10^-4.All other parameters are kept constant from the previous considerations.Notably this includes the assumption that there is no initial crisis and all price movements are the result of the actions of the firms under consideration.Figure <ref> displays the attained equilibrium prices under changes to the price impact parameter b. Notably even a small level of price impacts causes a large drop in the value of the Greek drachma in relation to the euro.This provides us with a level of confidence in the determined results we found for the price impact b = 10^-4 even though this inverse demand function was not calibrated to data in the manner that the balance sheets were.§ CONCLUSION In this paper we considered an extension of the financial contagion model of EN01 to allow for obligations in more than one asset.In doing so, we have written a mathematical model that incorporates more realistic elements to financial contagion including obligations in multiple currencies and allowing for solvent firms to purchase or sell assets beyond those required to satisfy obligations (as is generally assumed in, e.g., <cit.>). Under markets without price impacts, we proved the existence and uniqueness of the equilibrium portfolio holdings in which each firm is a utility maximizer.We then generalized this result to prove existence of clearing prices in markets with price impacts.Additionally, we consider the tâtonnement process to determine which equilibria to which the market would converge in the 2 asset setting. Numerical case studies were undertaken to demonstrate the utility of the proposed model and how the choice of payment regulatory framework may impact, e.g., the realized exchange rates.In particular, we consider a stylized example of the European, and specifically Greek, financial system under the counterfactual condition that the Greek drachma were reinstated.§ PROOFS FOR SECTION <REF> Fix q ∈^m_++. 
* Define G_i: [0,p̅_i] → to be the linear mapping G_i(p_i) = q^ p_i for any p_i ∈ [0,p̅_i].Noting that the (convex) budget constraint of P_i(y,q) is equivalently given by p_i ∈ G_i^-1((-∞,∑_k = 1^m q_k (x_i^k + ∑_j = 1^n a_ji^k [p̅_j^k ∧ y_j^k])]).Further, the upper bound from the budget constraint is nondecreasing in y.Utilizing the strict concavity of the payment utility function h_i to guarantee the uniqueness of the maximizer P_i(y,q) for any y ∈^n × m_+, it follows that P_i(·,q) is nondecreasing by Corollary 2(ii) of <cit.>.Now we wish to show that Y_i(·,q) is nondecreasing as well.That is, Y_i(y,q) ≤ Y_i(y',q) for any y,y' ∈^n × m_+ with y ≤ y'.Take y,y' ∈^n × m_+ with y ≤ y'.If P_i(y,q) ≠p̅_i then, by construction and the monotonicity of the payment function P_i(·,q), we findY_i(y,q) = P_i(y,q) ≤ P_i(y',q) ≤ Y_i(y',q).Let P_i(y,q) = p̅_i (and thus P_i(y',q) = p̅_i as well by the monotonicity of P_i(·,q)).Now define G_i: (p̅_i + ^m_+) → as the same linear map as above, i.e., G_i(e_i) = q^ e_i.As with the payment function, the feasible regions for Y_i(y,q) and Y_i(y',q) can, respectively, be provided byG_i^-1 ((-∞,∑_k = 1^m q_k (x_i^k + ∑_j = 1^n a_ji^k [p̅_j^k ∧ y_j^k])]),G_i^-1 ((-∞,∑_k = 1^m q_k (x_i^k + ∑_j = 1^n a_ji^k [p̅_j^k ∧ y_j'^k])]).Again, the upper bound for this interval is nondecreasing in the portfolio holdings parameter.Under the assumption that Y_i(y,q) and Y_i(y',q) are unique maximizers, we apply Corollary 2(ii) of <cit.> to find Y_i(y,q) ≤ Y_i(y',q).Finally, we apply the Tarski fixed point theorem (see, e.g., Theorem 11.E of <cit.>) to the mapping Y(·,q): ^n × m_+ →^n × m_+ to recover the result.* From (<ref>) we know that (y_i^↑(q)-p̅_i)^+ ≥ (y_i^↓(q)-p̅_i)^+.By way of contradiction, assume there exists an institution i and asset k such that (y_i^↑ k(q)-p̅_i^k)^+ > (y_i^↓ k(q)-p̅_i^k)^+.This immediately implies ∑_i = 1^n q^(y_i^↑(q)-p̅_i)^+ > ∑_i = 1^n q^(y_i^↓(q)-p̅_i)^+ by q_k > 0 for every asset k.However,∑_i = 1^n q^(y_i^↑(q) - p̅_i)^+= ∑_i = 1^n q^(y_i^↑(q) - [p̅_i ∧ y_i^↑(q)])= ∑_i = 1^n (q^x_i + ∑_k = 1^m q_k ∑_j = 1^n a_ji^k [p̅_j^k ∧ y_j^↑ k(q)] - q^[p̅_i ∧ y_i^↑(q)])= ∑_i = 1^n q^x_i + ∑_k = 1^n q_k ∑_j = 1^n [p̅_j^k ∧ y_j^↑ k(q)] ∑_i = 1^n a_ji^k - ∑_i = 1^n q^[p̅_i ∧ y_i^↑(q)]= ∑_i = 1^n q^x_i + ∑_j = 1^n q^[p̅_j ∧ y_j^↑(q)] - ∑_i = 1^n q^[p̅_i ∧ y_i^↑(q)]= ∑_i = 1^n q^x_i = ∑_i = 1^n q^(y_i^↓(q) - p̅_i)^+where the last equality follows by applying the same operations to y_i^↓ in the reverse order. This provides our contradiction and thus (y_i^↑(q)-p̅_i)^+ = (y_i^↓(q)-p̅_i)^+ for every bank i. First, if bank i has positive equity, then by Lemma <ref>(<ref>) it immediately follows that y_i^↑(q) = y_i^↓(q). In particular this must be true for node 0 as it has positive equity by definition. For notational purposes, let E^+ := {i ∈{1,2,...,n}| y_i^↓(q) ≥p̅_i} be the set of firms with positive equity. 
Let us assume there exists some firm i ∉ E^+ and asset k such that y_i^↑k(q) > y_i^↓k(q); then immediately the mark-to-market value of the equity of the societal node 0 satisfies
q^⊤ y_0^↑(q) = ∑_k = 1^m q_k ∑_j ∈ E^+ a_j0^k p̅_j^k + ∑_k = 1^m q_k ∑_j ∉ E^+ a_j0^k y_j^↑k(q) > ∑_k = 1^m q_k ∑_j ∈ E^+ a_j0^k p̅_j^k + ∑_k = 1^m q_k ∑_j ∉ E^+ a_j0^k y_j^↓k(q) = q^⊤ y_0^↓(q).
But this is a contradiction to y_0^↑(q) = y_0^↓(q). First, we wish to prove that, given uniqueness (guaranteed by Corollary <ref>) of the portfolio holdings under a fixed price q, the equilibrium holdings y^*: [q̲,q̄] → ^n × m_+ are continuous. Theorem A.2 of <cit.> guarantees that the graph of y^* is closed in the product topology. Now we note that the range space that the holdings can attain is, in fact, the convex and compact set ∏_i = 1^n E̅_i, where
E̅_i = ∏_k = 1^m [0, (1/q_k) ∑_l = 1^m q_l (x_i^l + ∑_j = 1^n a_ji^l p̅_j^l)].
Therefore, by the closed graph theorem (see, e.g., <cit.>), continuity is proven. This allows us to directly apply the Brouwer fixed point theorem (see, e.g., <cit.>) to find an equilibrium price
q^* = F(∑_i = 1^n (x_i + [p̅_i ∧ y_i^*(q^*)] - y_i^*(q^*))).
Define α: [q̲,q̄] → ^m by α(q) := ∑_i = 1^n (x_i + [p̅_i ∧ y_i(q)] - y_i(q)). Additionally, consider V: [q̲,q̄] → to be provided by
V(q) := γ_0^⊤ q + A(q) - ∑_k = 1^m ∫_1^q_k f_k^-1(p) dp,
where A: [q̲,q̄] → is defined as the multivariate function with gradient α. For the moment we assume that A exists; at the end of this proof we show this is the case in the restricted m = 2 asset setting. With this construction we find that (d/dt)V(q_t) is negative semidefinite for any trajectory q_t of the tâtonnement process, i.e.,
(d/dt)V(q_t) = (γ_0 + α(q_t) - F^-1(q_t))^⊤ (F(γ_0 + α(q_t)) - q_t) ≤ 0.
This is because [γ_0 + α(q) - F^-1(q)]_k ≥ 0 if and only if F_k(γ_0 + α(q)) ≤ q_k, since F_k(γ_0 + α(q)) = F_k(γ_0 + α(q) - F^-1(q) + F^-1(q)) and the inverse demand function is monotone. In fact, (d/dt)V(q_t) = 0 if and only if q_t is an equilibrium price, by the same preceding argument. By LaSalle's invariance principle (see, e.g., <cit.>), the set of accumulation points of any trajectory is equivalent to the set of equilibrium prices. Further, since q_t ∈ [q̲,q̄] for every time t ≥ 0, the tâtonnement process approaches the set of clearing prices as t → ∞. Finally, we wish to guarantee the existence of the function A: [q̲,q̄] → whose gradient is equal to α. For this purpose we restrict ourselves to the m = 2 asset setting since, functionally, we can consider our input q through its second argument only (due to our choice of numéraire). That is, we can consider instead V(q_2) = γ_0,2 q_2 + ∫_1^q_2 α_2(p) dp - ∫_1^q_2 f_2^-1(p) dp.
Physical Research Laboratory, Ahmedabad - 380009, Gujarat, India
Indian Institute of Technology Gandhinagar, Gandhinagar - 382424, Gujarat, India
Physical Research Laboratory, Ahmedabad - 380009, Gujarat, India
Physical Research Laboratory, Ahmedabad - 380009, Gujarat, India
We examine the dynamics associated with the miscibility-immiscibility transition of trapped two-component Bose-Einstein condensates (TBECs) of dilute atomic gases in the presence of vortices. In particular, we consider TBECs of Rb hyperfine states, and the Rb-Cs mixture. There is an enhancement of the phase-separation when the vortex is present in both condensates. In the case of a singly charged vortex in only one of the condensates, there is enhancement when the vortex is present in the species which occupies the edges at phase-separation, but suppression occurs when the vortex is in the species which occupies the core region. To examine the role of the vortex, we quench the inter-species interactions to propel the TBEC from the miscible to the immiscible phase, and use the time dependent Gross-Pitaevskii equation to probe the phenomenon of phase-separation. We also examine the effect of a higher charged vortex.
67.85.-d, 67.40.Vs, 67.57.Fg, 67.57.De
Dynamics of phase separation in two species Bose-Einstein condensates with vortices
D. Angom
December 30, 2023
===========================================================================================
§ INTRODUCTION
The miscibility-immiscibility phase transition in a TBEC of dilute atomic gases is a novel quantum phenomenon. It is also referred to as phase-separation, and provides a scheme to understand the physics governing a wide range of processes such as pattern formation, nonlinear excitations, and dynamical and interface instabilities <cit.>. Furthermore, it is key to gaining insights into phenomena such as quantum phase transitions and criticality, symmetry breaking phenomena, the Kibble-Zurek mechanism <cit.>, collective modes <cit.>, etc. In experiments, TBECs consisting of two different atomic species <cit.>, different isotopes of the same atomic species <cit.>, or two different hyperfine spin states <cit.> have been realized. During the past two decades numerous theoretical studies have examined the static <cit.> and dynamical properties of phase-separation <cit.>. From these studies it is clear that in the Thomas-Fermi (TF) limit at zero temperature the relative values of the intra- and inter-species interactions determine the miscibility or immiscibility of the condensates. The condition for phase-separation is the inequality g_12 > √(g_11g_22), where g_12 is the inter-species interaction strength, and g_kk is the intra-species interaction of the kth species. Based on this, the TBEC can be driven from one phase to the other by tuning the interaction strengths. However, an important point to be noted is that the derivation of the inequality assumes the TBEC to be in the ground state, that is, in the absence of topological defects and impurities in the condensates. This aspect requires due investigation, as there can be deviations from the inequality when vortices are present in the condensates. The effects of finite temperature on the dynamics of the miscibility-immiscibility phase-separation of a TBEC are a topic of recent interest <cit.>. In addition, suppression of the phase-separation of a TBEC at finite temperatures has been reported <cit.>.
It has also been shown in theoretical investigations that the inclusion of kinetic energy terms in the total energy expression of a TBEC results in partial or complete suppression of the phase-separation <cit.>. This is to be contrasted with the TF approximation, where the kinetic energy term is neglected. In this work we theoretically investigate the effect of vortices on the dynamics of phase-separation in TBECs. An obvious way in which the vortices can influence the dynamics of phase-separation is through the centrifugal force arising from the associated superfluid flow. Thus, depending on the species in which the vortex is introduced, there can either be enhancement or suppression of the phase-separation. In terms of experimental realizations, a vortex in TBECs may be produced using the method of phase imprinting <cit.>, stirring of the condensates by Gauss-Laguerre laser beams <cit.>, rotating the trapping potential <cit.>, through the evaporative cooling process <cit.>, or by interconversion between the two components in the case of a TBEC with two hyperfine states <cit.>. Other than their effects on the dynamics of phase separation, vortices in condensates are topological defects which are an essential ingredient of several novel phenomena. For the present work we examine the effects when a vortex is present in one of the condensate species of a TBEC, as well as when vortices are present in both species. In addition, we also investigate the effects of the charge of the vortex, and it is expected that higher charged vortices shall have a larger effect. However, equally important are the dynamics and stability associated with a vortex of higher charge or vorticity. The paper is organized as follows. In Sec. <ref> we formulate the dynamics of phase-separation of a TBEC at zero temperature, in the Gross-Pitaevskii framework, and discuss the effects of the centrifugal force associated with vortex induced superfluid flows in the condensates. Sec. <ref> provides a brief description of the numerical schemes used to probe the phenomenon of phase-separation and to investigate the dynamics associated with it. In Sec. <ref>, we present the results describing the vortex induced enhancement or suppression of the miscibility-immiscibility transition of the TBECs, depending on the species in which the vortex is present. We also report the results from our further investigations of the dynamics in the presence of a higher charged vortex. We conclude with the key highlights of our findings in Sec. <ref>.
§ THEORETICAL METHODS
In the mean field approximation, the time evolution of the order parameters of an interacting, trapped TBEC system at T = 0 K is governed by a pair of coupled Gross-Pitaevskii (GP) equations
[-ħ^2/2m_k ∇^2 + V_k(𝐫) + ∑_j = 1^2 g_kj|Ψ_j(𝐫, t)|^2] Ψ_k(𝐫, t) = iħ ∂Ψ_k(𝐫, t)/∂t,
where k = 1, 2 is the species index, Ψ_k is the condensate wave function of the kth species, and V_k(𝐫) is the trapping potential. The intra- and inter-species interaction strengths are given by g_kk = 4πħ^2a_kk/m_k and g_kj = 2πħ^2a_kj/m_kj, respectively. Here, a_kk and a_kj are the intra- and inter-species s-wave scattering lengths of the atoms, m_k is the mass of the kth species, and m_kj = m_km_j/(m_k+m_j) is the reduced mass.
The order parameters or wave functions of each of the species are normalized to the total number of atoms in the condensates, N_k = ∫ d𝐫 |Ψ_k(𝐫)|^2. With these considerations and definitions, the total energy of the TBEC system is
E = ∫ d𝐫 [∑_k = 1^2 (ħ^2/2m_k |∇Ψ_k(𝐫)|^2 + V_k(𝐫)|Ψ_k(𝐫)|^2 + g_kk/2 |Ψ_k(𝐫)|^4) + g_12|Ψ_1(𝐫)|^2|Ψ_2(𝐫)|^2],
where V_k is taken to be a harmonic oscillator potential of the form V_k(𝐫) = V_k(x,y,z) = (1/2)m_kω_k^2(x^2 + α_k^2y^2 + λ_k^2z^2). Here, ω_k is the frequency of the trap along the x direction, and α_k, λ_k are the anisotropy parameters. For the present study, we consider the atoms of both species to be trapped in the same potential, that is, ω_1 = ω_2 = ω_x, α_1 = α_2 = α = ω_y/ω_x, and λ_1 = λ_2 = λ = ω_z/ω_x. Furthermore, we define the oscillator length a_osc = √(ħ/(m_1ω_x)) and the energy quantum ħω_x, which correspond to convenient length and energy scales of the system. To render the coupled GP equations in dimensionless form, we scale the coordinates as x̃ = x/a_osc, ỹ = y/a_osc, z̃ = z/a_osc, time as t̃ = tω_x, and the total energy as Ẽ = E/(ħω_x). The order parameters then follow the transformations Φ_k(x̃,ỹ,z̃) = √(a_osc^3/N_k) Ψ_k(x,y,z). Defining m_r = m_1/m_2, the total energy in Eq. (<ref>) in dimensionless form is
Ẽ = ∫ dx̃ dỹ dz̃ {(N_1/2)[|∇Φ_1|^2 + (x̃^2 + α^2ỹ^2 + λ^2z̃^2)|Φ_1|^2 + N_1g̃_11|Φ_1|^4] + (N_2/2)[m_r|∇Φ_2|^2 + (1/m_r)(x̃^2 + α^2ỹ^2 + λ^2z̃^2)|Φ_2|^2 + N_2g̃_22|Φ_2|^4] + N_1N_2g̃_12|Φ_1|^2|Φ_2|^2},
where g̃_11 = 4π a_11/a_osc, g̃_22 = m_r 4π a_22/a_osc and g̃_12 = 2π(m_1 + m_2)a_12/(m_2a_osc). For notational convenience, hereafter we drop the tilde from the transformed quantities. The scaled coupled GP equations can then be expressed as
[-1/2 ∇^2 + 1/2(x^2 + α^2y^2 + λ^2z^2) + ∑_j = 1^2 G_1j|Φ_j(x, y, z, t)|^2] Φ_1(x, y, z, t) = i ∂Φ_1(x, y, z, t)/∂t,
and
[-m_r/2 ∇^2 + 1/(2m_r)(x^2 + α^2y^2 + λ^2z^2) + ∑_j = 1^2 G_2j|Φ_j(x, y, z, t)|^2] Φ_2(x, y, z, t) = i ∂Φ_2(x, y, z, t)/∂t,
where G_11 = N_1 4π a_11/a_osc, G_22 = m_r N_2 4π a_22/a_osc and G_kj = N_j 2π(m_1 + m_2)a_kj/(m_2 a_osc). The TBEC system in our study is confined in a quasi-two-dimensional (quasi-2D) harmonic trap. This is achieved by considering the axial frequency of the trap, ω_z, to be much larger than the frequencies along the x and y directions; therefore λ ≫ 1, and to maintain radial symmetry we take α = 1. This condition allows us to factorize the order parameters in the following form Φ_k(x, y, z, t) = ψ_k(x, y, t)χ_k(z), where χ_k(z) are the normalized ground states of the condensates along the axial direction. Substituting Eqns. (<ref>) in Eqns. (<ref>), and then integrating over χ_k(z), we obtain the following scaled coupled GP equations in 2D
[-1/2 ∇_⊥^2 + 1/2(x^2 + α^2y^2) + ∑_j = 1^2 𝒢_1j|ψ_j(x, y, t)|^2] ψ_1(x, y, t) = i ∂ψ_1(x, y, t)/∂t,
and
[-m_r/2 ∇_⊥^2 + 1/(2m_r)(x^2 + α^2y^2) + ∑_j = 1^2 𝒢_2j|ψ_j(x, y, t)|^2] ψ_2(x, y, t) = i ∂ψ_2(x, y, t)/∂t,
where ∇_⊥^2 = ∂_x^2 + ∂_y^2, 𝒢_11 = 2N_1√(2πλ)a_11/a_osc, 𝒢_22 = m_r 2N_2√(2πλ)a_22/a_osc and 𝒢_kj = N_j(m_1 + m_2)√(2πλ)a_kj/(m_2 a_osc). With these definitions the time independent coupled GP equations are
[-1/2 ∇_⊥^2 + 1/2(x^2 + α^2y^2) + ∑_j = 1^2 𝒢_1j|ψ_j(x, y)|^2] ψ_1(x, y) = μ_1 ψ_1(x, y),
and
[-m_r/2 ∇_⊥^2 + 1/(2m_r)(x^2 + α^2y^2) + ∑_j = 1^2 𝒢_2j|ψ_j(x, y)|^2] ψ_2(x, y) = μ_2 ψ_2(x, y),
where μ_k is the chemical potential of the kth species condensate.
§.§ Phase-separation
In the Thomas-Fermi (TF) limit <cit.>, depending on the interaction strengths, the system can exhibit two distinct phases, miscible or immiscible (phase-separated) <cit.>.
In the miscible phase, the condensates overlap with each other, whereas they are spatially separated in the immiscible phase. A measure to characterize these phases is the overlap integral <cit.>

Λ = [ ∫∫ dx dy n_1(x, y) n_2(x,y) ]^2/[ ∫∫ dx dy n_1^2(x, y) ] [ ∫∫ dx dy n_2^2(x, y) ],

where n_k(x, y) = |ψ_k(x, y)|^2 is the density of the kth condensate species. A value of Λ = 1 implies complete overlap between the condensates, i.e., the two species are completely miscible, and complete phase-separation corresponds to Λ = 0. The criterion for phase-separation, based on the Thomas-Fermi approximation and minimization of the total energy given in Eq. (<ref>), is g_12 > √(g_11g_22). It should, however, be mentioned that this condition is valid only at zero temperature and in the absence of any topological defects. There are deviations from this criterion at T ≠ 0 due to the presence of thermal atoms <cit.>. In addition, the superflows associated with vortices in TBECs are expected to influence this criterion.

§.§ Effect of vortices

Employing the Madelung transformation to the order parameter Ψ_k(𝐫, t), we can express the superfluid velocity as 𝐯_k = ħ∇θ_k/m_k, where θ_k(𝐫, t) is the phase of the order parameter. The presence of a vortex in the condensate then results in an additional superfluid flow (superflow), around which the phase of the order parameter changes by 2π l, where l = ±1, ±2, ±3, … is the charge or vorticity of the vortex. Considering the vortex-induced superflow as purely azimuthal, the velocity of the flow at a distance R from the vortex core is <cit.>

𝐯_k(R) = lħ/m_kR𝐞_ϕ.

As a consequence of this superflow, the atoms in the condensate experience a radially outward centrifugal force of magnitude

𝐅_k(R) = l^2ħ^2/m_kR^3𝐞_R.

From this expression, it is evident that a lower atomic mass is associated with a stronger force, and the quadratic dependence on l implies that the force is independent of the sign of the vortex charge. Due to the centrifugal force, the onset of phase-separation can be enhanced when the vortex is associated with the species that lies at the periphery at phase-separation, and suppressed when the vortex is associated with the species occupying the core. Thus, as mentioned earlier, the presence of a vortex modifies the criterion for phase-separation. This is investigated quantitatively through numerical simulations below.

The stability of the vortex depends on the vortex charge. In a quasi-2D single-species condensate, a singly charged vortex is dynamically stable and precesses on an equidensity circular contour. A vortex of charge greater than unity, however, is unstable and spontaneously decays into multiple singly charged vortices during evolution, even in the absence of dissipation and external perturbations <cit.>. In the case of a TBEC in the immiscible domain, the vortex core in the condensate of one species is filled by atoms of the other species <cit.>, and the vortex is considered coreless. The superflow around the vortex in one condensate then influences the other species, which results in an additional interaction among the condensates and is responsible for a range of dynamical phenomena in TBECs <cit.>.
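As a concrete illustration, the overlap measure of Eq. (<ref>) reduces to array sums on a numerical grid; the following is a minimal sketch (our own illustrative code, with arbitrary function and variable names).

import numpy as np

def overlap_integral(n1, n2):
    # Lambda = (∫∫ n1 n2)^2 / [(∫∫ n1^2)(∫∫ n2^2)]; on a uniform grid the
    # spacings cancel in the ratio. Lambda = 1: miscible, Lambda = 0: separated.
    num = (n1 * n2).sum() ** 2
    den = (n1 ** 2).sum() * (n2 ** 2).sum()
    return num / den

With the densities n_k = |ψ_k|^2 sampled on the 2D grid, this returns the values of Λ quoted in the results below.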
However, the stability of a higher-charge vortex during its evolution depends on the miscibility or immiscibility of the condensates, together with the species in which the vortex is present. In the TF limit, the core size of a charge-l vortex is

ξ_k = l/√(2n_k(𝒢_kk + 𝒢_kj)),

where n_k is taken to be the local TF density of the condensate at the trap center in the absence of the vortex <cit.>. Given the larger core size and stronger centrifugal force at higher l, the enhancement or suppression of phase-separation with a vortex in one of the species is more pronounced for higher l. However, the dynamics of the miscible-immiscible transition would exhibit complex patterns, as a vortex with l > 1 decays into vortices of unit charge.

§ NUMERICAL METHODS

The first step of the computations is to obtain the equilibrium solution in the miscible domain as the initial state. For this we numerically solve Eqns. (<ref>) in imaginary time using the split-step Crank-Nicolson method adapted for binary condensates. Furthermore, we use the phase-imprinting technique to introduce a vortex of charge l by taking <cit.> the order parameter as

ψ_k(x, y) = |ψ_k(x, y)|exp[ il tan^-1(y-y_0/x-x_0)],

where (x_0, y_0) is the location of the vortex in the condensates. To study the dynamics of phase-separation, we take the equilibrium-state solution obtained from the imaginary-time propagation and evolve it in real time. For the present purpose, the inter-species scattering length a_12 is adiabatically quenched from a value corresponding to the miscible phase of the TBEC to a value satisfying the phase-separation condition. The tuning of a_12 is experimentally possible through a magnetic Feshbach resonance. We investigate the dynamics of the considered TBECs during this quench in the absence and presence of a vortex in the condensates, and then evolve them freely for 750 ms to examine the post-quench dynamics of the systems.

§ RESULTS AND DISCUSSIONS

As a representative example to study the dynamics of phase-separation in the presence of a vortex, we first consider the BEC mixture in the hyperfine states |F = 1, m_f = -1⟩≡ |1⟩ and |F = 2, m_f = +1⟩≡ |2⟩ of ^87Rb, which has been experimentally realized to probe various static and dynamic properties of a TBEC <cit.>. For this mixture m_r = 1, as m_1 = m_2. Following the experimental realization <cit.>, we consider a rotationally symmetric harmonic trap with ω_x = ω_y = 2π× 30.832 Hz. To satisfy the quasi-2D condition we take ω_z = 100.0 ω_x, so that μ_k ≪ħω_z, and equal total numbers of atoms in the condensates, N_1 = N_2 = 10^5. The intra-species scattering lengths, a_11 and a_22, are 100.4a_0 and 95.44a_0 <cit.>, respectively, where a_0 is the Bohr radius. For these values, the TBEC is in the immiscible domain when a_12⩾ 97.9a_0. To steer the TBEC from the miscible to the immiscible phase, we tune a_12 from 70a_0 to 100a_0. As mentioned earlier, this is possible through a magnetic Feshbach resonance <cit.>. In the immiscible phase, the energetically favorable solution at equilibrium is a shell-structured geometry, in which the atoms with the smaller scattering length, in state |2⟩, occupy the central region of the trap; this component is hereafter referred to as the core-condensate. The atoms with the larger scattering length, in state |1⟩, form a lower-density shell about the core-condensate and are thus referred to as the shell-condensate. The density profiles of the core- and shell-condensates for a_12 = 100a_0 are shown in Fig.
<ref>(c) and (d). As an example of a TBEC with unequal masses we consider the ^87Rb-^133Cs TBEC <cit.>, referred to as the Rb-Cs TBEC for compact notation; for this mixture m_r ≈ 0.65. The results for other TBECs, like ^87Rb-^39K, ^87Rb-^23Na, etc., are expected to be qualitatively similar. For convenience, we label ^87Rb and ^133Cs as the first and second species, respectively, and take N_1 = N_2 = 10^4. We consider this mixture in a rotationally symmetric trap with ω_x = ω_y = 2π× 8 Hz and ω_z = 40.0 ω_x, so that μ_k ≪ħω_z. The intra-species scattering lengths of the ^87Rb and ^133Cs atoms, a_11 and a_22, are 99a_0 <cit.> and 280a_0 <cit.>, respectively. Hence, the phase-separation condition is a_12⩾ 162.8a_0. We drive the Rb-Cs TBEC from the miscible to the immiscible phase by varying a_12 from 50a_0 to 175a_0, which is possible through a magnetic Feshbach resonance <cit.>. In the immiscible phase the ground-state density distribution of the system has a shell-structured geometry, as in the previous system. Here the heavier Cs atoms occupy the central region of the trap and form the core-condensate, while the lighter Rb atoms lie at the edge and form the shell-condensate. This is despite the much larger intra-species scattering length of the Cs atoms, as this configuration tends to minimize the total energy by lowering the contribution from the trapping potential.

§.§ Dynamics of phase-separation without vortex

Initially, the equilibrium-state solution of the TBEC in the Rb hyperfine states is obtained in the miscible phase with a_12 = 70a_0; the corresponding density profiles of the condensates are shown in Fig. <ref>(a) and (f). The condensates then have maximal overlap, and hence Λ = 0.99. Now, we increase a_12 at the rate of 0.41a_0/ms <cit.>. The evolution of the condensate density profiles during the quench is shown in Fig. <ref>, and there is an increase in the total energy of the TBEC as the interaction energy increases. However, after phase-separation, when a_12 > 97.9a_0, the overlap between the condensates becomes negligible, and therefore the contribution to the total energy from the inter-species interaction is negligible. On the other hand, the higher a_12 enhances the gradient of the density profiles at the interface, and as a consequence the kinetic energies of the condensates are increased. This in turn enhances the total energy. In this phase, the condensate of the |1⟩ species surrounds the condensate of the |2⟩ species in a shell geometry. As an example, the density profiles of the condensates at 71 ms, with a corresponding value of a_12 = 98.7a_0, are shown in Fig. <ref>(e) and (j). From the figures, it is evident that the TBEC is in the immiscible phase, with Λ = 0.04. We therefore stop the quench after a_12 attains the value of 100a_0 at 74 ms. We then observe the free evolution of the density profiles. At later times, the condensates continue to be in this geometry while exhibiting oscillations in the overlap with frequency ν≈ 185 Hz, which is larger than the radial trap frequency ν_x = 30.832 Hz.

In a similar way, we obtain the initial equilibrium solution for the Rb-Cs TBEC in the miscible phase with a_12 = 50a_0, for which Λ = 1.0. We then quench a_12 by increasing it at the rate of 1.58a_0/ms <cit.>. The adiabaticity of the quench is verified by obtaining the stationary ground-state solutions of the TBEC at intermediate values of a_12.
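The linear ramps of a_12 used in these quenches are simple to set up numerically; a minimal sketch (our own illustration — the time grid is arbitrary) is:

import numpy as np

def a12_schedule(t_ms, a_start, a_stop, rate):
    # Linear quench a12(t) = a_start + rate*t (in units of a0), capped at a_stop.
    return np.minimum(a_start + rate * np.asarray(t_ms), a_stop)

t = np.arange(0.0, 100.0, 0.1)                 # time grid in ms
a12_rb = a12_schedule(t, 70.0, 100.0, 0.41)    # Rb hyperfine: cap reached near 74 ms
a12_rbcs = a12_schedule(t, 50.0, 175.0, 1.58)  # Rb-Cs: cap reached near 79 ms

At each real-time propagation step, the interaction strengths 𝒢_kj are updated with the instantaneous value of a_12.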
As in the previous case, the total energy of the TBEC increases with the increase of a_12, and the time evolution of the density profiles of the condensates is qualitatively similar. After phase-separation, when a_12 > 162.8a_0, as mentioned earlier the Rb-condensate surrounds the Cs-condensate in a shell geometry. In this geometry, the enhanced inter-species interaction makes the size of the pancake-shaped Cs-condensate smaller than its size in the miscible phase. This reduces the trapping potential energy of the Cs-condensate, but the enhanced density increases its interaction energy. The quench is stopped at 79 ms, when a_12 = 175a_0, and the overlap between the condensates has Λ = 0.02. We then observe the free evolution of the density profiles. At later times, the condensates continue to be in this geometry with an oscillation in the overlap at a frequency of ν≈ 80 Hz. As in the previous case, this is larger than the radial trap frequency ν_x = 8 Hz.

§.§ Presence of singly charged vortex

§.§.§ Vortices in both the condensates

To examine the dynamics of phase-separation in the presence of a vortex in the Rb hyperfine TBEC, we consider the equilibrium state with the same set of parameters as before, but now we imprint singly charged vortices at the centers of both species. In experiments, this may be achieved by employing topological phase-imprinting techniques <cit.>. After obtaining the equilibrium solution, as in the previous case, we quench a_12 to induce the miscibility-immiscibility phase transition in the system. During the course of the evolution the vortices are displaced from the center and start to precess, and the density profiles are as shown in Fig. <ref>. During the quench there is an enhancement of the miscible-immiscible transition, which is evident from the trend in the value of Λ shown in Fig. <ref>. The figure shows a manifestly faster decrease in Λ when vortices are present in both species.

For the Rb-Cs TBEC as well, we follow the same protocol of imprinting vortices in both species and quenching a_12 at the same rate as in the vortex-free case. Of the two condensates, the vortex core size in the Cs-condensate is smaller than in Rb due to its shorter healing length; the shorter healing length of Cs is on account of its larger mass and scattering length. Unlike in the case of the Rb hyperfine TBEC, the vortices in the Rb-Cs TBEC remain at the center, and the core size of the vortex in the Rb-condensate increases. Following the values of Λ during the time evolution, as shown in Fig. <ref>, it is evident that there is an enhancement in the miscible-immiscible transition. To investigate further, we imprint vortices with opposite charges, and find that the trend in the miscible-immiscible transition is independent of the signs of the vortex charges. In other words, it is the presence of the superflow that influences the onset of phase-separation; the direction of the superflow does not affect the transition. As is evident from a comparison of the trends in Fig. <ref> and Fig. <ref>, the effect of the vortices is more pronounced in the case of Rb-Cs. This is on account of the difference in the masses and relative intra-species scattering lengths.

§.§.§ Vortex in shell-condensate

To study the miscible-immiscible transition when a vortex is present in only one of the species of the Rb hyperfine TBEC, we first examine the evolution of the TBEC with a vortex present only in the condensate of species |1⟩.
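Numerically, this initial state is prepared with the phase-imprinting step of Eq. (<ref>): the equilibrium order parameter of species |1⟩ is multiplied by a phase winding. A minimal NumPy sketch (our own illustration; the grid and the Gaussian placeholder are not the actual equilibrium state) is:

import numpy as np

def imprint_vortex(psi, X, Y, x0=0.0, y0=0.0, l=1):
    # Charge-l vortex at (x0, y0): |psi| unchanged, phase shifted by
    # l*atan2(y - y0, x - x0), the full-range form of tan^-1((y-y0)/(x-x0)).
    return psi * np.exp(1j * l * np.arctan2(Y - y0, X - x0))

x = np.linspace(-16.0, 16.0, 256)              # oscillator units
X, Y = np.meshgrid(x, x, indexing="ij")
psi1 = np.exp(-(X**2 + Y**2) / 2.0)            # placeholder profile of species |1>
psi1 = imprint_vortex(psi1, X, Y, l=1)         # vortex in species |1> only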
Like in the previous cases, we obtain the initial state of the system in the miscible phase, and then imprint a singly charged vortex at the center of the condensate of species |1⟩. In experiments, the generation of a vortex in either condensate of the Rb hyperfine TBEC was demonstrated by Matthews et al. <cit.>. The initial density profiles of the condensates are as shown in Fig. <ref>(a) and (f). As is to be expected, the vortex is coreless; that is, the condensate of |2⟩ occupies the core of the vortex. Now, to observe the miscible-immiscible transition we quench a_12, and the density profiles during the quench are shown in Fig. <ref>. As the value of a_12 is increased, the core size of the vortex increases, and hence a larger number of atoms of species |2⟩ occupy the vortex core. Since the vortex is imprinted on the shell-condensate, as shown in Fig. <ref>(c) and (d), there is an enhancement of the miscibility-immiscibility transition due to the centrifugal force associated with the vortex-induced superflow. The enhancement is evident from the trend in Λ shown in Fig. <ref>, and the effect is more pronounced compared with the presence of vortices in both species. Another important observation is that a vortex with higher charge leads to a larger enhancement of the miscible-immiscible transition. This is to be expected since, as discussed earlier, the centrifugal force is proportional to l^2, where l is the charge of the vortex. In the present case, there is a further important observation: the vortices of higher charge are stable through the quench, and at significantly later times as well. This is in contrast to the case of single-species condensates, where vortices of higher charge are dynamically unstable and decay into singly charged vortices on short time scales. The stability of a higher-charge vortex in a TBEC may be attributed to the immiscibility of the TBEC: if the vortex were to decay into multiple vortices of lower charge, the inter-species interaction energy would increase due to the filling of the vortex cores. In short, a TBEC supports a higher-charge vortex in the immiscible phase when the vortex is present in the shell-condensate.

Similarly, for the Rb-Cs TBEC, we again obtain the initial equilibrium solution in the miscible phase, with a singly charged vortex imprinted at the center of the Rb-condensate. It should be mentioned here that Rb, despite having the smaller scattering length, forms the shell-condensate due to its smaller mass. In this case, the quench of a_12 leads to qualitatively similar results as in the Rb hyperfine TBEC. That is, the core size of the vortex increases during the quench, and the vortex-induced superflow in the Rb-condensate enhances the phase-separation. This is evident from the trends in the values of Λ shown in Fig. <ref>.

§.§.§ Vortex in core-condensate

In this section we examine the dynamics of phase-separation when a vortex is present in the core-condensate. For this, as in the previous case, the initial state of the Rb hyperfine TBEC is in the miscible phase, and a singly charged vortex is imprinted at the center of the |2⟩ condensate. The initial density profiles of the condensates are as shown in Fig. <ref>(a) and (f). We then quench the system by increasing a_12 to drive it to the immiscible phase. The density profiles of the condensates at different times during the quench are shown in Fig. <ref>. During this evolution, the core size of the vortex increases, and an increasing number of atoms from species |1⟩ occupy the vortex core.
Thus, in the immiscible phase of the TBEC, the density profile of the condensate of species |1⟩ acquires a bull's eye structure, as shown in Fig. <ref>(e). From the trend in Λ, shown in Fig. <ref>, it is evident that there is a suppression of phase-separation of the TBEC, as the decrease in Λ is slower than in the previous cases. The radially outward centrifugal force arising from the vortex leads to a larger radial size of the |2⟩ condensate, and thus the atoms of |1⟩ require a larger inter-species repulsion energy to form the shell-condensate at phase-separation. In other words, the vortex-induced superflow in the core-condensate is responsible for the suppression of phase-separation. From similar computations, we also find the same trend in the Rb-Cs TBEC. In fact, the effect of suppression is more pronounced in this system, as is discernible by comparing the trends in the values of Λ plotted in Figs. <ref> and <ref>.

§.§ Higher charge vortex

We now examine the dynamics of phase-separation in the presence of a higher-charge vortex, in particular one imprinted on the core-condensate. In experiments, doubly and quadruply charged vortices are generated using the topological phase-imprinting technique <cit.>. The cases of vortices in both condensates, or only in the shell-condensate, are qualitatively similar to the singly charged cases. As in the previous cases, we obtain the initial equilibrium state of the Rb hyperfine TBEC in the miscible phase, but with a quadruply charged vortex imprinted at the center of species |2⟩, the core-condensate. Then, we quench a_12 from 70a_0 to 100a_0 to drive the miscibility-immiscibility phase transition in the TBEC. During the quench the core size of the vortex increases, and it gets filled with the atoms of species |1⟩, as shown in Fig. <ref>. Hence, in the immiscible phase, the density profile of the condensate of species |1⟩ has a bull's eye structure, with a higher-density core region and a lower-density ring outside the condensate of species |2⟩. Thus, most of the atoms of species |1⟩ occupy the core region of the vortex, a consequence of the larger core size associated with the higher-charged vortex. The overall configuration thus has the density profile of species |2⟩ resembling the geometry of a shell-condensate. In other words, the presence of a quadruply charged vortex forces the species with the lower intra-species interaction to occupy the edges, and the species with the higher intra-species interaction to occupy the core region by filling the vortex core. This can be referred to as vortex-induced partial position reversal at phase-separation. There is a complete position reversal when we consider a vortex with charge higher than l = 4.

The position-reversed geometry is important to study as it provides a framework to investigate the dynamics related to the Rayleigh-Taylor instability (RTI) in TBECs <cit.>. The instability sets in as the species initially occupying the core region is driven to the edge in the presence of the higher-charged vortex when a_12 is quenched. In the case of the quadruply charged vortex, RTI is not observed; the vortex-induced azimuthal superflow in species |2⟩ is responsible for the inhibition of RTI at the interface of the condensates. This follows from the general result of suppression of RTI by the pressure gradient in the radial direction <cit.>, in the present case arising from the Coriolis force acting on atoms of species |2⟩. In a related work, the suppression of RTI at the interface of rotating, immiscible, inviscid classical fluids has been reported <cit.>.
Although RTI does not occur, the system exhibits rich dynamics associated with the precessional motion of the vortex during the post-quench free evolution of the TBEC, as shown in Fig. <ref>. The condensates continue to be in the bull's eye and shell geometries, respectively, for a sufficiently long time, until ≈ 500 ms. However, at later times, an instability develops at the interface, arising from the interfacial shear due to the superflow and the decay of the higher-charged vortex. The density profiles of the TBEC at selected times during this later evolution are shown in the density plots of Fig. <ref>. We obtain qualitatively similar results for the Rb-Cs TBEC in the presence of a quadruply charged vortex in the Cs-condensate: as the TBEC is quenched to the immiscible phase, the Rb-condensate takes on the bull's eye structure, and the density profile of the Cs-condensate resembles that of a shell-condensate.

§ CONCLUSIONS

In the presence of a singly charged vortex in the shell-condensate of a TBEC, the centrifugal force associated with the vortex-induced azimuthal superflow enhances the miscibility-immiscibility phase-separation. The same force resulting from the vortex-induced superflow in the core-condensate suppresses the phase-separation. However, there is a net enhancement when singly charged vortices are present in both species. Compared with the Rb hyperfine TBEC, in the Rb-Cs TBEC the centrifugal force experienced by the Rb atoms is stronger. Hence, the enhancement or suppression of phase-separation due to the presence of a vortex is more prominent in the Rb-Cs TBEC. The quadratic dependence of the centrifugal force on the vortex charge ensures that the results obtained are independent of the sense of circulation of the superflows. The results for the Rb-Cs TBEC are generic to TBECs in which the species have a considerable mass difference and different intra-species interactions. Similarly, the results for the Rb hyperfine TBEC are generic to other TBECs of two hyperfine states, isotopes of the same element, or different atoms with nearly equal mass and scattering length. The cases considered are thus representative of other TBECs. In the presence of a quadruply charged vortex in the core-condensate, a phase-separated state of the TBEC is obtained in which the components partially swap their positions in the shell-structured geometry, in comparison with the case when the vortex is absent. During the post-quench free dynamics, at later times there is an instability at the interface and a decay of the quadruply charged vortex.

We thank S. Pal, K. Suthar and R. Bai for useful discussions. The results presented in this paper are based on computations using Vikram-100, the 100 TFLOP HPC cluster at the Physical Research Laboratory, Ahmedabad, India.
{ "authors": [ "Soumik Bandyopadhyay", "Arko Roy", "D. Angom" ], "categories": [ "cond-mat.quant-gas", "physics.atom-ph" ], "primary_category": "cond-mat.quant-gas", "published": "20170227093907", "title": "Dynamics of phase separation in two species Bose-Einstein condensates with vortices" }
Xuefeng Xiao (xiaoxuefengchina@gmail.com), Lianwen Jin (corresponding author, lianwen.jin@gmail.com), Yafeng Yang, Weixin Yang, Tianhai Chang
School of Electronic and Information Engineering, South China University of Technology, Guangzhou, China

Jun Sun
Fujitsu Research & Development Center Co. Ltd., Beijing, China

Like other problems in computer vision, offline handwritten Chinese character recognition (HCCR) has achieved impressive results using convolutional neural network (CNN)-based methods. However, larger and deeper networks are needed to deliver state-of-the-art results in this domain. Such networks intuitively appear to incur high computational cost and require the storage of a large number of parameters, which renders them infeasible for deployment in portable devices. To solve this problem, we propose a Global Supervised Low-rank Expansion (GSLRE) method and an Adaptive Drop-weight (ADW) technique to address the problems of speed and storage capacity. We design a nine-layer CNN for HCCR consisting of 3,755 classes, and devise an algorithm that can reduce the network's computational cost by nine times and compress the network to 1/18 of the original size of the baseline model, with only a 0.21% drop in accuracy. In tests, the proposed algorithm surpassed the best single-network performance reported thus far in the literature while requiring only 2.3 MB for storage. Furthermore, when integrated with our effective forward implementation, the recognition of an offline character image took only 9.7 ms on a CPU. Compared with the state-of-the-art CNN model for HCCR, our approach is approximately 30 times faster and 10 times smaller.

Keywords: Convolutional neural network; Handwritten Chinese character recognition; CNN acceleration; CNN compression

§ INTRODUCTION

Offline handwritten Chinese character recognition (HCCR) has been applied in a number of areas, such as the recognition of historical documents, mail sorting, transcription of handwritten notes, and so on. Offline HCCR has drawn the attention of many researchers for over half a century <cit.>. In the last few years, a number of traditional offline approaches have been proposed to improve HCCR performance but have yielded scant progress; the modified quadratic discriminant function (MQDF) <cit.>-based methods are exemplary. There is hence a recognition in the literature that even the best traditional methods are far from mimicking human performance in this domain <cit.>. Due to the availability of better computational hardware and massive amounts of training data in recent years, convolutional neural networks (CNNs), proposed by LeCun in the 1990s <cit.>, have been used to attain state-of-the-art performance in character recognition <cit.>. The multi-column deep neural network (MCDNN) <cit.>, composed of several CNNs, was the first CNN used for HCCR. Zhang et al. <cit.> recently reported a recognition accuracy of 96.95% by extracting the traditional normalization-cooperated direction-decomposed feature map as the input to a CNN. However, the computational cost and storage requirements still prevent the use of CNNs in portable devices, where power consumption and storage capacity are the major challenges.

Many researchers have tried to build fast and compact networks.
In this vein, low-rank expansion <cit.> aims to reduce computational cost by decomposing the convolutional layer. According to <cit.>, network pruning is the most effective way to compress a CNN; it eliminates the redundant connections in each layer, following which weight quantization and Huffman encoding are applied to further reduce storage. Although <cit.> achieved impressive performance in accelerating and compressing networks, only a few studies have combined these methods to address the dual challenge of speed and storage capacity. Furthermore, to the best of our knowledge, no study has investigated whether these methods are still feasible for large-scale handwritten Chinese character recognition involving more than 3,700 classes of characters.

In this paper, we propose a method to build a fast and compact CNN-based HCCR classifier. The method is shown in Fig. <ref>; it unifies the advantages of low-rank expansion and network pruning. The first part employs low-rank expansion to decompose the convolutional layers for acceleration purposes, which renders the CNN deeper but more compact. The motivation underlying the second part is to remove redundant connections in each layer to further reduce the storage allocated to parameters and, hence, the computational cost of the entire network. However, in a previous study <cit.> on network pruning, the authors used a fixed threshold to prune the connections of each layer. Instead, we propose an Adaptive Drop-weight (ADW) technique that dynamically increases the threshold and gradually prunes out the parameters of each layer. A further problem in previous work <cit.> concerns the choice of the pruning ratio for each layer, which may require numerous attempts to determine a suitable threshold for each layer under the trade-off between accuracy drop and compression ratio, especially for deep networks. To better address this problem, we propose Connection Redundancy Analysis (CRA), which can analyze the redundancy in the connections of each layer and help maximize the pruning ratio of each layer with a tolerable reduction in accuracy.

In experiments on offline HCCR, the proposed framework reduced the computational cost of the designed CNN ninefold and its parameter storage 18-fold, with an accuracy drop of only 0.21%; the result still surpassed that of the best single-network CNN reported in the literature thus far on the ICDAR 2013 offline HCCR competition database. The network required only 2.3 MB of storage and took only 9.7 ms to process an offline character image on a single-threaded CPU. Moreover, in order to further boost performance, we can increase the width and depth of the networks, or use a newer CNN model such as GoogLeNet <cit.> or deep ResNet <cit.>. This may help finally obtain new benchmarks for offline HCCR, but it is not our main concern in this paper.

The remainder of this paper is organized as follows: Section 2 reviews related work, and Section 3 elaborates on the architecture of the baseline network of the CNN used in our system. Section 4 introduces the Adaptive Drop-weight technique, whereas Section 5 details the Connection Redundancy Analysis method. Section 6 describes Global Supervised Low-rank Expansion in detail, and Section 7 presents the experimental results, which include run time, parameter storage, and accuracy.
The conclusions of this study and our future work are summarized in Section 8.

§ RELATED WORK

§.§ Offline HCCR

In contrast to the success of CNNs, MQDF-based methods for offline HCCR have already reached their limit. The multi-column deep neural network (MCDNN) <cit.>, consisting of several CNNs, was the first CNN used for offline HCCR. In the offline HCCR competition subsequently organized at ICDAR 2013 <cit.>, the method developed by the team from Fujitsu's R&D Center won with an accuracy of 94.77%. In 2014, they improved the accuracy to 96.06% by voting on four alternately trained relaxation convolutional neural networks (ATR-CNN) <cit.>. Zhong et al. <cit.> subsequently proposed combining traditional Gabor features with offline Chinese character images as network inputs, and used a streamlined version of GoogLeNet called HCCR-Gabor-GoogLeNet. They reported an accuracy of 96.35%, and then 96.74% by ensembling ten models, becoming the first to surpass human performance. The framework proposed by Zhou et al. <cit.> is based on HCCR-GoogLeNet <cit.>; they used a Kronecker fully connected (KFC) layer to replace the layers after the four inception groups, followed by two fully connected layers, finally obtaining an accuracy of 96.63%. Zhang et al. <cit.> recently combined traditional normalization-cooperated direction-decomposed feature maps and CNNs to obtain an accuracy of 96.95%, and 97.12% by voting on three models.

§.§ Accelerating and Compressing

Most CNN structures, such as VGGNet <cit.>, AlexNet <cit.>, CaffeNet <cit.>, and GoogLeNet <cit.>, have similar properties: the convolutional layers incur most of the computational cost, and the fully connected layers contain most of the network parameters. Despite the different potential avenues, existing approaches mainly concentrate on accelerating the convolutional layers and compressing the fully connected ones.

To reduce the computational cost of the convolutional layers, Cong and Xiao <cit.> used the Strassen algorithm for fast matrix multiplication to reduce the arithmetic complexity of the convolutional layer without loss in accuracy. Mathieu et al. <cit.> adopted the fast Fourier transform (FFT) to convert convolutional calculations into pointwise products in the frequency domain for fast computation. Lavin et al. <cit.> proposed using Winograd's minimal filtering algorithms to reduce the multiplications in the convolutional layers. Wu et al. <cit.> recently proposed quantized convolutional neural networks that quantize the weights and transform computations into inner products in the convolutional layer. Nevertheless, computations of the convolutional layer are commonly transformed into matrix multiplication by using the im2col algorithm <cit.> and the BLAS (Basic Linear Algebra Subprograms) library. These tools enable fast CPU-based implementations of CNNs, but cannot be combined with the previously proposed methods in <cit.>. In this paper, we use a low-rank expansion-based method that can be combined with matrix multiplication using the BLAS library. Jaderberg et al. <cit.> exploited the cross-channel or filter redundancy to formulate a low-rank basis for filters, and proposed filter and data reconstruction techniques for optimization. Zhang et al. <cit.> improved their work by considering the non-linear case and asymmetric reconstruction for multiple layers to mitigate the reconstruction error.

For fully connected layers, HashedNets proposed by Chen et al.
<cit.> uses a hash function to group weights into hash buckets, where connections in the same hash bucket share a single parameter value. Vanhoucke et al. <cit.> used eight-bit fixed-point integers to replace 32-bit floating-point values. Courbariaux et al. <cit.> proposed binarized neural networks that constrain weights and activations to +1 or -1, and replace most floating-point multiplications by one-bit exclusive-NOR operations. It is clear that this can reduce computational cost and parameter storage but, on the other hand, degrades network performance. Lin et al. <cit.> used SVD-based low-rank expansion to compress the fully connected layers, and then used global error reconstruction to fine-tune the entire network. However, these methods either have low compression ratios or seriously deteriorate network performance. Methods based on network pruning <cit.> can significantly reduce parameter storage by learning the important connections without compromising network performance. Deep compression was proposed by Han et al. <cit.> to further reduce storage by combining network pruning, weight quantization, and Huffman coding. Guo et al. <cit.> proposed dynamic network surgery, which can dynamically prune and splice connections, building on Han's work <cit.>.

§ ARCHITECTURE OF CONVOLUTIONAL NEURAL NETWORK

As shown in Fig. <ref>, we designed a nine-layer network (counting only the convolutional and fully connected layers) for offline HCCR, consisting of seven convolutional layers and two fully connected layers. Each of the first three convolutional layers is followed by a max-pooling layer. Following this, every two convolutional layers are followed by a max-pooling layer. The last max-pooling layer is followed by a fully connected layer, which contains 1,024 neurons. The last fully connected layer contains 3,755 neurons, and is used to perform the final classification. The overall architecture can be represented as Input-96C3-MP3-128C3-MP3-160C3-MP3-256C3-256C3-MP3-384C3-384C3-MP3-1024FC-Output.

We found that, within a certain range, increasing the size of the input character image improved classification performance but incurred a higher computational cost. Hence, as a trade-off between accuracy and computational cost, we resized the input characters to 96×96. In our baseline network, all convolutional filters were 3×3, and one pixel of padding was added to retain the feature-map size. Finally, the max-pooling operation was carried out over a 3×3 window with a stride of 2.

In our proposed network, the parametric rectified linear unit (PReLU) <cit.>, slightly different from the rectified linear unit (ReLU) <cit.>, was used to help the network converge easily and to reduce the risk of overfitting, boosting performance. Ioffe et al. <cit.> proposed batch normalization (BN), which normalizes nonlinear inputs and stabilizes their distribution by reducing the internal covariate shift. It not only provides the liberty of using higher learning rates to expedite network convergence, but also improves network performance at negligible computational and storage cost. Moreover, for some deep networks, BN can effectively solve the problem of vanishing gradients. Therefore, all convolutional layers and the first fully connected layer were equipped with a BN layer, and a PReLU was added after each BN layer.
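For concreteness, the layer configuration described above can be transcribed directly; the following is a minimal sketch assuming PyTorch (our own illustrative transcription, not the authors' Caffe definition; the input is taken to be a single-channel 96×96 image, and LazyLinear is used because the flattened size depends on the pooling rounding mode).

import torch.nn as nn

def conv_block(c_in, c_out):
    # 3x3 convolution with 1-pixel padding, followed by BN and PReLU
    return [nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
            nn.BatchNorm2d(c_out), nn.PReLU(c_out)]

def pool():
    # 3x3 max pooling with stride 2
    return nn.MaxPool2d(kernel_size=3, stride=2)

hccr_cnn9 = nn.Sequential(
    *conv_block(1, 96), pool(),
    *conv_block(96, 128), pool(),
    *conv_block(128, 160), pool(),
    *conv_block(160, 256), *conv_block(256, 256), pool(),
    *conv_block(256, 384), *conv_block(384, 384), pool(),
    nn.Flatten(),
    nn.LazyLinear(1024), nn.BatchNorm1d(1024), nn.PReLU(1024),
    nn.Dropout(0.5),       # dropout between the FC layers (see below)
    nn.Linear(1024, 3755))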
Since the fully connected layers are quite redundant, we added a dropout <cit.> layer between the two fully connected layers for regularization, with the ratio set to 0.5. The main difference between our proposed model and other available models for offline HCCR is that the former incorporates BN and PReLU in the network; hence, we refer to this baseline network as HCCR-CNN9Layer. Although the CNN model used is quite simple, it yielded state-of-the-art performance for HCCR.

§ ADAPTIVE DROP-WEIGHT

Our pruning scheme is shown in Fig. <ref>; it consists of two parts. The first is a new technique called Adaptive Drop-weight (ADW), which can gradually prune out the weighted connections of each layer by dynamically increasing the pruning threshold. When the pruning ratio reaches the value determined by the results of the CRA for the layer, we record the threshold for further pruning.

§.§ Pruning Threshold

In previous work on network pruning <cit.>, a fixed threshold was determined as follows:

P_th = α/N∑_i = 1^N | w_i| + β√(1/N∑_i = 1^N (w_i - 1/N∑_i = 1^N w_i)^2) + λ.

The pruning threshold P_th depends on the layer's weights w_i through their average absolute value and standard deviation. In order to render P_th suitable for each layer, the parameters α, β, λ are selected by empirical rules <cit.>. However, if the fixed threshold is too high, a large number of connections are pruned at the outset, which results in a drastic drop in performance. Conversely, if the fixed threshold is too low, the compression ratio may be far from the desired value. To solve this problem, we propose using a dynamically increasing threshold that gradually prunes out the weighted connections of each layer. This methodology allows the network to adapt gradually as connections are pruned.

§.§ Pruning Training Scheme

In order to gradually prune redundant connections from each layer, we prune connections after every I iterations (in experiments, we set I = 10). If we intend to prune the ratio r_i of a layer that contains N_i weights within T_1 iterations, the pruned number p_i is increased by r_iN_iI/T_1 in each pruning iteration. The threshold is thus also gradually increased. During the iterations without the pruning process, the weights are updated with the gradient, and the pruned weights never come back. Once the desired pruning ratio is reached, the increasing threshold is fixed and recorded for further pruning of the layer until pruning ends after T_2 iterations. This pruning process is described in detail in Algorithm <ref>.

In order to further compress the network and improve performance, we employ the strategy proposed in <cit.> to quantize weights. A k-means clustering algorithm is used to cluster the weights of each layer of the pruned network. The quantized pruned network is then fine-tuned, which may result in better network performance.

§ CONNECTION REDUNDANCY ANALYSIS

A deep neural network consists of many layers, and each plays a significant role in the network. There are inevitably various redundancies in each layer, especially given the large gap between the convolutional layers and the fully connected layers. It makes sense that the pruning ratio be determined by the redundancy of each layer's connections, and that the same pruning ratio not be applied to all layers. Nevertheless, previous work <cit.> used a fixed threshold P_th, based on the relevant layer's weights, to prune connections.
Hence, numerous experiments are needed to find the pertinent values of α, β, and λ for the pruning threshold P_th of each layer; as is self-evident, this is very time-consuming, especially for deep networks.

In order to better address the above issue, we propose a Connection Redundancy Analysis (CRA) method that analyzes each layer's redundancy and can help us set a suitable value for its pruning ratio r_i. Inspired by <cit.>, a sensitivity analysis was carried out to analyze the importance of a layer's parameters for network performance. Iandola et al. <cit.> implemented the strategy of directly pruning half the parameters with the smallest absolute values, carrying out the experiment separately for each layer. After testing the pruned networks, network performance was examined. However, this strategy can only highlight the important parameters of a given layer with regard to performance; it cannot help determine how many connections are redundant.

To carry out the Connection Redundancy Analysis, we conducted a separate experiment for each layer. While carrying out the experiment on a layer, we fixed the weights of the other layers and pruned only that layer. Using our proposed Adaptive Drop-weight as the pruning strategy, we gradually pruned each layer's redundant connections, which was expected to gradually degrade network performance. When the drop in accuracy went beyond a given tolerance level, we knew how many connections had been pruned, which guided us in further pruning the network.

Since the proposed CRA is implemented by pruning the layers separately, it is difficult to analyze the scenario in which all layers are pruned together. However, it may guide us in setting a proper pruning ratio for each layer. The ultimate goal of the CRA is to maximize the compression ratio under a tolerable reduction of the accuracy rate, which warrants further research.

§ GLOBAL SUPERVISED LOW-RANK EXPANSION

§.§ Decomposition Scheme

For the original convolutional layer illustrated in Fig. <ref>, the input feature map is a three-dimensional (3D) tensor X ∈ℝ^C × H × W, where C is the number of channels of the input feature map, and H and W are its height and width, respectively. The output feature map is also a 3D tensor Y ∈ℝ^N × H' × W', where N is the number of channels of the output feature map, and H' and W' are its height and width, respectively. The kernel is a 4D tensor W ∈ℝ^N × C × K × K, where the size of the kernel is K × K. The output feature map can be calculated by

Y(n,h',w') = ∑_c = 1^C ∑_i = 1^K ∑_j = 1^K W(n,c,i,j)X(c,h' + i - 1,w' + j - 1).

The computational cost of the direct convolutional layer is O(CNK^2H'W').

By carrying out the low-rank expansion shown in Fig. <ref>, the convolution of the input feature map with the square filter is transformed into convolutions with two low-rank filters. First, the input is convolved with the vertical kernel T ∈ℝ^D × C × K × 1, where D is the number of output feature maps of the decomposed layer. The first output is

M(d,h',w) = ∑_c = 1^C ∑_i = 1^K T(d,c,i,1)X(c,h' + i - 1,w),

where the computational cost of the first convolution is O(CDKH'W). Then the output M ∈ℝ^D × H' × W is convolved with the horizontal kernel V ∈ℝ^N × D × 1 × K, and the final output is calculated by

Y(n,h',w') = ∑_d = 1^D ∑_j = 1^K V(n,d,1,j)M(d,h',w' + j - 1).

The computational cost of the second convolution is O(NDKH'W'). If the two low-rank expansions are considered together, the computational cost is O(DKH'(NW' + CW)).
So if we want to accelerate the convolutional layer by a factor of x, D can be determined as

D = CNKW'/[(CW + NW')x].

§.§ Training scheme

In past work <cit.>, the output of each layer was used as a supervisor to learn the low-rank filters for that layer. This method was mainly devised to minimize the reconstruction error between the local output and the low-rank approximation output, as shown in Fig. <ref>. We refer to this strategy as Local Supervised Low-rank Expansion (LSLRE). While using the output of the local layer to guide the low-rank expansion is a reasonable and straightforward strategy, it does not have a direct relationship with global classification performance.

Thus, we propose Global Supervised Low-rank Expansion (GSLRE), which uses the label as the supervisor. The training scheme is shown in Fig. <ref>. The training process is conducted in a layer-by-layer manner. For a specific layer, the original convolutional layer, say the second layer Conv2, is decomposed into two smaller layers, Conv2_de1 and Conv2_de2 (see Fig. <ref>). The parameters of Conv2_de1 and Conv2_de2 are determined through back-propagation using the SGD algorithm, based on the loss function of the entire network. It is worth mentioning that during the training of the specific convolutional layer, the parameters of the other convolutional layers are kept fixed. This is feasible because our network is equipped with BN layers, which enable gradients to pass smoothly into lower-level layers even as the network deepens.

Since the first convolutional layer is hard to approximate (as mentioned in <cit.>) and plays an important role in extracting features from the original image, we begin our low-rank training scheme at the second convolutional layer. Once the second layer has been decomposed and trained adequately, we begin decomposing the third layer. When training the third convolutional layer, the parameters of both the second and the third layers are learned and updated according to the SGD-BP algorithm. In this way, all convolutional layers are decomposed and trained. Finally, since the first convolutional layer and the parameters of the fully connected layers are kept fixed during the above low-rank expansion training process, we need to fine-tune the whole network to further improve overall performance.

§ EXPERIMENT

We evaluated our method on an offline HCCR task. The training data were taken from the CASIA-HWDB1.0 and CASIA-HWDB1.1 <cit.> databases, written by 300 and 420 people, respectively. The datasets contained 2,678,424 samples in total. The test data were the datasets used in the ICDAR-2013 offline competition <cit.>, which contained 224,419 samples written by 60 people. The training and testing databases were written by different people, and contained 3,755 classes.

§.§ Baseline Model

We trained our baseline network (Fig. <ref>) on the Caffe <cit.> deep learning platform. We used mini-batch gradient descent with momentum for training. The mini-batch size was set to 128 and the momentum to 0.9. To make the mini-batches independent of the data ordering, we shuffled the data before training. Since our proposed network is equipped with BN layers, we could use higher learning rates to accelerate convergence. We hence initialized the learning rate at 0.1, and then reduced it by a factor of 0.1 every 70,000 iterations. Training was completed after 300,000 iterations, and we obtained an accuracy of 97.30%.

Fig. <ref> shows our proposed network.
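Before turning to the evaluation, the bookkeeping of the decomposition in Section 6 can be made concrete; the following minimal sketch (our own illustration — the example layer shapes are hypothetical) computes D from Eq. (<ref>) and compares the two costs.

def decomposed_width(C, N, K, W_out, W_in, x):
    # D = C*N*K*W' / ((C*W + N*W') * x) for an x-fold speedup of one layer
    return max(1, round(C * N * K * W_out / ((C * W_in + N * W_out) * x)))

def costs(C, N, K, H_out, W_out, W_in, D):
    direct = C * N * K * K * H_out * W_out                 # O(CNK^2 H'W')
    decomposed = D * K * H_out * (N * W_out + C * W_in)    # O(DKH'(NW' + CW))
    return direct, decomposed

# Hypothetical 3x3 layer with C = 128, N = 160 channels on a 23x23 map, x = 4:
D = decomposed_width(128, 160, 3, 23, 23, 4.0)
direct, decomposed = costs(128, 160, 3, 23, 23, D)
print(D, direct / decomposed)   # D = 53, roughly a 4x reduction in multiply-adds

The vertical K×1 layer with D output maps and the horizontal 1×K layer with N output maps then replace the original K×K layer.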
We realized that the first convolutional layer plays an important role in extracting features from the original image <cit.>, and incurs a low computational cost and small parameter storage (less than 1% of the entire network). Thus, it is intuitive that the parameters of this layer should not be modified.

§.§ The evaluation of the Global Supervised Low-rank Expansion

Using our proposed architecture, the baseline network was accelerated fourfold. Using Eq. <ref>, we calculated the number of feature maps of each convolutional layer after decomposing it. Table <ref> shows that we were able to accelerate the network fourfold with a negligible drop in accuracy by integrating our devised Global Supervised Low-rank Expansion training scheme. It also shows that our decomposition training scheme obtains better results than directly training the decomposed network architecture.

§.§ The evaluation of the Adaptive Drop-weight

We first applied our method to the MNIST database with the LeNet-300-100 and LeNet-5 networks <cit.>. The MNIST dataset was designed for the recognition of handwritten digits. LeNet-5 is a convolutional network that contains two convolutional layers and two fully connected layers. LeNet-300-100 is a fully connected network with two hidden layers. The baseline models were trained on the Caffe <cit.> deep learning platform without any data augmentation. We directly trained LeNet-5 using the training parameter settings provided by Caffe. In this way, an accuracy of 99.11% was obtained after training for 10,000 iterations. The training parameter settings of LeNet-300-100 were nearly identical to those for LeNet-5, and yielded an accuracy of 98.33%.

As shown in Table <ref>, with our proposed pruning strategy, we compressed LeNet-5 by a factor of 133 and LeNet-300-100 by a factor of 60 using the proposed ADW method. This surpassed the results in <cit.>, which was the first application of network pruning for compression. Compared with recent work <cit.>, we achieved a higher pruning ratio and better accuracy, especially for LeNet-300-100. We also combined weight quantization for further compression. Finally, we obtained a state-of-the-art compression ratio of 250 times for LeNet-5 and 113 times for LeNet-300-100 without any loss in accuracy.

Following this, we applied our proposed compression framework to the offline HCCR network that had been accelerated fourfold, which contained 13 convolutional layers and two fully connected layers. Table <ref> shows that the entire network was compressed to approximately a quarter of its size using only the proposed ADW method. When we integrated weight quantization, storage was further reduced approximately 14-fold with a drop of only 0.18% in accuracy.

While pruning LeNet-5, we noticed that separately pruning the convolutional and the fully connected layers was a better choice for dealing with the vanishing gradient problem than pruning these layers together, which was the strategy used in past work <cit.>. However, since our accelerated network was equipped with BN layers, the gradient could pass smoothly in both forward and backward propagation, as also demonstrated in <cit.>. In our experiment, we were therefore able to prune the convolutional layers and fully connected layers together. This not only reduced the training time of the pruning process, but also yielded a higher accuracy and compression ratio at the same time.
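The ADW scheme evaluated above is easy to prototype with a binary weight mask; the following is a minimal NumPy sketch (our own illustration — Algorithm <ref> itself is not reproduced, and the surrounding SGD loop that updates w between pruning iterations is assumed).

import numpy as np

def adw_prune_step(w, mask, step, I, T1, r):
    # One ADW update for a layer with N = w.size weights and target ratio r:
    # every I iterations the cumulative pruned count grows by r*N*I/T1, and the
    # smallest-magnitude weights are dropped; pruned weights stay at zero, so
    # they are re-selected first and never come back.
    N = w.size
    if step % I == 0 and step <= T1:
        target = int(r * N * step / T1)                    # cumulative count
        idx = np.argsort(np.abs(w * mask), axis=None)[:target]
        mask.flat[idx] = False                             # raises the threshold
    return w * mask, mask

Once the ratio r is reached at iteration T_1, the magnitude of the largest pruned weight serves as the recorded threshold for the rest of the pruning process, as described in Section 4.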
§.§ The evaluation of the Connection Redundancy Analysis

We implemented our CRA on each layer of LeNet-5 and LeNet-300-100. With our proposed Adaptive Drop-weight pruning strategy, we gradually pruned the connections of each layer. In Figs. <ref> and <ref>, we see that at the start of the experiment the network is not very susceptible to the pruning ratio, but later on, accuracy drops drastically at higher values of the pruning ratio. However, since each layer has different redundant connections, the pruning ratio of each layer is different. In our experiment, CRA was implemented with a tolerable accuracy drop of 0.1% for each layer. The resulting pruning ratios were then used to guide the pruning of the network.

In the same way, CRA was applied to the offline HCCR network whose convolutional layers had been accelerated fourfold. We then analyzed each layer's redundancy with a tolerable accuracy drop of 0.1% to guide us in pruning. Since the convolutional layers had been accelerated four times, much of their redundancy had already been eliminated. As shown in Fig. <ref>, we therefore set a much lower pruning ratio than the CRA results for the convolutional layers to maintain accuracy.

§.§ The results of accuracy

Table <ref> illustrates in detail the results of different methods that achieved performance beyond the human level on the ICDAR-2013 offline competition database, as well as their network storage and FLOPs (multiply-adds).

Using our nine-layer network, shown in Fig. <ref>, we achieved an accuracy of 97.30%. We found that the BN and PReLU layers are quite effective for offline HCCR. Using our proposed GSLRE and ADW, we further reduced the computational cost by nine times and the parameter storage by 18 times, with only a 0.21% drop in accuracy. This result still surpassed the best single-network performance (line 7 in Table <ref>), while our model simultaneously required considerably less parameter storage and incurred a lower computational cost.

It is clear that the larger and deeper the network, the better it performs. Hence, based on our proposed network, we added more convolutional layers: Input-96C3-MP3-128C3-128C3-MP3-192C3-192C3-MP3-256C3-256C3-MP3-384C3-384C3-384C3-MP3-1024FC-Output. We refer to this larger network as HCCR-CNN12Layer; it yielded an accuracy of 97.59%, as shown on line 12 in Table <ref>. Then, combining our GSLRE and ADW, we were still able to reduce the computational cost 16-fold and the parameter storage 10-fold with only a 0.19% drop in accuracy.

§.§ The results of the forward implementation

The run time of the network is crucial for applying offline HCCR to real-time tasks. Several techniques can be deployed to accelerate CNNs for real-time applications. Loop unrolling (LU) is a well-known and efficient strategy to improve speed, especially for large loops. Using the im2col algorithm, convolutional computations are converted into matrix-matrix multiplications using the BLAS library[In the following experiments, we used Intel MKL as the BLAS library, available at https://software.intel.com/en-us/intel-mkl.], which has been shown to be an efficient approach for CPU-based implementations of CNNs. Using the BLAS library, the fully connected layers were directly implemented as matrix-vector multiplications. Moreover, when we eliminated connections in each layer using our proposed ADW method, we used sparse matrix-matrix multiplication and sparse matrix-vector multiplication, respectively, for the convolutional layers and the fully connected layers.
However, we found that if a layer was not sparse enough, performance degraded. In our proposed network, we therefore applied sparse matrix-vector multiplication only to compute the fully connected layers.

We compared the forward run time of different strategies on a single-threaded CPU. The experiments were carried out on a single desktop PC equipped with a 3.60 GHz Intel Core i7-6700 and 16 GB of memory. From Table <ref>, we see that when we did not use any technique to accelerate the CNN, the run time was long (1369 ms per character). When we simply used loop unrolling for all layers, the run time was reduced. When we used our acceleration method and reduced the computational cost fourfold, the run time was also reduced approximately fourfold (from 492 ms to 118 ms). Then, all convolutional layers and the fully connected layers were computed by adopting matrix-matrix and matrix-vector multiplication, respectively, with the BLAS library. Loop unrolling was also applied to all other layers. The run time decreased significantly. Finally, using our compression method to prune redundant connections in the convolutional layers and the fully connected layers, we employed sparse matrix-vector multiplication to implement the computations in the fully connected layers. In this way, we achieved a fast and compact CNN model for large-scale HCCR with a speed of 9.7 ms/char but only 2.3 MB of storage. We compared the forward run time of the implementation by Zhang et al. <cit.> (the last row) with that of our model. The proposed forward implementation method was clearly more effective than Zhang's <cit.>: it was approximately 30 times faster, and the model 10 times smaller. The source code of our fast and compact CNN model's forward implementation will soon be made publicly available.

§ CONCLUSION

In this paper, we proposed an effective approach for accelerating and compressing a CNN for large-scale HCCR involving 3,755 classes of Chinese characters. We proposed a Global Supervised Low-rank Expansion to accelerate calculations in the convolutional layers, and an Adaptive Drop-weight method to remove redundant connections by dynamically increasing the pruning threshold of each layer. We also proposed Connection Redundancy Analysis to analyze the redundant connections in each layer in order to guide the pruning of the CNN without compromising the performance of the network.

In future work, we plan to apply the proposed framework to other fields, such as image classification and object detection. These ideas can also be applied to deep recurrent neural networks <cit.>, especially long short-term memory, as they are viable deep-learning models for such time sequence-based problems as online handwritten character/text recognition <cit.>.

kimura1987modified F. Kimura, K. Takashina, S. Tsuruoka, Y. Miyake, Modified quadratic discriminant functions and the application to chinese character recognition, IEEE Trans. Pattern Anal. Mach. Intell. 9 (1) (1987) 149–153.
jin2000deformation L. Jin, J. Huang, J. Yin, Q. He, Deformation transformation for handwritten chinese character shape correction, in: Proceedings of Advances in Multimodal Interfaces (ICMI), 2000, pp. 450–457.
dai2007chinese R. Dai, C. Liu, B. Xiao, Chinese character recognition: history, status and prospects, Frontiers of Computer Science in China 1 (2) (2007) 126–136.
long2008building T. Long, L.
Jin, Building compact MQDF classifier for large character set recognition by subspace distribution sharing, Pattern Recognition 41 (9) (2008) 2916–2925.liu2013online C. Liu, F. Yin, D. Wang, Q. Wang, Online and offline handwritten chinese character recognition: Benchmarking on new databases, Pattern Recognition 46 (1) (2013) 155–162.zhang2017online X. Zhang, Y. Bengio, C. Liu, Online and offline handwritten chinese character recognition: A comprehensive study and new benchmark, Pattern Recognition 61 (2017) 348–360.yin2013icdar F. Yin, Q. Wang, X. Zhang, C. Liu, ICDAR 2013 chinese handwriting recognition competition, in: Proceedings of International Conference on Document Analysis and Recognition (ICDAR), 2013, pp. 1464–1470.le1990handwritten Y. LeCun, B. E. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. E. Hubbard, L. D. Jackel, Handwritten digit recognition with a back-propagation network, in: Proceedings of Advances in Neural Information Processing Systems (NIPS), Morgan-Kaufmann, 1990, pp. 396–404.lecun1998gradient Y. LeCun, L. Bottou, Y. Bengio, P. Haffner, Gradient-based learning applied to document recognition, Proceedings of the IEEE 86 (11) (1998) 2278–2324.cirecsan2013multi D. C. Ciresan, U. Meier, Multi-column deep neural networks for offline handwritten chinese character classification, in: Proceedings of International Joint Conference on Neural Networks (IJCNN), 2015, pp. 1–6.denton2014exploiting E. L. Denton, W. Zaremba, J. Bruna, Y. LeCun, R. Fergus, Exploiting linear structure within convolutional networks for efficient evaluation, in: Proceedings of Advances in Neural Information Processing Systems (NIPS), 2014, pp. 1269–1277.jaderberg2014speeding M. Jaderberg, A. Vedaldi, A. Zisserman, Speeding up convolutional neural networks with low rank expansions, in: Proceedings of British Machine Vision Conference (BMVC), 2014.zhang2015efficient X. Zhang, J. Zou, X. Ming, K. He, J. Sun, Efficient and accurate approximations of nonlinear convolutional networks, in: Proceedings of Computer Vision and Pattern Recognition (CVPR), 2015, pp. 1984–1992.zhang2015accelerating X. Zhang, J. Zou, K. He, J. Sun, Accelerating very deep convolutional networks for classification and detection, IEEE Trans. Pattern Anal. Mach. Intell. 38 (10) (2016) 1943–1955.han2015learning S. Han, J. Pool, J. Tran, W. J. Dally, Learning both weights and connections for efficient neural network, in: Proceedings of Advances in Neural Information Processing Systems (NIPS), 2015, pp. 1135–1143.guo2016dynamic Y. Guo, A. Yao, Y. Chen, Dynamic network surgery for efficient dnns, in: Proceedings of Advances in Neural Information Processing Systems (NIPS), 2016, pp. 1379–1387.szegedy2015going C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. E. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, A. Rabinovich, Going deeper with convolutions, in: Proceedings of Computer Vision and Pattern Recognition (CVPR), 2015, pp. 1–9.He_2016_CVPR K. He, X. Zhang, S. Ren, J. Sun, Deep residual learning for image recognition, in: Proceedings of Computer Vision and Pattern Recognition (CVPR), 2016, pp. 770–778.wu2014handwritten C. Wu, W. Fan, Y. He, J. Sun, S. Naoi, Handwritten character recognition by alternately trained relaxation convolutional neural network, in: Proceedings of International Conference on Frontiers in Handwriting Recognition (ICFHR), 2014, pp. 291–296.zhong2015high Z. Zhong, L. Jin, Z. 
Xie, High performance offline handwritten chinese character recognition using googlenet and directional feature maps, in: Proceedings of International Conference on Document Analysis and Recognition (ICDAR), 2015, pp. 846–850.zhou2015exploiting S. Zhou, J. Wu, Y. Wu, X. Zhou, Exploiting local structures with the kronecker layer in convolutional networks, CoRR abs/1512.09194.Simonyan2014VeryDC K. Simonyan, A. Zisserman, Very deep convolutional networks for large-scale image recognition, in: Proceedings of International Conference on Learning Representations (ICLR), 2014.krizhevsky2012imagenet A. Krizhevsky, I. Sutskever, G. E. Hinton, Imagenet classification with deep convolutional neural networks, in: Proceedings of Advances in Neural Information Processing Systems (NIPS), 2012, pp. 1106–1114.jia2014caffe Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. B. Girshick, S. Guadarrama, T. Darrell, Caffe: Convolutional architecture for fast feature embedding, in: Proceedings of International Conference on Multimedia (ICM), 2014, pp. 675–678.cong2014minimizing J. Cong, B. Xiao, Minimizing computation in convolutional neural networks, in: Proceedings of International Conference on Artificial Neural Networks (ICANN), 2014, pp. 281–290.mathieu2013fast M. Mathieu, M. Henaff, Y. LeCun, Fast training of convolutional networks through ffts, CoRR abs/1312.5851.lavin2015fast A. Lavin, S. Gray, Fast algorithms for convolutional neural networks, in: Proceedings of Computer Vision and Pattern Recognition (CVPR), 2016, pp. 4013–4021.wu2016quantized J. Wu, C. Leng, Y. Wang, Q. Hu, J. Cheng, Quantized convolutional neural networks for mobile devices, in: Proceedings of Computer Vision and Pattern Recognition (CVPR), 2016, pp. 4820–4828.Yanai2O16 K. Yanai, R. Tanno, K. Okamoto, Efficient mobile implementation of A cnn-based object recognition system, in: Proceedings of International Conference on Multimedia(ACM MM), 2016, pp. 362–366.chen2015compressing W. Chen, J. T. Wilson, S. Tyree, K. Q. Weinberger, Y. Chen, Compressing neural networks with the hashing trick, in: Proceedings of International Conference on Machine Learning (ICML), 2015, pp. 2285–2294.vanhoucke2011improving V. Vanhoucke, A. Senior, M. Z. Mao, Improving the speed of neural networks on cpus, in: NIPS Deep Learning and Unsupervised Feature Learning Workshop, Citeseer, 2011.courbariaux2016binarynet M. Courbariaux, Y. Bengio, Binarynet: Training deep neural networks with weights and activations constrained to +1 or -1, CoRR abs/1602.02830.Lin2016TowardsCN S. Lin, R. Ji, X. Guo, X. Li, Towards convolutional neural networks compression via global error reconstruction, in: Proceedings of International Joint Conference on Artificial Intelligence (IJCAI), 2016, pp. 1753–1759.Han2015DeepCC S. Han, H. Mao, W. J. Dally, Deep compression: Compressing deep neural network with pruning, trained quantization and huffman coding, in: Proceedings of International Conference on Learning Representations (ICLR), 2016.He2015DelvingDI K. He, X. Zhang, S. Ren, J. Sun, Delving deep into rectifiers: Surpassing human-level performance on imagenet classification, in: Proceedings of International Conference on Computer Vision (ICCV), 2015, pp. 1026–1034.Nair2010RectifiedLU V. Nair, G. E. Hinton, Rectified linear units improve restricted boltzmann machines, in: Proceedings of International Conference on Machine Learning (ICML), 2010, pp. 807–814.ioffe2015batch S. Ioffe, C. 
Szegedy, Batch normalization: Accelerating deep network training by reducing internal covariate shift, in: Proceedings of International Conference on Machine Learning (ICML), 2015, pp. 448–456.srivastava2014dropout N. Srivastava, G. E. Hinton, A. Krizhevsky, I. Sutskever, R. Salakhutdinov, Dropout: a simple way to prevent neural networks from overfitting, Journal of Machine Learning Research 15 (1) (2014) 1929–1958.iandola2016squeezenet F. N. Iandola, M. W. Moskewicz, K. Ashraf, S. Han, W. J. Dally, K. Keutzer, Squeezenet: Alexnet-level accuracy with 50x fewer parameters and < 0.5mb model size, CoRR abs/1602.07360.liu2011casia C. Liu, F. Yin, D. Wang, Q. Wang, CASIA online and offline chinese handwriting databases, in: Proceedings of International Conference on Document Analysis and Recognition (ICDAR), 2011, pp. 37–41.chen2015beyond L. Chen, S. Wang, W. Fan, J. Sun, S. Naoi, Beyond human recognition: A cnn-based framework for handwritten character recognition, in: Proceedings of Asian Conference on Pattern Recognition (ACPR), 2015, pp. 695–699.Graves2012Supervised A. Graves, Supervised Sequence Labelling with Recurrent Neural Networks, Vol. 385 of Studies in Computational Intelligence, Springer, 2012.zhang2016drawing X. Zhang, F. Yin, Y. Zhang, C. Liu, Y. Bengio, Drawing and recognizing chinese characters with recurrent neural network, CoRR abs/1606.06539.xie2016learning Z. Xie, Z. Sun, L. Jin, H. Ni, T. Lyons, Learning spatial-semantic context with fully convolutional recurrent network for online handwritten chinese text recognition, CoRR abs/1610.02616.
An Efficient Pseudo-likelihood Method for Sparse Binary Pairwise Markov Network Estimation Sinong Geng, Zhaobin Kuang, and David Page University of Wisconsin sgeng2@wisc.edu, zkuang@wisc.edu, page@biostat.wisc.edu ================================================================================================================================= The pseudo-likelihood method <cit.> is one of the most popular algorithms for learning sparse binary pairwise Markov networks. In this paper, we formulate the L_1 regularized pseudo-likelihood problem as a sparse multiple logistic regression problem. In this way, many insights and optimization procedures for sparse logistic regression can be applied to the learning of discrete Markov networks. Specifically, we use the coordinate descent algorithm for generalized linear models with convex penalties <cit.>, combined with strong screening rules <cit.>, to solve the pseudo-likelihood problem with L_1 regularization. Therefore, a substantial speedup can be achieved without losing any accuracy. Furthermore, this method is more stable than the node-wise logistic regression approach on unbalanced high-dimensional data when penalized by small regularization parameters. Thorough numerical experiments on simulated data and real world data demonstrate the advantages of the proposed method. § INTRODUCTION Markov networks are a class of probabilistic graphical models with wide applicability in areas such as image processing <cit.>, multiple testing <cit.>, and computational biology <cit.>. In a Markov network, the conditional independence relationships among random variables are encoded by the structure of the network. The most difficult challenge in estimating binary pairwise Markov networks (BPMNs) has always been the intractable computation related to log-likelihoods, which makes the learning process an NP-hard problem. Therefore, various methods have been proposed in the literature to approximate the log-likelihood function instead of using exact estimation. <cit.> built a contrastive divergence algorithm by directly estimating the derivative of the log-likelihood for a discrete Markov network. <cit.> considered the neighborhood recovery of each variable separately and proposed the node-wise logistic regression (NLR) method. In <cit.>, pseudo-likelihood (PL) was proposed as an approximation to the log-likelihood of a penalized BPMN. According to <cit.>, the PL method is one of the most competitive methods, with faster speed and higher accuracy than the other approaches. However, owing to advances in the optimization of competing methods, the PL method solved by an existing implementation, the BMN package <cit.>, has become the slowest among many learning methods for discrete Markov networks <cit.>; there remains a demand for a more efficient optimization procedure for PL. Meanwhile, L_1-regularized logistic regression (LR), as one of the most widely used generalized linear models (GLMs), has sophisticated implementations that can deliver solutions efficiently. The state-of-the-art implementation of L_1-regularized GLMs leverages coordinate descent <cit.> with variable screening <cit.>, and has become a building block for many other sparse learning problems, such as those in <cit.> and <cit.>. As noticed by several researchers, there are close similarities between the objective functions of PL and LR. <cit.> pointed out that the PL model is related to LR problems.
<cit.> considered PL as an LR problem with symmetric constraints. <cit.> pointed out the equivalence of the objective functions of the two problems and provided an optimization algorithm based on this relationship. However, the advantages of treating the sparse PL model as a regularized LR problem have been neither fully exploited nor sufficiently emphasized. As a result, in a recent empirical study comparing multiple estimation methods for BPMNs <cit.>, optimization procedures with suboptimal efficiency are still considered and benchmarked to solve PL problems. By posing PL as an LR problem in the context of learning an L_1-regularized BPMN, a much faster alternative to the optimization of PL can be attained. Specifically, our work in this paper is summarized as follows: * Using the relationship between PL and LR, we consider an optimization procedure based on the coordinate descent algorithm and variable screening to solve the L_1 regularized PL problem. The procedure in question can be conveniently implemented via the state-of-the-art optimization algorithm for learning sparse generalized linear models, glmnet. Thus, the procedure is called PLG (pseudo-likelihood using glmnet). Achieving a dramatic speedup without losing any accuracy, PLG substantially outperforms the highly visible implementation of PL <cit.>. * Unlike the NLR approach, the PLG procedure maintains its efficiency even when dealing with unbalanced high-dimensional data penalized by small regularization parameters. We provide insights and numerical experiments to demonstrate the superior stability of the PLG method. § BACKGROUND To motivate the optimization approach for PL, we first review the background knowledge on BPMNs. We consider a p-dimensional binary observation 𝐱 = (x_1, x_2, …, x_p)^⊤ ∈ {0,1}^p. In a BPMN, the distribution of 𝐱 is associated with a network with vertex set V = {1, 2, …, p} and edge set E ⊆ V × V. Accordingly, based on the ground truth parameter Θ^*, a p × p symmetric matrix given as: Θ^* = [ θ^*_11 θ^*_12 … θ^*_1p; θ^*_21 θ^*_22 … θ^*_2p; ⋮ ⋮ ⋱ ⋮; θ^*_p1 θ^*_p2 … θ^*_pp ] = [θ^*_ij]_p × p, the joint probability mass function is defined as: P_Θ^*(𝐱) = exp(∑_s∈V θ^*_ss x_s + ∑_(s,t)∈E θ^*_st x_s x_t - Ψ(Θ^*)), where Ψ(Θ), for any symmetric Θ = [θ_ij]_p × p, denotes the log-partition function defined as: Ψ(Θ) = log[∑_𝐱∈{0,1}^p exp(∑_t⩾s⩾1 θ_st x_s x_t)]. In order to estimate (<ref>), one considers the L_1-regularized log-likelihood function for the BPMN: ℒ(Θ; 𝐗) = ∑^p_t⩾s⩾1 θ_st (𝐗^⊤𝐗)_st - NΨ(Θ) - (Nλ/2)∑_s≠t |θ_st|, where (𝐗^⊤𝐗)_st denotes the element in the s^th row and the t^th column of 𝐗^⊤𝐗, given N independent and identically distributed samples 𝐗 = (𝐱_1, 𝐱_2, …, 𝐱_N)^⊤ = [𝐱_ij]_N × p. The goal of dealing with (<ref>) is to estimate Θ^* with the optimal Θ given as: Θ^*_𝐗 = argmax_Θ ℒ(Θ; 𝐗), which can be extremely challenging because of the intractable log-partition function Ψ(Θ). Therefore, instead of maximizing (<ref>), the PL method considers the L_1-regularized pseudo-likelihood function <cit.>, ℒ̂(Θ; 𝐗) = ∑_n=1^N ∑^p_s=1 [𝐱_ns(θ_ss + ∑_t≠s 𝐱_nt θ_st) - Ψ_s(𝐱_n; Θ)] - Nλ ∑_t>s |θ_st|, as a replacement for the penalized log-likelihood function (<ref>), and solves for Θ̂^*_𝐗 = argmax_Θ ℒ̂(Θ; 𝐗) to estimate Θ^*. Here, Ψ(Θ) is replaced by the much simpler Ψ_s(𝐱; Θ) = log[1 + exp(θ_ss + ∑_t≠s x_t θ_st)].
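To make the objective above concrete, the following minimal NumPy sketch (ours, intended only for small p, with all names our own) evaluates the L_1-regularized pseudo-likelihood for a symmetric Θ and a binary data matrix 𝐗.

```python
import numpy as np

def pseudo_likelihood(theta, x, lam):
    """L1-regularized pseudo-likelihood of a BPMN, following the
    definition above.

    theta : (p, p) symmetric parameter matrix.
    x     : (N, p) binary {0,1} data matrix.
    lam   : regularization parameter lambda.
    """
    n_samples = x.shape[0]
    d = np.diag(theta)
    # logits[n, s] = theta_ss + sum_{t != s} x_nt * theta_st
    logits = x @ theta - x * d + d
    # Sum of x_ns * logit - Psi_s(x_n; theta), with Psi_s = log(1 + exp(.))
    ll = np.sum(x * logits - np.logaddexp(0.0, logits))
    penalty = n_samples * lam * np.abs(np.triu(theta, k=1)).sum()
    return ll - penalty
```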
Compared with exact methods that solve (<ref>), the PL method that solves (<ref>) has been shown to be more efficient without sacrificing much accuracy <cit.>. There is also a connection between (<ref>) and the objective function of the NLR algorithm, which separately maximizes ∑_n=1^N [𝐱_ns(θ_ss + ∑_t≠s 𝐱_nt θ_st) - Ψ_s(𝐱_n; Θ)] - Nλ ∑_t>s |θ_st|, for all s ∈ V. In fact, (<ref>) is just the sum of (<ref>) over s. Specifically, <cit.> considered maximizing (<ref>) as an L_1-regularized LR problem with response 𝐲_s = (𝐱_1,s, 𝐱_2,s, …, 𝐱_N,s)^⊤. § CONVERSION FROM A PSEUDO-LIKELIHOOD PROBLEM TO A SPARSE LOGISTIC REGRESSION PROBLEM We now demonstrate the relationship between the objective functions of PL and LR by transforming (<ref>) into a logistic loss function with parameter 𝐰, design matrix 𝐀, and response 𝐲, which are defined subsequently. We first define the parameter 𝐰. Since in LR problems the parameter is a vector instead of a matrix like Θ, we redefine the parameter of a BPMN by stacking the upper triangular elements of Θ column by column into a vector and appending the diagonal elements at the end. Thus, we have 𝐰 = [θ_12, θ_13, θ_23, …, θ_(p-1)p, θ_11, …, θ_pp]^⊤, where the first m := p(p-1)/2 entries are the upper triangular elements and the last p entries are the diagonal elements; for any (s,t) ∈ {(s,t) | s ≠ t, (s,t) ∈ V × V}, the element θ_st is mapped to the j_st^th entry of 𝐰 with j_st = min(s,t) + (max(s,t)-2)(max(s,t)-1)/2. Then, for the definition of the matrix 𝐀 = [𝐀_ij]_Np × (m+p), we recall the indicator function 1(C), which equals 1 if the condition C is satisfied and 0 otherwise. Furthermore, with the row index i = N(s-1)+n, n ∈ {1,2,…,N}, we define 𝐀_ij = 𝐱_nt if there exists (s,t) such that j = j_st; 𝐀_ij = 1(s = j-m) if m+1 ⩽ j ⩽ m+p; and 𝐀_ij = 0 otherwise. Finally, the response 𝐲 is defined as: 𝐲 = (𝐱_11, 𝐱_21, …, 𝐱_N1, …, 𝐱_1p, 𝐱_2p, …, 𝐱_Np)^⊤. With 𝐰, 𝐀, and 𝐲, we can rewrite the first part (the log-likelihood) of (<ref>) as: 𝐲^⊤𝐀𝐰 - ∑_n=1^N ∑_k=1^p Ψ_k(𝐱_n; Θ) = 𝐲^⊤𝐀𝐰 - ∑_k=1^Np log[1 + exp((𝐀)_k^⊤𝐰)], where (𝐀)_k = (𝐀_k1, 𝐀_k2, …, 𝐀_k(m+p))^⊤ denotes the k^th row of 𝐀, and ∑_n=1^N ∑_k=1^p Ψ_k(𝐱_n; Θ) = ∑_k=1^Np log[1 + exp((𝐀)_k^⊤𝐰)] because of (<ref>). Therefore, (<ref>) is equal to 𝐲^⊤𝐀𝐰 - ∑_k=1^Np log[1 + exp((𝐀)_k^⊤𝐰)] - Nλ ∑_t>s |θ_st|, which is exactly the loss function of a penalized LR problem consisting of Np samples with design matrix 𝐀, response 𝐲, and parameter 𝐰. As a result, we have converted an L_1-regularized pseudo-likelihood problem into an LR problem with the objective function (<ref>). Based on the relationship established above, we can solve a sparse PL problem by solving its equivalent sparse LR problem. The consequence is that we can take advantage of the sophisticated optimization procedures for sparse LR problems to compute the solution of a PL problem efficiently. We now consider the optimization procedure for sparse PL problems based on one of the most efficient optimization algorithms for penalized LR problems, the coordinate descent algorithm <cit.>. Furthermore, we also use the initialization procedure provided by the strong screening rule <cit.> for a further speedup. The details of PLG are presented in Algorithm <ref>. § PLG VERSUS NLR WHEN LEARNING FROM UNBALANCED DATA In this section, we compare the performance of the PLG method with that of the NLR algorithm when dealing with unbalanced high-dimensional data under small regularization parameters. When the data are unbalanced, some columns of 𝐗 will be predominantly 0's or 1's. Since NLR uses each column of 𝐗 in turn as a response, columns with predominantly 0's or 1's will also serve as responses in some sparse LR models.
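For illustration, the conversion above can be coded directly; the sketch below (ours, dense and unoptimized, intended only for small p — in practice 𝐀 is extremely sparse and would be formed implicitly) builds 𝐀 and 𝐲 from 𝐗. Note that 𝐲 pools all Np entries of 𝐗, so its class balance equals the overall proportion of ones in 𝐗, even when an individual column is highly unbalanced.

```python
import numpy as np

def plg_design(x):
    """Build the stacked response y and (dense, illustrative) design matrix
    A of the equivalent logistic regression problem of Sec. 3.
    Row i = N*(s-1)+n of A encodes the conditional model of variable s for
    sample n (indices below are 0-based)."""
    n_samples, p = x.shape
    m = p * (p - 1) // 2

    def j_idx(s, t):            # 0-based column of theta_st, s != t
        lo, hi = min(s, t) + 1, max(s, t) + 1   # back to 1-based indices
        return lo + (hi - 2) * (hi - 1) // 2 - 1

    a = np.zeros((n_samples * p, m + p))
    y = np.empty(n_samples * p)
    for s in range(p):
        for n in range(n_samples):
            i = n_samples * s + n
            y[i] = x[n, s]
            a[i, m + s] = 1.0   # indicator column for the diagonal theta_ss
            for t in range(p):
                if t != s:
                    a[i, j_idx(s, t)] = x[n, t]
    return a, y
```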
Unfortunately, in this situation, we observe that NLR becomes extremely slow and may even fail to converge. To make things worse, since the optimal solution gets denser as the regularization parameter decreases, the efficiency of NLR deteriorates further when penalized by small λ's. On the contrary, PLG is more stable in this situation. In the PLG method, the response is chosen to be the longer vector 𝐲 in (<ref>), consisting of all the elements of 𝐗 instead of just one column. Therefore, the unbalanced nature of one column of 𝐗 cannot have a large effect on 𝐲. That is to say, while NLR is dealing with highly unbalanced responses, PLG is still solving an ordinary LR problem with a relatively balanced response, and thus maintains high efficiency even in the context of unbalanced high-dimensional data penalized by small regularization parameters. We illustrate this phenomenon in detail through experiments in Section <ref>. § NUMERICAL EXPERIMENTS In this section, we compare the empirical performance of the PLG algorithm with that of the original implementation of the PL method, BMN <cit.>, and that of the NLR algorithm <cit.>. First, we show that, as a more efficient optimization method, PLG achieves the same objective function value as BMN at a much faster speed. We also apply NLR and PLG to unbalanced high-dimensional data for a deeper understanding of the relative efficiency of the two methods. Furthermore, we evaluate the structure learning performances of the three methods using receiver operating characteristic (ROC) curves as our evaluation metric. The ROC curves of the three methods are shown to be very similar on simulated datasets. Finally, the performances of the three methods on real world data are investigated using the senator voting record dataset <cit.>. Apart from BMN, the best existing implementation of the PL method, we compare PLG only with NLR, although many estimation methods for sparse BPMNs have been proposed in the literature <cit.>. We believe that the comparison between the performance of PLG and that of NLR is representative, because NLR has been empirically shown to have the highest efficiency among competing methods, especially when dealing with high-dimensional data <cit.>. Furthermore, as shown in Section <ref>, NLR also uses a kind of pseudo-likelihood to approximate the log-likelihood and thus has an objective function similar to that of PLG, making the contrast between PLG and NLR a natural comparison. §.§ Simulated Data Generation We use a procedure similar to that in <cit.> to generate the ground truth parameter Θ^* and the synthetic datasets. * The number of vertices, p, in the ground truth network is chosen to be 5, 10, 15, 20, or 25. Each element of Θ is drawn randomly to be non-zero with edge generation probability 𝐏 ∈ {0.2, 0.3, 0.4, 0.5}, and the non-zero elements are distributed uniformly on [-1,1]. * 1000 samples are generated by Gibbs sampling with 1000 burn-in steps. * The results reported from Section <ref> to Section <ref> are averages over 20 trials. §.§ Model Selection Before we proceed to compare the efficiency and accuracy of BMN, NLR, and PLG, we conduct model selection to find the best regularization parameters (best representatives) for the three methods on the different networks. We use StARS, a stability-based regularization parameter selection method for high-dimensional inference of undirected graphs <cit.>, to determine the λ that achieves the best balance between edge selection stability and network sparsity; a simplified sketch of this selection procedure is given below.
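The sketch below is our own simplified reading of such a stability-based selection procedure, not the authors' code: `fit_graph(x, lam)` is an assumed routine returning a boolean (p, p) adjacency estimate (e.g., a thresholded PLG or NLR fit), and the subsample size and instability cutoff follow common defaults.

```python
import numpy as np

def stars_select(x, lambdas, fit_graph, n_sub=20, beta=0.05, seed=0):
    """Pick the smallest lambda whose (monotonized) edge instability,
    estimated over random subsamples, stays below the cutoff beta."""
    rng = np.random.default_rng(seed)
    n_samples, p = x.shape
    b = min(int(10 * np.sqrt(n_samples)), n_samples - 1)  # subsample size
    lambdas = np.sort(lambdas)[::-1]       # path from sparse to dense
    freq = np.zeros((len(lambdas), p, p))
    for _ in range(n_sub):
        idx = rng.choice(n_samples, size=b, replace=False)
        for k, lam in enumerate(lambdas):
            freq[k] += fit_graph(x[idx], lam)
    psi = freq / n_sub                     # edge selection frequencies
    instability = (2 * psi * (1 - psi)).mean(axis=(1, 2))
    instability = np.maximum.accumulate(instability)      # monotonize
    ok = np.where(instability <= beta)[0]
    return lambdas[ok[-1]] if ok.size else lambdas[0]
```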
In addition, since BMN and PLG have the same objective function, it is reasonable to use the same λ's for both, while the λ's for NLR should be half of those for BMN <cit.>. In detail, for the different networks and methods, the selected λ's are summarized in Table <ref>. §.§ Efficiency We now compare the efficiency of BMN, NLR, and PLG. Using the λ's chosen by the model selection procedure in Section <ref>, we apply the three methods to datasets generated by BPMNs with different numbers of vertices and edge generation probabilities 𝐏. The computation time of the learning process, after the selection of the λ's, is reported in Figure <ref>. With the improvement in optimization, PLG outperforms BMN tremendously and becomes comparable to NLR in terms of efficiency. Furthermore, the advantages of NLR and PLG over BMN become more substantial as the networks scale up. This observation is consistent with the existing result that NLR performs better on high-dimensional problems <cit.>. In fact, the comparable performance of NLR and PLG is also to be expected, considering that they apply similar optimization approaches to similar objective functions. Naturally, we want to see whether PLG trades accuracy for acceleration. To this end, we examine the difference between the parameters obtained by PLG (𝐰_P) and those obtained by BMN (𝐰_B). For a clear illustration of the difference, we define the relative difference ϵ between 𝐰_P and 𝐰_B as ϵ = ‖𝐰_P - 𝐰_B‖_2 / ‖𝐰_B‖_2. The ϵ's for the solutions obtained by PLG and BMN in the above experiments are shown in Table <ref>. It should be noticed that PLG achieves nearly the same parameters as BMN. In addition, the relative difference rises with increasing p, since we use the same stopping criterion for all networks. These results suggest that PLG provides a notable improvement in efficiency for PL methods without losing any accuracy. §.§ Structure Estimation In order to demonstrate that PLG also has the ability to estimate the correct structure of the ground truth networks, we compare in Figure <ref> the ROC curves of the three methods for structure estimation on networks consisting of 15 vertices, with edge generation probability 𝐏 ∈ {0.2, 0.3, 0.4, 0.5}, respectively. Overall, we notice that all three methods achieve nearly the same performance, and in some figures the curves even overlap with each other, demonstrating the utility of PLG in structure estimation. This result is not surprising, given its coherence with existing empirical results in the literature <cit.>. §.§ Learning from Unbalanced Data As mentioned above, with the PLG implementation, PL becomes comparable to NLR in efficiency and accuracy. In fact, PLG even outperforms NLR when dealing with unbalanced data under small λ's. Analysis of this particular situation is necessary because, under the joint probability mass function (<ref>), extremely unbalanced data are very likely to be generated if θ_ss ≠ 0 in a BPMN. In addition, although we only use simulated data to illustrate the stability of the two methods on unbalanced data with small λ's, unbalanced data are ubiquitous in practical problems <cit.>, especially in research on gene mutations. Considering the huge number of genes and the small probability of mutations, only a few mutated samples can be observed in practice.
Accordingly, samples in this kind of problem are always unbalanced, and it is thus meaningful to scrutinize this extreme situation in our work. To compare the stability of PLG and NLR on unbalanced data with small λ's, we set the number of vertices to p=10 and compare the computation time under different λ's. In addition, we set θ_1,1=5 when simulating samples, so as to generate unbalanced data. We omit the results of BMN because of its low efficiency. The results are summarized in Figure <ref>. We notice that the efficiency of NLR decays immediately with small λ's and unbalanced data, while our method still maintains a fast speed, as anticipated in Section <ref>. PLG performs similarly to NLR under large λ's, which is consistent with the results in Section <ref>. These results indicate that PLG is readily applicable to unbalanced data. §.§ Real World Experiments In this section, the performances of the three methods on real world data are investigated. We conduct an experiment using the senator voting record, consisting of 279 samples and 100 variables, from the second session of the 109th Congress <cit.>. The task of interest is to investigate the clustering effect of voting consistency. That is to say, we want to find the senators who are more likely to cast similar votes on bills. The details of the experiment are as follows. * Each bill is considered as a sample, and the votes from the senators are the features. * If a senator votes for a bill, the corresponding element in the sample is denoted by 1, and otherwise by 0. Missing data are imputed as 0. * A binary pairwise Markov network is used to model the data, and we learn the network with PLG and BMN. * The λ selected by the model selection procedure in Section <ref> is 0.06. * The vertices represent senators and the edges denote the estimated θ_st, where s ≠ t. Furthermore, only the edges with a positive estimated parameter are displayed. The visualization of the BPMN for the senator voting data is presented in Figure <ref>. First, as would be expected, the senators are divided mainly into a Democratic cluster and a Republican cluster, roughly consistent with the party memberships of the senators. Second, although a Republican, Lincoln Chafee is closer and has more connections to the Democrats. In fact, he joined the Democrats in 2008 <cit.>, and our model can detect his Democratic-leaning voting pattern. Third, since Ben Nelson is "one of the most conservative Democrats" <cit.>, it is not surprising that his voting record is closer to the Republicans and has some connections with them. Furthermore, senators from the same party and the same state tend to have more connections. These findings all coincide with conventional wisdom, suggesting that PLG can capture interesting dependencies in practical problems. We also contrast the efficiency of NLR and PLG on this real world dataset under different λ's in Figure <ref>. Again, the results of BMN are not included because of its long runtime. Similar to the results in Figure <ref>, the computation time is almost the same for the two methods, indicating the high efficiency of PLG in practical problems. § CONCLUSION For the task of accelerating the optimization of PL, we have exploited the equivalence between the objective functions of PL and LR to develop an optimization method for PL models in the context of BPMNs. Experimental results suggest that PLG is a viable candidate for scalable and efficient learning of BPMNs, even in extreme conditions.
Although we focus on binary pairwise Markov networks, our method is generally applicable to other discrete Markov networks whose pseudo-likelihood functions have a close relationship to logistic loss functions.
Network Resource Allocation via Stochastic Subgradient Descent: Convergence Rate Amrit S. Bedi, Student Member, IEEE and Ketan Rajawat, Member, IEEE Manuscript submitted December 30, 2023. This work was supported by the Indo-French Centre for the Promotion of Advanced Research-CEFIPRA. The authors are with the Department of Electrical Engineering, IIT Kanpur, Kanpur (UP), India 208016. ================================================================================================================================== This paper considers a general stochastic resource allocation problem that arises widely in wireless networks, cognitive radio networks, smart-grid communications, and cross-layer design. The problem formulation involves expectations with respect to a collection of random variables with unknown distributions, representing exogenous quantities such as channel gain, user density, or spectrum occupancy. We consider the constant step-size stochastic dual subgradient descent (SDSD) method that has been widely used for online resource allocation in networks. The problem is solved in the dual domain, which results in a primal resource allocation subproblem at each time instant. The goal here is to characterize the non-asymptotic behavior of such stochastic resource allocations in an almost sure sense. It is well known that with a step size of ϵ, SDSD converges to an 𝒪(ϵ)-sized neighborhood of the optimum. In practice, however, there exists a trade-off between the rate of convergence and the choice of ϵ. This paper establishes a convergence rate result for the SDSD algorithm that precisely characterizes this trade-off. Towards this end, a novel stochastic bound on the gap between the objective function and the optimum is developed. The asymptotic behavior of the stochastic term is characterized in an almost sure sense, thereby generalizing the existing results for stochastic subgradient methods. For the stochastic resource allocation problem at hand, the result explicates the rate at which the allocated resources become near-optimal. As an application, the power and user-allocation problem in device-to-device networks is formulated and solved using the SDSD algorithm. Further intuition on the rate results is obtained from the verification of the regularity conditions and accompanying simulation results. Stochastic subgradient, constant step-size, stochastic resource allocation, D2D communication. § INTRODUCTION Resource allocation is a fundamental problem in economic theory that finds application in the design of wireless communication protocols <cit.>, smart grid systems <cit.>, and scheduling algorithms <cit.>. From an optimization perspective, the goal is to find the optimal allocation variables, such as transmit power, bandwidth, operational schedule, or facility locations, so as to maximize the user satisfaction, minimize the cost, and satisfy all system constraints. The stochastic resource allocation problem arises in scenarios where the optimization problem includes random parameters with unknown distributions <cit.>.
For such problems, the goal is to find an allocation policy that is feasible and optimal, on average <cit.> or with high probability <cit.>. Since the policy variable may be infinite-dimensional, the problem is more tractable in the dual domain due to the finite number of constraints, an aspect exploited by a number of algorithms; see e.g., <cit.> and references therein. This paper focuses on the so-called online algorithms, where the allocation must occur every time the random parameter is realized and revealed. For each realization, the resource allocation adheres to the operational or "box" constraints, while the overall allocation policy is only asymptotically feasible and optimal. The dual problem may then be solved using the stochastic subgradient descent method, whose asymptotic behavior is well-known <cit.>. Further justification for solving the problem in the dual domain was provided in <cit.>, where it was shown that such stochastic problems have zero duality gap under some mild conditions. The asymptotic feasibility and optimality of the allocated resources via primal averaging was also established in <cit.>. In a similar vein, the relationship between the stochastic and 'averaged' dual iterates for the power and subcarrier allocation problem in OFDM was established in <cit.>. In resource allocation problems, it is possible for the environmental variables to change abruptly. This motivates the use of constant step sizes in stochastic algorithms, which obviates the need to restart the iterations whenever such a change occurs <cit.>. With a constant step size of ϵ, however, it is well known that the stochastic iterates converge only to an 𝒪(ϵ)-sized neighborhood of the optimum <cit.>. On the other hand, making ϵ arbitrarily small is also impractical, since it results in a slow convergence rate <cit.>. The aforementioned trade-off between the rate of convergence and the asymptotic performance of the constant step-size stochastic dual subgradient descent (SDSD) algorithm is an important aspect that has not been studied explicitly. The goal of this paper is to rigorously characterize the convergence rate of the SDSD algorithm in an almost sure sense. The key contribution of the paper is the development of stochastic bounds on the iterates produced by the SDSD method that explicate the role played by ϵ in 'forgetting' the initial conditions and coming close to the optimum. To this end, the iterations are divided into epochs of duration 1/ϵ, and the optimality gap is analyzed for both fixed and arbitrarily small ϵ. The main result of the paper is that the stochastic component of this gap goes to zero almost surely, either as the number of epochs goes to infinity with fixed ϵ > 0, or as ϵ itself goes to zero. The bounds developed here specialize to the known asymptotic results, and are generally applicable to any problem solved via the SDSD method. To the best of our knowledge, these are also the first such convergence rate results for the constant step-size SDSD algorithm. Corresponding results for the diminishing step-size stochastic subgradient exist, but cannot be readily extended to the present case <cit.>. Likewise, the analysis in <cit.> can be extended to yield rate results that hold on average, but does not yield almost sure bounds. The analysis in the present work makes use of the strong law of large numbers directly, and is completely different from that in <cit.>.
As the second contribution, it is shown that the convergence rate results are readily applicable to the stochastic resource allocation problem of interest here. To further demonstrate the usefulness of the bounds, the paper details a contemporary application that uses mobile caching for improving service via device-to-device (D2D) communication <cit.>. To this end, we consider the D2D edge caching framework where willing users offer data connectivity to highly mobile users experiencing spotty coverage. The problem is well-motivated for vehicular users who may download data from other users residing near the highway. The corresponding resource allocation problem is shown to satisfy the required regularity conditions, thereby demonstrating the flexibility afforded by the SDSD algorithm. This paper is organized as follows. Sec. <ref> lists some of the related work in this area, providing context to the current work. Sec. <ref> starts with detailing the D2D edge caching problem and formulates the general network resource allocation problem. Sec. <ref> discusses the various solution methodologies in the literature, including the stochastic subgradient descent (SSD) framework. Sec. <ref> provides the main results of the paper, stating the convergence rate results for both primal and dual problems. Sec. <ref> further develops the D2D example introduced in Sec. <ref>, and verifies the different conditions required for the convergence results to hold. Simulation results on the D2D example are provided in Sec. <ref>, and Sec. <ref> concludes the paper. The notation used here is as follows. Boldface letters denote column vectors, for which inequalities and equalities are defined component-wise. The set of all real numbers is denoted by ℝ, and likewise the sets of non-negative reals, positive reals, and K-dimensional real vectors are denoted by ℝ_+, ℝ_++, and ℝ^K, respectively. Time indices are denoted by the subscripts t and τ. For a vector 𝐱, [𝐱]_i denotes its i-th entry, ‖𝐱‖ denotes its ℓ_2 norm, ‖𝐱‖_p denotes its p-norm for p ∈ ℝ_+, and 𝐱^T denotes the transposed row vector. The expectation operator is denoted by 𝔼[·] and the inner product is denoted by ⟨·,·⟩. Finally, [c]^a_b := min(max{c,b},a) and [c]_+ := [c]_0^∞. §.§ Related work Stochastic approximation algorithms have a long history, going back to the prototypical adaptive filtering algorithms of Robbins and Monro <cit.> and Widrow and Stearns <cit.>, and have been studied extensively in the context of least mean squares (LMS) and recursive least-squares (RLS) algorithms <cit.>. Stochastic gradient and subgradient methods have since been applied to neural networks <cit.>, parameter tracking <cit.>, large-scale machine learning <cit.>, and resource allocation problems <cit.>. Convergence of these algorithms is well known for various choices of the step-size parameter <cit.>. The convergence rate of the stochastic subgradient descent algorithm has been established for diminishing step-size rules via non-asymptotic analysis <cit.>. However, not much is known about the convergence rate of the constant step-size counterpart, except for the fact that it exhibits linear convergence when far from the optimum, if the objective function is strongly convex <cit.>. The rate analysis presented here fills this gap for a class of convex problems that satisfy certain regularity conditions; see Sec. <ref>. The use of dual subgradient algorithms for deterministic resource allocation was first popularized in <cit.>.
Recovery of near-optimum allocations via primal averaging was proposed in <cit.>, and the result was extended to stochastic resource allocation problems in <cit.>. The corresponding convergence rate analysis for primal recovery was provided in <cit.>, which also serves as a starting point for the analysis presented here. Note however that the extension of the rate results to stochastic problems is not trivial, and does not follow immediately from the result in <cit.>. The specific assumptions required to develop the bounds in this paper are inspired by those used in the context of stochastic approximation and averaging <cit.>. From a broader perspective, the work in this paper is also related to the backpressure algorithm, first proposed in the context of stochastic network optimization <cit.>. As shown in <cit.>, the dual subgradient algorithm, when applied to deterministic resource allocation problems, may also be viewed as the so-called drift-plus-penalty algorithm. The analysis in <cit.>, however, does not translate to convergence rate results for the SDSD algorithm. The wireless caching framework utilizing D2D communications was first proposed in <cit.>, where the fundamental limits were analyzed. The system model described here builds upon the basic framework of <cit.> by formulating the problem within the resource allocation fabric and adding some implementation details. The results presented here may also be applied to other stochastic resource allocation formulations, such as those in broadcast power allocation <cit.>, OFDM <cit.>, beamforming <cit.>, cognitive radio networks <cit.>, network utility maximization <cit.>, demand-response in the smart grid <cit.>, smart-grid-powered green communications <cit.>, and energy harvesting <cit.>. § PROBLEM FORMULATION This section formulates the general stochastic network resource allocation problem. We begin with detailing a D2D caching example that is used to motivate the general problem. §.§ Motivation: D2D Mobile Caching The D2D framework enables direct communication between nearby user equipments (UEs), enabling greater spectrum utilization, higher energy efficiency, and increased overall throughput. The technology also allows unique solutions to connectivity problems that arise at the network edge or as a result of cellular congestion at overcrowded events. As an example, the D2D architecture proposed in <cit.> considers caching of popular content on mobile devices with excess storage. The content files are then available for download over a D2D link, allowing users to reach higher data rates, avoid congestion, and overcome coverage issues. By directly involving the smartphone-equipped users in the process of content distribution, such an edge-caching solution not only cuts down the hardware provisioning costs but also promises better user experience. This example builds upon the mobile caching framework proposed in <cit.>. Specifically, the mobile user equipment (MoUE) seeks to download a large file or stream media for a sufficiently long duration, while maintaining a reasonable download rate or quality of experience. Let ℳ = {1, …, M} be the set of mobile caches in the network and, at time t, let the requested chunk be available at a subset ℳ_t ⊂ ℳ of mobile caches (devices) that are in close proximity to the user. The potential download rate R_i(p_t^i, γ_t^i) depends on the power allocation p_t^i at the i-th mobile cache, as well as on the channel gain γ_t^i of the D2D link.
The downloads also incur a cost c_t^i ∈ ℝ_++ per unit of transmit power p_t^i in slot t. The costs could be in the form of incentives provided to the mobile caches by the content delivery network (CDN) company, and/or directly charged to the MoUE in the form of an "enhanced coverage" fee. At each time t, the MoUE selects a cache i_t to download from, and obtains an average throughput of r over time. Finally, the user satisfaction with the achieved average throughput r is quantified through the concave utility function U(r). Fig. <ref> depicts an example scenario, where a MoUE connects to different UEs in order to download cached data that would otherwise be available only from the base stations. The resulting stochastic resource allocation problem is formulated as max_r,{p^i} U(r) - 𝔼[∑_i∈ℳ_t c_t^i p_t^i] s. t. r ≤ 𝔼[∑_i∈ℳ_t R_i(p_t^i, γ_t^i)], {p^i}_i∈ℳ ∈ 𝒫, r_min ≤ r ≤ r_max, where the expectations are with respect to the random variables θ_t := (ℳ_t, {c^i_t}_i∈ℳ_t, {γ^i_t}_i∈ℳ_t). The optimization variables consist of the power allocation functions p^i and the rate variable r. The formulation in (<ref>) follows the classical "utility minus penalty" maximization format common to network resource allocation problems <cit.>. The set of functions 𝒫 is such that only one out of {p_t^i}_i∈ℳ is non-zero for each t (cf. Sec. <ref>). Consequently, the summations in (<ref>) and (<ref>) consist of only one term for each t. The set 𝒫 also specifies the maximum and minimum values of p_t^i for each t. Finally, the constraint in (<ref>) ensures that the power allocated per time slot is sufficient to satisfy the average rate requirement. The specific form of the rate function depends on the wireless technology used by the users. For instance, under slow fading scenarios, the power allocation and user selection can occur every coherence interval. Since channel state information can be acquired easily, the users may employ adaptive modulation and coding in order to achieve a rate close to the ergodic capacity. Specifically, for mobile device i, the potential transmission rate is of the form R_i(p_t^i, γ_t^i) := W log_2(1 + p_t^i γ_t^i/α), where W is the bandwidth of the channel and α includes the effect of noise and interference, as well as of other impairments such as the use of finite block length codes and imperfect channel state information at the transmitter. More realistically, under fast fading scenarios, the power allocation must occur over intervals that are significantly longer than the coherence time. In this case, it is more reasonable to consider the average rate over several coherence intervals, R_i(p_t^i, γ_t^i) := W 𝔼[log_2(1 + p_t^i γ_t^i h_i/α)], where h_i is the small-scale fading gain <cit.>. In this case, the power allocation and user selection occur only on the basis of the average channel gain γ_t^i, which changes slowly. It is remarked that the system model considered here allows other forms of the rate function as well. §.§ Stochastic Resource Allocation This section considers the more general stochastic resource allocation problem where the formulation involves expectations with respect to a collection of q random variables with unknown distributions, denoted by θ ∈ ℝ^q. Of particular interest are the problems arising in the context of wireless communications and networks, where θ captures the state of the system, and the formulation takes the form <cit.> 𝖯: (𝐱^⋆, 𝐩^⋆) = argmax f_0(𝐱) s. t. 𝐮(𝐱) + 𝔼[𝐯(θ, 𝐩_θ)] ≥ 0, 𝐱 ∈ 𝒳, 𝐩 ∈ 𝒫. The optimization variables in 𝖯 comprise the resource allocation variable 𝐱 ∈ ℝ^d and the policy function 𝐩: ℝ^q → ℝ^p.
The objective function f_0: ℝ^d → ℝ is concave, while the set 𝒳 is compact and convex. The vector-valued constraint function is defined as 𝐮(𝐱) := [u_1(𝐱) ⋯ u_K(𝐱)]^T, where the {u_i: ℝ^d → ℝ}_i=1^K are concave functions. In contrast, no such restriction is placed on the vector-valued function 𝐯: ℝ^q × ℝ^p → ℝ^K or on the compact set of functions 𝒫. The rate analysis in Sec. <ref> does however require the overall problem to satisfy certain regularity properties, such as Slater's constraint qualification and differentiability of the subgradient error; see (A1)-(A4). It can be seen that the D2D edge caching problem in (<ref>) is a special case of (<ref>)-(<ref>). Introducing a scalar variable z ∈ ℝ, it is possible to write (<ref>) equivalently as max_r,{p^i} U(r) + z s. t. -z - 𝔼[∑_i∈ℳ_t c_t^i p_t^i] ≥ 0, -r + 𝔼[∑_i∈ℳ_t R_i(p_t^i, γ_t^i)] ≥ 0, {p^i}_i∈ℳ ∈ 𝒫, r_min ≤ r ≤ r_max. Comparing (<ref>) with (<ref>)-(<ref>), we see that 𝐱 = [r, z]^T and θ_t := (ℳ_t, {c^i_t}_i∈ℳ_t, {γ^i_t}_i∈ℳ_t). Likewise, the forms of the vector functions 𝐮 and 𝐯 can be readily inferred. Since the distribution of θ is not known in advance, it is generally not possible to solve 𝖯 in an offline manner. The goal here is to solve 𝖯 in an online fashion by observing the realizations of the independent identically distributed (i.i.d.) process {θ_t}_t∈ℕ_0, where ℕ_0 is the set of non-negative integers. For most problems of interest, such a framework also entails online allocation of resources at each time t. To this end, the class of algorithms considered here outputs a sequence of vector pairs {𝐱_t, 𝐩_t} for each t, for the purpose of allocating resources. For the sake of brevity, we subsequently denote the policy function 𝐩_t := 𝐩_θ_t and 𝐟_t(𝐱, 𝐩_t) := 𝐮(𝐱) + 𝐯(θ_t, 𝐩_θ_t), so that (<ref>) can equivalently be written as 𝔼[𝐟_t(𝐱, 𝐩_t)] ≥ 0. Here, it is understood that the expectation is with respect to the random vector θ_t. Having introduced the problem at hand and an example formulation in the context of D2D mobile caching, we now turn to the solution methodology. § SOLUTION VIA DUAL DESCENT This section details the SDSD algorithm for solving 𝖯 in an online fashion. To this end, the basic assumptions are first stated (Sec. <ref>), followed by the SDSD algorithm (Sec. <ref>) and a discussion of the known results (Sec. <ref>). §.§ Basic assumptions The following assumptions are commonly utilized by the different dual algorithms proposed in the literature. None of these assumptions is too restrictive, and they can be easily verified for most resource allocation problems of interest. A1. Non-atomic probability density function: The random variable θ_t has a non-atomic probability density function (pdf). A2. Slater's condition: There exists a strictly feasible pair (𝐱̃, 𝐩̃), i.e., 𝔼[𝐟_t(𝐱̃, 𝐩̃_t)] > 0. A3. Bounded subgradients: The function 𝐟_t(·,·) takes bounded values, i.e., there exists a constant G < ∞ such that ‖𝐟_t(·,·)‖ ≤ G for all t ∈ ℕ_0. In (A1), for θ_t to have a non-atomic pdf, it should not have any point masses or delta functions. Note that this requirement is not restrictive for a number of applications arising in wireless communications; see e.g. <cit.>. Slater's condition is also not restrictive, since a strictly feasible resource allocation can often be found for most real-world problems; see Sec. <ref> and <cit.> for examples. Finally, the bound in (A3) also holds for most resource allocation problems where the functions 𝐟_t(·,·) represent natural quantities such as the instantaneous rate (cf. (<ref>)), the indicator function for channel outage <cit.>, or the household power consumption <cit.>.
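As a quick sanity check (ours, under assumed box constraints z ∈ [z_min, z_max] and 0 ≤ p_t^i ≤ p_max, with bounded prices c_t^i ≤ c_max and bounded gains γ_t^i ≤ γ_max), these assumptions can be verified for the D2D formulation (<ref>): since only one p_t^i is non-zero per slot, |[𝐟_t]_1| = |-z - ∑_i c_t^i p_t^i| ≤ max{|z_min|, |z_max|} + c_max p_max and |[𝐟_t]_2| = |-r + ∑_i R_i(p_t^i, γ_t^i)| ≤ r_max + W log_2(1 + p_max γ_max/α), so (A3) holds. Likewise, fixing any p̄ ∈ (0, p_max], r = r_min, and z strictly below -𝔼[c_t^i p̄] yields 𝔼[𝐟_t] > 0 provided 𝔼[R_i(p̄, γ_t^i)] > r_min; that is, (A2) holds whenever the rate requirement r_min is achievable on average.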
Having introduced the basic assumptions, we are ready to state the SDSD algorithm. §.§ The stochastic dual subgradient algorithm Towards solving 𝖯, consider the more tractable dual formulation, which has a finite number of optimization variables. Introducing a dual variable λ ∈ ℝ^K_+ corresponding to the constraint (<ref>), the Lagrangian is given by L(λ, 𝐱, 𝐩) = f_0(𝐱) + ⟨λ, 𝔼[𝐟_t(𝐱, 𝐩_t)]⟩, where the constraints in (<ref>) are kept implicit. The dual function is obtained by maximizing L(λ, 𝐱, 𝐩) subject to (<ref>), that is, g(λ) = max_𝐱∈𝒳, 𝐩∈𝒫 L(λ, 𝐱, 𝐩). Finally, the dual problem of 𝖯 is given by 𝖣 = min_λ ≥ 0 g(λ). In general, since 𝖯 may be non-convex, it holds that 𝖣 ≥ 𝖯. It was shown in <cit.> however, that under (A1)-(A3), it holds that 𝖯 = 𝖣. The proof utilizes the Lyapunov convexity theorem, and holds even if at least one element of θ_t has an absolutely continuous cumulative distribution function (cdf) <cit.>. It is remarked that Lyapunov convexity has previously yielded similar results in control theory <cit.>, economics <cit.>, and wireless communications <cit.>. The result on the zero duality gap legitimizes the dual descent approach, since the dual problem is always convex, and the resultant dual solution can be used for primal recovery. To this end, similar problems in various contexts have been solved via the classical dual descent algorithm <cit.>, wherein the primal updates utilize various sampling techniques. This paper considers the ergodic stochastic optimization (ESO) algorithm proposed in <cit.> for a similar problem[The ESO algorithm is a stochastic dual subgradient descent algorithm applied to a resource allocation problem in <cit.>.]. Applied to 𝖯, the ESO algorithm starts with an arbitrary λ_0, and utilizes the following iterations for t ∈ ℕ_0, (𝐱_t(λ_t), 𝐩_t(λ_t)) = argmax_𝐱∈𝒳, 𝐩_t∈Π_t f_0(𝐱) + ⟨λ_t, 𝐟_t(𝐱, 𝐩_t)⟩, λ_t+1 = [λ_t - ϵ 𝐟_t(𝐱_t(λ_t), 𝐩_t(λ_t))]_+. Here, Π_t := {𝐩_θ_t ∈ ℝ^p | 𝐩 ∈ 𝒫} is the set of all legitimate values of the vector 𝐩_θ_t, and 𝒳 contains all feasible vectors 𝐱_t(λ_t). The ESO algorithm is motivated by the fact that for any λ ∈ ℝ^K_+, 𝐟_t(𝐱_t(λ), 𝐩_t(λ)) is a stochastic subgradient of the dual function g(λ). Consequently, the updates in (<ref>) amount to solving (<ref>) via the SSD algorithm with a constant step size. The use of a constant step size is motivated by classical short-memory adaptive algorithms such as the least mean squares algorithm. As stated earlier, constant step-size algorithms can even handle abrupt changes in the problem parameters, without being restarted. This paper considers the projected variant of the SSD algorithm for the dual updates. Specifically, the updates in (<ref>) are projected onto a compact set ℒ ⊂ ℝ_+^K, and take the following form λ_t+1 = 𝒫_ℒ(λ_t - ϵ 𝐟_t(𝐱_t(λ_t), 𝐩_t(λ_t))), where for any λ ∈ ℝ^K and 1 ≤ i ≤ K, [𝒫_ℒ(λ)]_i = 0 if [λ]_i < 0; [𝒫_ℒ(λ)]_i = λ_max if [λ]_i > λ_max; and [𝒫_ℒ(λ)]_i = [λ]_i if 0 ≤ [λ]_i ≤ λ_max. In other words, large values of [λ]_i are truncated to λ_max, where λ_max ≫ ‖λ^⋆‖_∞. Such a modification is already applicable to any practical implementation of the SSD algorithm, where λ_t is not allowed to take arbitrarily large values. Since λ^⋆ is not known in advance, a bound on ‖λ^⋆‖_∞ is derived in Appendix <ref> using (A2). Consequently, the following rule can be used for choosing λ_max in practice: λ_max ≫ (g(λ) - f_0(𝐱̃))/χ(𝐱̃, 𝐩̃) ≥ (g(λ) - f_0(𝐱̃))/G, where λ ∈ ℝ^K_+, (𝐱̃, 𝐩̃) is a strictly feasible solution to 𝖯 (cf. (A2)), G is the subgradient bound (cf. A3), and χ(𝐱̃, 𝐩̃) := min_1≤k≤K [𝔼 𝐟_t(𝐱̃, 𝐩̃_t)]_k. In general, the quantity χ(𝐱̃, 𝐩̃) may be calculated empirically. However, for many problems of interest, a bound on λ_max may arise naturally (cf. Sec. VI).
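To make the updates (<ref>)-(<ref>) concrete, the following sketch (entirely ours) applies them to a single-constraint variant of the D2D problem of Sec. <ref>, keeping the cost inside the objective instead of using the epigraph variable z; the utility U(r) = log r, the distributions, and all numerical constants are assumptions made for illustration. The per-slot Lagrangian maximizer is then available in closed form, and the dual update uses the projection in (<ref>).

```python
import numpy as np

rng = np.random.default_rng(1)
W, alpha, p_max = 1.0, 1.0, 2.0       # bandwidth, noise level, power cap
r_min, r_max = 0.1, 5.0               # rate box constraints
eps, lam_max, T = 0.05, 50.0, 20000   # step size, projection bound, horizon
lam = 1.0                             # dual variable of the rate constraint

for t in range(T):
    M_t = rng.integers(1, 4)              # number of nearby caches
    gamma = rng.exponential(1.0, M_t)     # channel gains (non-atomic pdf)
    c = rng.uniform(0.5, 1.5, M_t)        # per-unit power prices
    # Per-cache maximizer of lam*W*log2(1 + p*g/alpha) - c*p over [0, p_max]:
    p = np.clip(lam * W / (c * np.log(2)) - alpha / gamma, 0.0, p_max)
    val = lam * W * np.log2(1 + p * gamma / alpha) - c * p
    i = int(np.argmax(val))               # cache selection (one non-zero p)
    R = W * np.log2(1 + p[i] * gamma[i] / alpha)
    # r-update: argmax of log(r) - lam*r over [r_min, r_max] is 1/lam clipped.
    r = np.clip(1.0 / max(lam, 1e-8), r_min, r_max)
    # Projected stochastic dual subgradient step:
    lam = float(np.clip(lam - eps * (R - r), 0.0, lam_max))
```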
The projected SSD update proposed in (<ref>) ensures that the iterates λ_t stay bounded for all t ∈ ℕ_0. This boundedness is required for carrying out the rate analysis in Sec. <ref>. §.§ Known results The asymptotic properties of the SDSD algorithm with constant step size are well-known <cit.>. Asymptotic convergence results for the ESO algorithm, applied to a slightly different resource allocation problem, were established in <cit.>. The results in <cit.> can readily be extended to 𝖯 solved via projected SDSD, and take the following form: lim_t→∞ (1/t)∑_τ=0^t-1 𝐟_τ(𝐱_τ(λ_τ), 𝐩_τ(λ_τ)) ≥ 0 a.s., lim_t→∞ f_0(𝐱̄_t) ≥ 𝖯 - ϵG̅^2/2 a.s., where the running average 𝐱̄_t is defined as 𝐱̄_t = (1/t)∑_τ=0^t-1 𝐱_τ(λ_τ) and G̅^2 is the bound on ‖𝐟_t(𝐱_t(λ_t), 𝐩_t(λ_t))‖^2. An important feature of the stochastic algorithm is that the primal updates in (<ref>) can be used for allocating resources in real time. Further, such allocations will be asymptotically feasible and near-optimal for almost every realization of the random process {θ_t}_t∈ℕ_0. § CONVERGENCE RATE RESULTS This section develops various results regarding the rate of convergence of the SDSD algorithm. In contrast to the asymptotic results in (<ref>), the goal here is to quantify the rate at which the allocations specified by (<ref>) become optimal. Such results are of practical significance to protocol designers, since they can be used to estimate the number of iterations required for the primal and dual objectives to become near-optimal. In the case of the constant step-size SDSD, the convergence rate also depends on the step-size parameter ϵ. For instance, it is well known that the choice ϵ → 0, motivated by the result in (<ref>), leads to slow convergence in all constant step-size (sub-)gradient descent algorithms. The results presented here provide a precise characterization of the trade-off between ϵ and the convergence rate for the SDSD algorithm. As in <cit.>, the results in this section make use of the strong law of large numbers, and thus hold for almost every realization of the i.i.d. process {θ_t}_t∈ℕ_0. It is emphasized that the analysis presented here is quite different from the standard convergence analysis carried out for the SSD algorithm and its variants <cit.>. It is also different from the non-asymptotic analysis for the case of diminishing step-size SSD algorithms, which only applies to ensemble averages <cit.>. Furthermore, the rate results presented in <cit.> apply only to the unconstrained stochastic subgradient algorithm, and cannot be extended to constrained problems (cf. 𝖯) or to the projected subgradient algorithm (cf. (<ref>)). The results are first developed for the general SSD algorithm (Sec. <ref>), and subsequently specialized to the resource allocation problem at hand (Sec. <ref>). §.§ Convergence rate for the SSD algorithm This section considers the generic optimization problem λ^⋆ = argmin_λ∈Λ g(λ), where Λ ⊂ ℝ^K is a closed, compact, and convex set, and max_λ∈Λ ‖λ - λ^⋆‖ ≤ Λ_max < ∞. Similar to (<ref>), the optimum function value is denoted by 𝖣 = g(λ^⋆). Given λ ∈ Λ, let 𝐠(λ) ∈ ∂g(λ) be a subgradient of g(λ). Similar to 𝖯, let 𝐠̂_t(λ) := 𝐠_θ_t(λ) for all t ∈ ℕ_0 be the corresponding stochastic subgradients that depend on the i.i.d. process {θ_t}_t∈ℕ_0 and satisfy 𝔼[𝐠̂_t(λ)] = 𝐠(λ) for any λ ∈ Λ. For instance, in the simplest case, the stochastic subgradient could be of the form 𝐠̂_t(λ) = 𝐠(λ) + ζ_t, where ζ_t is a zero-mean i.i.d. random variable. The optimization problem (<ref>) is solved via the projected SSD algorithm, λ_t+1 = 𝒫_Λ(λ_t - ϵ 𝐠̂_t(λ_t)), where 𝒫_Λ(·) denotes the projection onto Λ.
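As a toy illustration of (<ref>)-(<ref>) — our own example, not from the paper — the snippet below runs the projected SSD iteration on g(λ) = 𝔼[(λ - θ_t)^2] with θ_t ∼ 𝒩(1,1) and Λ = [0, 10]; the steady-state spread around λ^⋆ = 1 shrinks with ϵ, while the initialization λ_0 = 8 is forgotten more slowly, previewing the trade-off quantified next.

```python
import numpy as np

rng = np.random.default_rng(0)

def run_ssd(eps, T, lam0=8.0):
    """Projected SSD on g(lam) = E[(lam - theta)^2], theta ~ N(1, 1)."""
    lam, path = lam0, np.empty(T)
    for t in range(T):
        theta = rng.normal(1.0, 1.0)
        # Stochastic gradient 2*(lam - theta); projection onto [0, 10].
        lam = float(np.clip(lam - eps * 2.0 * (lam - theta), 0.0, 10.0))
        path[t] = lam
    return path

for eps in (0.1, 0.01, 0.001):
    tail = run_ssd(eps, T=200_000)[-10_000:]
    print(eps, np.mean((tail - 1.0) ** 2))  # neighborhood size grows with eps
```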
The algorithm is initialized with an arbitrary _0 ∈Λ such that B_0:=_0-^⋆ < ∞. Next, we make certain assumptions specific to (<ref>). To this end, define the stochastic error _t():= _t()-(), and observe that for any ∈Λ, the sequence {_t()} is also i.i.d. A3^'.Bounded subgradients:There exists constant G < ∞ such that _t()≤ G for all ∈Λ. A4.Continuously differentiable error:The error function _t() is continuously differentiable on Λ, and the gradient with respect tosatisfies ∇__t() < G_e < ∞. The requirement for bounded subgradient in (A3^') is analogous to that in (A3) for . Here, (A3^') is stated separately because the problem in (<ref>) is more general than the dual of (<ref>)-(<ref>). In practice, applying the results of this section to (<ref>) entails substituting _t(λ)=_t((),_t(), which makes (A3') the same as (A3). The error function _t() may not always be continuous or differentiable for the problem at hand, and the same must be verified explicitly. It is emphasized that (A4) need only be checked for _t() and not for _t(), which is still allowed to be non-differentiable; see Sec. <ref>. As an example, consider the class of problems where the convex objective function takes the form g() =ℓ_t() + r(), where ℓ_t() is a twice-differentiable loss function that depends on the `data index' t, and r() is a possibly non-differentiable regularizer. For such problems, the error function becomes _t() = ∇_ℓ_t() -∇_ℓ_t(), which is clearly differentiable. Further, the `loss-plus-regularizer' problem structure is quite general, and includes well-known formulations such as LASSO <cit.> and nuclear norm regularized matrix least squares <cit.>. Specifically, given regressands {y_t} and regressors {_t}, the objective function in the LASSO formulation takes the form ∑_t (y_t -_t,) +_1 and thus adheres to (A4). The first result is regarding the objective function values obtained from (<ref>), and holds for all T. Under (A1)-(A4) and for T = n/ϵ, the minimum dual function value is bounded as min_0≤τ≤ T-1 g (_τ) ≤1/T∑_τ = 0^T-1 g(_τ)≤𝖣 + B_0/2n + ϵ G^2/2 + C_T(n,ϵ) where the random variable C_T(n,ϵ) holds for ζ > 0, ϵ^ζ-1/2 C_T(n,ϵ) → 0 a.s. as ϵ→ 0for fixed n < ∞n^1/2-ζ C_T(n,ϵ) → 0a.s. asn →∞ for fixed ϵ > 0 . ℙ(C_t(n,ϵ) > ν^ζ-1/2) < Aexp(-ν^2ζ) where ν :=max{1/ϵ,n} and A < ∞ is a constant that does not depend on n or ϵ. It is remarked that since g(·) is convex, the bound in (<ref>) also holds for g(_t), where _t : = 1/t∑_τ = 0^t-1_τ is the running average of the iterates. Theorem 1 characterizes the manner in which the minimum objective function value approaches 𝖣 for large t. Of the three terms in this optimality gap, the first one depends on the initialization and decays as Ø(1/n). The second term depends on the subgradient bound, and decays linearly with the step-size ϵ. Finally, the third term is random, and decays almost surely as Ø((ϵ/n)^1/2-ζ) for any ζ >0 (cf. (<ref>)). Alternatively, the probability of the third term being non-zero decays exponentially as either n →∞ or ϵ→ 0 (cf. (<ref>)). Indeed, for a given run of (<ref>) with a fixed ϵ, the probability of the third term being non-negligible starts to decrease only beyond n > 1/ϵ or equivalently, T > 1/ϵ^2. Further intuition on the convergence rate can be obtained by considering the two cases in(<ref>). When ϵ > 0 is fixed, it can be seen that the asymptotic results in <cit.> follow directly from Theorem <ref> as n →∞. 
That is, while the initial condition is “forgotten” for t ≫ 1/ϵ, the optimality gap does not necessarily approach zero, but is eventually bounded by ϵ G^2/2. At the same time, the fluctuations due to the stochastic term subside exponentially fast; see (<ref>). On the other extreme, consider the case when n is kept fixed, while the algorithm is run for different values of ϵ. For the scenarios when ϵ is arbitrarily small, the asymptotic optimality gap is clearly negligible. However, for such small step-sizes, the algorithm takes a long time to forget the initial conditions, since the first term decays only as Ø(1/ϵ t). Consequently, for all runs when ϵ is taken to be small, the algorithm will appear to converge slowly. Likewise, the probability of the stochastic term being non-negligible starts to decrease exponentially only for T > 1/ϵ^2 (cf. (<ref>)). It is remarked that such a trade-off also applies to the classical subgradient method <cit.>, and the result in Theorem 1 can be viewed as its stochastic counterpart. It is remarked that the results in <cit.> can be readily obtained by taking expectation on both sides of (<ref>) since we have that C_t(n,ϵ)=0. Observe further that unlike the results in <cit.> that hold on an average, the almost sure results in (<ref>) cannot be specified in terms of problem parameters alone. Indeed, while it holds that C_t(n,) ≤ 2G^2(Λ_max+1), such a bound is not very useful in the present case, as compared to the stronger convergence rate result in (<ref>). Finally, it is remarked that it may be possible to minimize the bound in (<ref>) to the extent possible, by fixing T and choosing a corresponding step size. In the present case, given T, the bound is the smallest when ϵ = 1/√(T) which yields the following resultmin_0≤τ≤ T-1 g (_τ) ≤1/T∑_τ = 0^T-1g(_τ)≤𝖣 + 𝒪(1/√(T))+ C_Twhere the random variable C_T = 𝒪(T^-1/4) almost surely. The result in Theorem 1 may therefore be seen as the generalization of the results in <cit.> that have also reported an 𝒪(T^-1/2) bound on average but have not analyzed the almost sure behavior. It is emphasized however that in practice, minimizing the bound may not necessarily translate to an improved convergence rate. Moreover, the number of iterations T for which the algorithm runs may not necessarily be known in advance, e.g., in target tracking applications. Instead, it may be simpler to specify a fixed value of ϵ, and continue to run the algorithm till the contribution of the 𝒪(1/n) term becomes tolerably small. Before proceeding with the proof of Theorem <ref>, an intermediate lemma establishing rate results on various time-averages is provided. The proof of Theorem <ref> will subsequently utilize these results by expressing the optimality gap in (<ref>) in terms of these time-averages. Let :={t_1, t_2, …, t_} be a set of natural numbers such that t_i ≠ t_j. Then for any T ≥ and , ' ∈Λ, it holds under (A3^')-(A4) that 1/T∑_t ∈_t() ≤ L^1_T() 1/T∑_t ∈_t() - _t(') ≤ L^2_T()-' where, for a given ζ > 0,the random variables {L^i_T()}_i=1,2 satisfy T^1/2-ζL^i_T()→ 0a.s. asT →∞ ℙ(L^i_T() > T^ζ-1/2)< A_i exp(-T^2ζ) where the constant A_i < ∞ does not depend on T. Observe that the i.i.d. process {_t()}_t ∈ is zero-mean and satisfies _t()≤_t() + ()≤ 2G where the last inequality holds from (A4). Therefore, it follows from the strong law of large numbers that for any T ≥, 1/T∑_t∈_t() →0 almost surely as T →∞. It can also be seen that the same holds for L_T^1() := 1/T∑_t∈_t(). 
The rate results in (<ref>) hold as consequences of the strong law of large numbers for i.i.d. sequences with bounded moments; see <cit.> for (<ref>). Finally, (<ref>), follows from the Bernstein inequalityapplied to i.i.d. zero-mean and bounded random vectors {e_t()} <cit.>. Denote the j-th entry of _t() by e^j_t() for 1 ≤ j ≤ K. From (A3^')-(A4), we have that e^j_t() is bounded and continuously differentiable on Λ. Consider arbitrary ≠'∈Λ, and observe that since Λ is convex, it holds for any β∈ [0,1] that _β:=β+ (1-β)' ∈Λ. It is now possible to use the mean-value theorem, which guarantees that there exists some β_j ∈ [0,1], such that e^j_t() - e^j_t(') = ⟨∇ e^j_t(_β_j), -' ⟩. Here, ∇^j_t(_β_j) is an i.i.d. random variable that is also zero-mean, since for continuously differentiable and bounded functions(cf. (A4)), we have that ∇^j_t(_β_j) = ∇e^j_t(_β_j) = 0. Taking summation in (<ref>) and stacking the K components, it follows for any T ≥, that 1/T∑_t∈_t() - _t(') = _T(,')(-') where the K × K matrix _T(,') is defined as [_T(,')]_jk : = 1/T∑_t∈[∇ e^j_t(_β_j)]_k, where the subscript is used to denote the k-th element of vector ∇ e^j_t(_β_j). Applying the Cauchy-Schwarz inequality to (<ref>), we obtain 1/T∑_t∈_t() - _t(')≤_T(,')-'. From the strong law of large numbers, we have that [_T(,')]_jk→ 0 almost surely as T →∞ for all 1≤ j,k≤ K. It can be seen that the same also holds for L^2_T():=_T(,'). Finally, the rate results in (<ref>) follow from <cit.>^2 and the Bernstein inequalityapplied to i.i.d. zero-mean and bounded random variables {[∇ e_t^j(_β_j)]_k} <cit.>. The proof of Theorem <ref> follows in two steps: the derivation of the overall form required in (<ref>), presented next; and the analysis of the random term C_t(n,ϵ) deferred to Appendix <ref>. [Proof of Theorem <ref>] In order to derive the bound in (<ref>), recall that since g() is convex, we have that, g()≤ g() + (), - t ∈ℕ_0. Letting g_t := g(), it follows from the non-expansive property of · that δ_t+1 := _t+1-^2 =- μ()-^2 ≤- - μ()^2 =-^2 - 2μ(),- + μ^2()^2 ≤δ_t - 2μ(),- + μ^2G^2- 2μ()- (),-≤δ_t - 2μ(g_t-𝖣) -2μ()-(),-+ μ^2G^2 where (<ref>) follows from (A3') and (<ref>) follows from (<ref>). Rearranging(<ref>) yields 2μ(g_t-𝖣) ≤ (δ_t -δ_t+1)-2μ()-(),-+ μ^2G^2 Taking sum over τ = 0, 1, …, t-1 and noting that B_0=δ_0 and that μ t ≥ n, yields 1/t∑_τ = 0^t-1g_τ ≤𝖣+δ_0 -δ_t/2μ t-1/ t∑_τ = 0^t-1_τ(_τ)-(_τ),_τ-+μ G^2/2≤𝖣 +B_0/2n+μ G^2/2+ C_t(n,ϵ) where the last inequality follows since δ_t≥ 0 and the stochastic term in (<ref>) is defined as C_t(n,ϵ) := 1/ t∑_τ = 0^t-1_τ(_τ)-(_τ),_τ-. Since (<ref>) is of the same form as required in (<ref>), it remains to show that C_t(n,ϵ) converges in the sense of (<ref>)-(<ref>). The convergence analysis for C_t(n,ϵ) makes use of the bounds developed in Lemma <ref> and is deferred to Appendix <ref>. It is remarked that the results in Theorem <ref> can likely be generalized to the case when Λ is not necessarily compact. Such a generalization is likely possible because the strong law of large numbers, as well as the rate results in <cit.> and <cit.> only require the random process to have bounded moments. Nevertheless, the requirement that _t-≤Λ_max < ∞ is not too restrictive, and greatly simplifies the analysis. §.§ Convergence rate for the SDSD algorithm In order to apply the results developed in Sec. <ref> to the dual problem (<ref>), observe that the stochastic subgradient of g() for any ∈^K_+ is given by ()= (_t(),_t())_t():= _∈ f_0() + ,(̆) _t():= _∈Π_t,(̌_t,). 
With () as defined in (<ref>), the projected SSD updates take the same form as (<ref>), with Λ_max = 2√(K)λ_max. Further the bound required in (A3^') follows from (A3). Therefore, Theorem <ref> applies as is to the dual objective function under (A3) and (A4). For the resource allocation problem however, the behavior of the primal objective function is more important. The subsequent theorem characterizes the primal near-optimality when the running average of {_t(_t)} is used for allocating resources. For the purpose of rate analysis, time is divided into epochs of duration 1/ϵ each, and the result is expressed in terms of ϵ and n. Under (A1)-(A4), and for n/ϵ≤ t < (n+1)/ϵ, the average primal objective function is near optimal in the following sense: f_0(_t) ≥1/t∑_τ = 0^t-1f_0(_t) ≥𝖯 - R_0/2n - ϵ G^2/2 - C'_t(n,ϵ) where R_0:=_0^2, _t = 1/t∑_τ = 0^t-1_τ, and the random variable C'_t(n,ϵ) is such that for ζ > 0, ϵ^ζ-1/2C'_t(n,ϵ)→ 0a.s. as ϵ→ 0for fixed n < ∞n^1/2-ζC'_t(n,ϵ)→ 0 a.s. asn →∞ for fixed ϵ > 0 ℙ(C_t(n,ϵ) > ν^ζ-1/2) < Aexp(-ν^2ζ) where ν :=max{1/ϵ,n} and A < ∞ is a constant that does not depend on n or ϵ. The termC'_t(n,ϵ) in Theorem <ref> is very similar to C_t(n,ϵ) in Theorem <ref>, and therefore decays at the same rate. It follows from Theorem <ref> that the resource allocation yielded by the projected SDSD algorithm is near optimal since the average primal objective value is close to 𝖯. Similar to (<ref>), the bound in (<ref>) also holds for max_0 ≤τ≤ t-1 f_0(_τ), as well as for f_0(_t). Further, the optimality gap in (<ref>) is also similar to the one in (<ref>), and therefore decays at the same rate. For details, see the discussion after the statement of Theorem <ref>. In order to prove Theorem <ref>, the specific form of the bound in (<ref>) is first established. The rest of the proof is much the same as before, and results from Lemma <ref> are again used to derive the bounds on C'_t(n,ϵ) as in the proof of Theorem <ref>. [Proof of Theorem <ref>] Recall that the subgradient of g() is given by () = (_t(), _t()), so that g()=f_0(()) + ,(). Since f_0 is concave, the following inequalities hold: f_0(_t)≥1/t∑_τ = 0^t-1 f_0(_τ)= 1/t∑_τ = 0^t-1(f_0(_τ) + _τ,(_τ)) - 1/t∑_τ = 0^t-1_τ,(_τ) = 1/t∑_τ = 0^t-1 g(_τ) - 1/t∑_τ = 0^t-1_τ,(_τ)≥ g() - 1/t∑_τ = 0^t-1_τ,(_τ). Next, the second term in (<ref>) can be bounded as _t+1^2=- μ()^2 ≤ - μ()^2 ≤^2 - 2μ,()+μ^2()^2 -2μ()-(), ⇒ 2μ,() ≤_t^2 - _t+1^2 + μ^2G^2 -2μ()-(), where, (<ref>) follows from the non-expansiveness property of the projection operator and from the fact that 0∈Λ (_t+1=_t+1-0), and (<ref>) follows from (A3). Taking sum over τ = 0, 1, …, t-1 and dividing by 2ϵ t yields 1/t∑_τ = 0^t-1_τ,(_τ) ≤_0^2/2μ t-_t+1^2/2μ t+ μ G^2/2 -1/t∑_τ = 0^t-1_τ(_τ)-(_τ),≤_0/2n + ϵ G^2/2 + C'_t(n,ϵ) where, C'_t(n,ϵ) := 1/ t∑_τ = 0^t-1_τ(_τ)-(_τ),_τ. The bound in (<ref>) follows by plugging back (<ref>) into (<ref>). The analysis for C'_t(n,ϵ) is much the same as in the proof of Theorem <ref>. The only difference for this case is that the iterate bound becomes _t≤√(K)λ_max from (<ref>). Consequently, after rearranging various terms in C'_t(n,ϵ) and using the triangle inequality in (<ref>), (<ref>), and (<ref>), all occurrences of Λ_max get replaced with √(K)λ_max. Since this is equivalent to redefining the constant Λ_max appropriately, the required rate results continue to hold. 
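The ϵ-versus-n trade-off described by Theorems 1 and 2 can also be observed numerically. The toy experiment below (our own construction, not taken from the paper) runs the constant step-size projected SSD iteration on a noisy scalar objective for several step-sizes, keeping the number of epochs n fixed so that T = n/ϵ, and reports the average optimality gap; smaller ϵ shrinks the eventual O(ϵ) floor but requires proportionally more iterations to forget the initial condition.

import numpy as np

def avg_gap(eps, T, rng):
    # Average gap of g(lam) = |lam - 3| over T projected SSD steps
    # with step-size eps and noisy subgradients, Lambda = [0, 10].
    lam, total = 9.0, 0.0
    for _ in range(T):
        total += abs(lam - 3.0)
        g_hat = np.sign(lam - 3.0) + rng.normal(scale=0.5)
        lam = min(max(lam - eps * g_hat, 0.0), 10.0)
    return total / T

rng = np.random.default_rng(1)
n = 20                                  # number of epochs of length 1/eps
for eps in (0.2, 0.05, 0.01):
    T = int(n / eps)
    print(f"eps={eps:5.2f}  T={T:5d}  avg gap={avg_gap(eps, T, rng):.3f}")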
§ APPLICATION TO D2D COMMUNICATIONS This section details some implementation aspects of the SDSD algorithm in the context of D2D communication problem considered in this paper under slow and fast fading scenarios. The Assumptions (A1)-(A4) are also verified for the problems at hand so as to ensure that Theorems 1 and 2 hold. Before proceeding, the SDSD algorithm for the general form of the D2D problem (<ref>) is detailed. Specifically, the Lagrangian is given by L(r,{p^i}_i∈ℳ,λ)=U(r)- ∑_i∈ℳ_tc_t^i p^i_t+λ∑_i∈ℳ_tR_i(p^i_t,γ^i_t)-r which yields the following stochastic algorithm. Since the Lagrangian is separable in r and p^i_t, starting with arbitrary λ_0, the primal iterates at time slot t become: r_t(λ_t)∈_r_min≤ r≤ r_maxU(r)-λ_t r {p_t^i(λ_t)}_i∈ℳ_t ∈_{p^i}_i∈ℳ_t∈Π_t∑_i∈ℳ_t[λ_t R_i(p^i,γ^i_t)-c_t^ip^i]. At the end of each time slot, the dual variable is updated as λ_t+1=λ_t-ϵ[∑_i∈ℳ_t R_i(p^i_t(λ_t),γ^i_t)-r_t(λ_t)] Recall that the set of functions 𝒫 is such that only one user, denoted by i_t:=max_i∈ℳ_t p^i_t(λ_t),is allocated non-zero power at time slot t. Therefore, the dual variable is updated as λ_t+1 =λ_t-ϵ[R_i_t(p^i_t_t(λ_t),γ^i_t_t)-r_t(λ_t)] The full algorithm is summarized in Algorithm <ref>. The rate analysis developed in Sec. <ref> applies to the present problem under the following assumptions. B1.Continuous random variables:The random variables _t = (ℳ_t, {c^i_t}_i∈ℳ_t, {γ^i_t}_i∈ℳ_t) are i.i.d., have continuous cdfs, and finite supports, i.e., ℳ_t⊂ℳ, γ_t^i ∈ [γ_min, γ_max], and c^i_t ∈ [c_min , c_max] for each i∈ℳ_t. B2.Power constraints: The set 𝒫 := {:^3M→^M |_∈Π_}, where for any _t, we have that Π__t := Π_t={_t∈^M| p_t^j = 0  j ∉ℳ_t, _t_0 = 1, p^i_t_tc_t^i_t∈ [C_min, C_max] }, where i_t := max_i∈ℳ_t p^i_t and p^i_t = [_t]_i. B3.High SNR:It is assumed that γ^i_t ≫ 1 for all i∈ℳ_t. The finite support of the random quantities is again motivated from practical considerations. The set 𝒫 also includes limits on the maximum affordable cost C_max and the minimum operational cost or minimum allowable transaction amount C_min. A maximum power constraint of the form p^i_t ≤ P_max may also be included within 𝒫. However, for the present application, it is assumed that the caches are not energy constrained, so that P_max≫ C_max/c^i_t for all i∈ℳ_t. In other words, the user's cost constraint is much more stringent than the cache's energy constraint. Finally, the high SNR assumption is justified if there are always enough mobile caches available at all slots. In a typical setting, the MoUE may “see” hundreds of advertisements from potential mobile cache servers, but may choose to consider only tens of users with which control messages may be exchanged easily. Next, the discussion for slow and fast fading cases will be carried out. §.§ Slow Fading Recall that under slow fading, since power allocation occurs every coherence interval, we have for high SNR (cf. (B3)), that R_i(p_t^i(λ_t),γ_t^i) :≈ Wlog_2 (p_t^i(λ_t)γ_t^i/α). The primal iterate in (<ref>) can be found in two steps. First the optimum transmit power for all potential users is determined, i.e., for each i∈ℳ_t, p̂_t^i(λ_t)=_p^i λ_tWlog_2 (p^iγ_t^i/α)-c_t^ip^i s. t.     C_min≤ c_t^ip^i≤ C_max= [Wλ_t/c_t^i]_C_min/c_t^i^C_max/c_t^i The winning user is the one that maximizes the objective function, i.e., i_t=_i∈ℳ_t[λ_t R_i(p̂^i_t,γ^i_t)-c_t^ip̂^i_t]= _i∈ℳ_tγ^i_t/c^i_t where the expression in (<ref>) derived in Appendix <ref>. An implication of (<ref>) is that the random variable i_t is i.i.d. 
Finally, it holds that p^j_t = p̂^i_t_t(λ_t) for j = i_t and zero otherwise. Similarly,r_t is calculated as r_t(λ_t) =[1/λ_t]_r_min^r_max resulting in the dual update λ_t+1=λ_t-ϵ[Wlog_2(p^i_t_t(λ_t)γ^i_t_t/α)-r_t(λ_t)]. An additional assumption regarding the parameter values is made in the slow fading case: B4.   Strict feasibility:The problem parameters satisfy r_min < max_i log_2(C_maxγ_t^i/c^i_t). The strict feasibility condition is required for ensuring the existence of a Slater point. Since it holds that γ_t^i/c^i_t ≥γ_min/c_max, it is possible to satisfy (B4) by keeping r_min sufficiently small and/or if γ_min is sufficiently large. Having stated the algorithm and all required assumptions, the following Lemma summarizes the main result of this subsection. Under (B1)-(B4), the iterates obtained from (<ref>)-(<ref>) adhere to the rate bounds stated in Theorems 1 and 2. For the results in Theorem 1 and 2 to apply, it suffices to verify that assumptions (A1)-(A4) are satisfied under the slow fading case. The random variable _t has a non-atomic pdf since the channel gains γ^i_t have a continuous cdf, thus confirming (A1); see also <cit.>. The Slater's condition is met by choosing r̃ = r_min and p̃^i_t_t = C_max/c_t^i_t where i_t is given in (<ref>) and zero for all j ≠ i_t. For such a choice, it holds from (B4) that r̃ < log_2(p̃^i_t_tγ^i_t_t), which is the required condition for strict feasibility. For a given λ, the subgradient function is given by f_t(λ)= Wlog_2(p^i_t_t(λ)γ^i_t_t/α)-r_t(λ) where i_t and p^i_t_t are evaluated as in (<ref>) and (<ref>). A bound on the subgradient (cf. (A3)) may therefore be found as f_t(λ)≤ Wlog_2(C_maxγ_max/α c_min) + r_max =: G. Next, in order to verify (A4), the expression for the stochastic subgradient error e_t(λ):=f_t(λ)-f_t(λ) is first derived. Recalling that i_t = _i γ^i_t/c^i_t, consider the following three cases, * When λ < C_min/W, it holds that p^i_t_t = C_min/c^i_t_t, implying that e_t(λ)= W log_2(C_minγ^i_t_t/α c^i_t_t) - [1/λ]_r_min^r_max - W log_2(C_minγ^i_t_t/α c^i_t_t) - [1/λ]_r_min^r_max= W log_2(γ^i_t_t/c^i_t_t) - W log_2(γ^i_t_t/c^i_t_t).where the expectations are with respect to _t. * When C_min/W ≤λ≤ C_max/W, it holds that p^i_t_t = Wλγ^i_t_t/c^i_t_t, implying that e_t(λ)= W log_2(Wλγ^i_t_t/c^i_t_t) - [1/λ]_r_min^r_max- W log_2(Wλγ^i_t_t/c^i_t_t) - [1/λ]_r_min^r_max= W log_2(γ^i_t_t/c^i_t_t) - W log_2(γ^i_t_t/c^i_t_t). * Similarly, when λ > C_max/W, it holds that p^i_t_t = C_max/c^i_t_t, implying that e_t(λ)= W log_2(γ^i_t_t/c^i_t_t) - W log_2(γ^i_t_t/c^i_t_t). Therefore, the subgradient error is a zero-mean random variable that does not depend on λ, and is therefore trivially continuously differentiable in λ. §.§ Fast Fading In the more realistic fast fading case, the power allocation and downloads occur over several coherence intervals.Under the high SNR assumption, the rate becomes , R_i(p^i_t,γ^i_t) ≈ Wlog_2(p_t^iγ_t^i/α)+Wψ_i where ψ_i = log_2(h_i) for a given user i <cit.>. As in the slow fading case, the primal iterates are again found in two steps. First, the power allocation for a potential user i is found, p̂_t^i(λ_t)= [Wλ_t/c_t^i]_C_min/c_t^i^C_max/c_t^i. It is shown in Appendix <ref> that the winning user for the fast fading case can be written as i_t= _i∈ℳ_tlog_2(γ^i_t/c^i_t) + ψ_i. Finally, since r_t(λ_t) = max{min{1/λ_t, r_max},r_min} as before, the dual update is given by λ_t+1=λ_t-ϵ[Wlog_2(p^i_t_t(λ_t)γ^i_t_t/α) + Wψ_i_t-r_t(λ_t)]. 
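One time slot of these updates can be sketched as follows; the variable names mirror the text, the fast_fading flag switches between the two winner-selection rules, and the guard on λ = 0 is an implementation detail that the paper handles implicitly through the dual projection.

import numpy as np

def d2d_slot(lam, gains, costs, psi, W, alpha, C_min, C_max,
             r_min, r_max, eps, fast_fading=True):
    # gains, costs and psi are arrays over the currently advertising UEs.
    # Rate update: r_t = [1/lam] clipped to [r_min, r_max].
    r = r_max if lam <= 0 else float(np.clip(1.0 / lam, r_min, r_max))
    # Candidate powers: p_hat^i = [W*lam/c_i] clipped to [C_min/c_i, C_max/c_i].
    p_hat = np.clip(W * lam / costs, C_min / costs, C_max / costs)
    # Winner: argmax gamma/c (slow fading) or argmax log2(gamma/c)+psi (fast).
    score = np.log2(gains / costs) + psi if fast_fading else gains / costs
    i = int(np.argmax(score))
    served = W * np.log2(p_hat[i] * gains[i] / alpha) \
             + (W * psi[i] if fast_fading else 0.0)
    # Dual update: lam <- [lam - eps*(served rate - requested rate)]_+.
    lam_next = max(lam - eps * (served - r), 0.0)
    return lam_next, r, i, p_hat[i]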
In order to apply the rate bounds in Theorems 1 and 2, we again assume (B1)-(B3), and make the following assumption analogous to (B4). B5. Strict feasibility: The problem parameters satisfy r_min < max_i {log_2(C_max γ_t^i/c^i_t) + ψ_i}. As in the slow fading case, (B5) allows us to obtain a Slater point, as required by (A2). The following Lemma summarizes the result for the fast fading case. Under (B1)-(B3) and (B5), the iterates obtained from (<ref>)-(<ref>) adhere to the rate bounds stated in Theorems 1 and 2. As in Lemma <ref>, it suffices to verify assumptions (A1)-(A4). The random variable _t has a non-atomic pdf as remarked earlier. Similarly, it can be verified that a Slater point is given by r̃ = r_min and p̃^i_t_t = C_max/c_t^i_t, where i_t is as given in (<ref>), and zero for all j ≠ i_t. The subgradient bound required in (A3) now becomes |f_t(λ)| ≤ W log_2(C_max γ_max/(α c_min)) + Wψ_max + r_max =: G where ψ_max := max_i ψ_i. Finally, in order to verify (A4), we proceed as in the proof of Lemma <ref> and derive an expression for the subgradient error e_t(λ) := f_t(λ) - 𝔼f_t(λ). Since the expression for the allocated power is the same for the two cases, it can be seen that for the fast fading case as well, e_t(λ) = W log_2(γ^i_t_t/c^i_t_t) - W 𝔼[log_2(γ^i_t_t/c^i_t_t)], where i_t is found as in (<ref>). Since e_t(λ) does not depend on λ, (A4) also holds trivially in the fast fading scenario. § NUMERICAL TESTS This section describes the numerical tests on the D2D example discussed in Sec. <ref>. The convergence rate of the SDSD algorithm is studied for the fast fading scenario depicted in Fig. 2. For the simulations, we consider M=25 operational UEs. At each time slot, the MoUE receives advertisements from a random subset ℳ_t of 5 to 25 UEs. Without loss of generality, downloading from the i-th UE incurs a cost of c^i_t = i per unit of transmit power. The lower and upper limits for each transaction are set as C_min = 1 and C_max = 25, respectively. The average channel gains γ^i_t are assumed to be Rayleigh distributed with γ_min = 0.1 and γ_max = 65, and for simplicity, the parameters α and ψ_i are all set to unity. In order to keep the numbers realistic, we set W = 1 MHz. Finally, in order to ensure Slater's condition, we set r_min = 0.2 and r_max = 10. In realistic scenarios, since the optimal rate is expected to be greater than r_min, it follows from the definition of r_t(λ_t) in Sec. V-A that λ^⋆ < 1/r_min. Therefore it is safe to take λ_max ≫ 1/r_min. Fig. <ref> shows the evolution of the utility function, calculated using the running averages r̅_t := 1/t∑_τ = 0^t-1 r_τ of the allocated rate, over the iterations. As expected from Theorem <ref>, the utility function converges to a value that is closer to the optimal when ϵ is small. Similarly, Fig. <ref> shows the evolution of the dual objective function, which again converges to a point closer to the optimal when ϵ is small. Observe from the results that for ϵ = 0.1, the oscillations continue even as the number of iterations goes to infinity, as implied by Theorem <ref>. These oscillations are allowed due to the presence of an O(ϵ) term on the right-hand side of (17), and are well-documented for constant step-size stochastic subgradient-type algorithms <cit.>. The convergence rate result of Theorem <ref> is further illustrated in Fig. <ref>(a) and Fig. <ref>(b), where the deterministic terms in (<ref>) are not included. The stochastic term C_t(n,ϵ) is calculated from (<ref>) and plotted against both ϵ and n.
It can be seen from both plots that C_t(n,ϵ) → 0 as either n →∞ or ϵ→ 0, as claimed in Theorem <ref>. Having studied the convergence properties of the SDSD algorithm, we now focus on some of the nuances of the edge-caching formulation in (<ref>). To begin with, the performance of the proposed scheme is compared against that obtained from two naive algorithms: random and opportunistic. The maximum transmit power for the three cases is scaled so as to ensure equal aggregate power consumption. As the name suggests, an MoUE following the random scheme selects an available UE randomly, without paying any attention to the channel or the cost of the UE. The data is transmitted at the maximum power so as to ensure the maximum rate. As evident from Table <ref>, such a scheme is able to obtain a higher download rate, but at a significantly higher cost. In contrast, the opportunistic scheme advocates a parsimonious approach wherein the MoUE always selects an available UE with the lowest cost. Subsequently, the UE transmits with the maximum power but ultimately achieves a lower aggregate download rate, due to suboptimal channel conditions. Fig. <ref> provides results from the perspective of the UEs and is generated by running the same algorithm for 1000 independent, identically distributed MoUEs. In particular, if an MoUE follows the optimal policy determined by (<ref>), the UEs may be interested in knowing a reasonable price to charge. As expected, it is clear from Fig. <ref> that the UEs that charge more are selected less often and have lower data usage. Consequently, the aggregate revenue of the UEs with the lowest charges is also the highest. More interestingly however, such high-priced UEs have a very high revenue per Mb of data served. The intuition here is that UEs with high costs are only selected when their channel gains are proportionally higher than the others. Therefore, all transmissions to such UEs occur at higher rates and correspondingly lower power. In summary, by operating only under favorable channel conditions, the high-priced UEs extract a greater revenue for every bit that they serve. Note however that the revenue appears to saturate, increasing very slowly for very high prices. § CONCLUSIONS This paper considers a general stochastic resource allocation problem and solves it using a constant step-size stochastic subgradient descent algorithm in an online manner. A stochastic bound on the gap between the objective function and the optimum is developed and analyzed in an almost sure sense, generalizing the existing results. The bounds characterize the precise manner in which the optimality gap behaves for fixed and arbitrarily small step-sizes. The convergence rate analysis is also extended to a class of stochastic resource allocation problems that utilize stochastic dual subgradient descent (SDSD) iterations. Existing results on near-optimality of the primal average objective function are again generalized for convergence rate analysis. As an example, a resource allocation problem is formulated in the context of mobile caching in device-to-device communications, and solved via SDSD. The regularity conditions required for the rate analysis are verified, and numerical tests are provided, further substantiating the convergence rate results. § A BOUND ON ‖λ^⋆‖_∞ From (A2), there exist some ∈𝒳 and ∈ such that (, _t) > 0, where we recall that _t := __t for all t∈_0 and the expectation is with respect to _t.
Given ∈^K_+, define the sublevel set _:={∈^K_+| g()≤ g()}, and observe that for any ∈_, it holds that g()≥ g()=max_∈, ∈ f_0() + ,(,_t)≥ f_0() + ,(,_t). Rearranging the expression in (<ref>), we obtain ∑_k=1^K[]_k[(,_t)]_k ≤ g()-f_0() ⇒_∞≤∑_k=1^K[]_k≤g()-f_0()/χ(,) where χ(,):=min_1≤ k≤ K[(,_t)]_k. Observe that _={∈^K_+| g() ≤𝖣}, so that it follows from (<ref>) that _∞≤𝖣-f_0()/χ(,) Finally, since g() ≥𝖣 for all ∈^K_+, the bound in (<ref>) can be relaxed to yield (<ref>). § ASYMPTOTIC PROPERTIES OF C_TIn order to study the convergence rate of C_T, the time is divided into epochs of duration 1/ϵ, so that there exists some n≥0 that satisfies n/ϵ≤ T < (n+1)/ϵ. Since n:= ⌊ϵ t ⌋, where ⌊·⌋ denotes the floor operation, is an arbitrary number, such a split allows the value of t to be increased by keeping either n or ϵ fixed, and varying the other. It is therefore possible to separately study the effects of choosing larger n or smaller ϵ values. For instance, if ϵ = 0.1, the time is divided into epochs of duration 10 iterations each. Hence, the zeroth epoch consists of iterations 0 ≤ t ≤ 9,the first epoch consists of iterations 10 ≤ t < 19, and so on. With such a split, the classical asymptotic analysis for t →∞ is equivalent to fixing ϵ and letting n →∞. Additionally, the proposed split allows us to study the case when n is fixed, but the algorithm is run with different values of ϵ. This proof is devoted to the analysis ofC_t(n,ϵ), and relies on rearranging the terms in (<ref>) so that the results in developed in Lemma <ref> can be applied. The proof is divided into two parts, one for each mode of convergence in (<ref>). Fixed n < ∞ and μ→ 0: For this case, C_t(n,ϵ) is split into summands corresponding to each epoch till time t, that is, C_t(n,ϵ) = 1/μ t∑_m=0^n C^m(μ) where, C^m(μ) :=μ∑_τ = ℓ_m^u_m_τ(_τ)-(_τ),_τ-. The limits in the summation are defined as ℓ_m := m/μ and u_m := (m+1)/μ-1 for m < n while u_n:=t-1. Next, define for all ∈Λ and ℓ_m≤τ≤ u_m z_τ() = μ∑_ι = ℓ_m^τ_ι() - (),-. Substituting (<ref>) in (<ref>), we obtain C^m(μ) = z_u_m(_u_m+1) - ∑_τ = ℓ_m^u_m(z_τ(_τ+1) - z_τ(_τ)). Such a split allows us to use (<ref>) in order to bound the magnitude of each term separately. Specifically, letting _m := {ℓ_m ,ℓ_m+1, …, u_m}, z_u_m(_u_m+1) = μ∑_ι = ℓ_m^u_m_ι(_u_m+1) - (_u_m+1),_u_m+1-≤μ∑_ι = ℓ_m^u_m(_ι(_u_m+1) - (_u_m+1))_u_m+1-≤ L^1_1/μ(_m)Λ_max. Similarly, denoting '_τ := {ℓ_m,ℓ_m+1, …, τ} for all ℓ_m ≤τ≤ u_m, it holds from using triangle inequality and (<ref>), that z_τ(_τ+1)-z_τ(_τ)=ϵ|∑_ι= ℓ_m^τ_ι(_τ + 1) - (_τ+1),_τ+1--_ι(_τ) - (_τ),_τ-| ≤ϵ|∑_ι= ℓ_m^τ⟨_ι(_τ + 1) -(_τ+1) -_ι(_τ) +(_τ), (_τ+1-)⟩|+ μ∑_ι = ℓ_m^τ_ι(_τ) - (_τ),_τ+1 - _τ≤ϵ∑_ι= ℓ_m^τ_ι(_τ + 1) -(_τ+1) -_ι(_τ)+(_τ)_τ+1-+ϵ∑_ι = ℓ_m^τ(_ι(_τ) - (_τ))_τ+1 - _τ≤(L^2_1/ϵ('_τ)Λ_max + L^1_1/ϵ('_τ))_τ+1-_τ≤ϵ(L^2_1/ϵ('_τ)Λ_max + L^1_1/ϵ('_τ))G where the (<ref>) uses the non-expansive property of the projection operator · and the boundedness of the stochastic subgradients (cf. (A3^')). Substituting (<ref>) and (<ref>) into the expression for C^m(ϵ) yields the following bound C^m(ϵ) ≤ L^1_1/μ(_m)Λ_max + ϵ G∑_τ = ℓ_m^u_m L^2_1/ϵ('_τ)Λ_max + L^1_1/ϵ('_τ) ≤ L^1_1/μ(_m)Λ_max+ Gsup_ℓ_m ≤τ≤ u_m (L^2_1/ϵ('_τ)Λ_max + L^1_1/ϵ('_τ)) . 
Finally, the bound for C_t(n,ϵ) becomes C_t(n,ϵ)≤1/n∑_m=0^nC^m(ϵ)≤sup_0≤m ≤ nC^m(ϵ)≤sup_0≤ m ≤ n L^1_1/ϵ(_m)Λ_max+ Gsup_0≤τ < t (L^2_1/ϵ('_τ)Λ_max + L^1_1/ϵ('_τ)) Therefore, the rate result from Lemma <ref> implies that ϵ^ζ-1/2C_t(n,ϵ)≤Λ_maxsup_0≤ m ≤ nϵ^ζ-1/2L^1_1/ϵ(_m)+ Gsup_0≤τ < t (ϵ^ζ-1/2L^2_1/ϵ('_τ)Λ_max + ϵ^ζ-1/2L^1_1/ϵ('_τ)) which goes to zero almost surely as ϵ→ 0, yielding the bound in (<ref>). Likewise, let A_1m < ∞ be the constant associated with the bounds for Λ_maxL_1/ϵ^1(_m), as necessitated by Lemma <ref>. Then, using the union bound, it follows that ℙ(sup_0≤ m ≤ n L_1/ϵ^1(_m) > ϵ^1/2-ζ)≤∑_m=0^n A_1mexp(-ϵ^-2ζ) ≤ A_1exp(-ϵ^-2ζ) where A_1 := ∑_m A_1m. Along the same lines, the result in Lemma <ref> and the subsequent use of the union bound imply that there exist a constant A_2 < ∞ such that the probability of the second term in (<ref>) exceeds ϵ^1/2-ζ is bounded by A_1exp(-ϵ^-2ζ). Combining the two bounds, and again using union bound, we have that ℙ(C_t(n,ϵ) > ϵ^1/2-ζ) ≤ A_3exp(-ϵ^-2ζ) where A_3 = A_1+A_2. Fixed ϵ > 0 and n →∞: In this case, C_t(n,ϵ) must now be split into two terms as follows, C_t(n,ϵ)≤ϵC(n) + D(n)/n where, C(n):=∑_τ = ℓ_m^n/μ-1_τ(_τ)-(_τ),_τ-= ∑_τ = 0^1/μ-11/n∑_m=0^n-1_m/μ + τ(_m/μ + τ)-(_m/μ+τ),_m/μ+τ- D(n):= μ∑_τ = n/μ^t-1_τ(_τ)-(_τ),_τ-. For this analysis, it is assumed without loss of generality that 1/ϵ is an integer. That way, the subscripts m/ϵ+ τ are also integers and the floor operation is not required. Given ϵ, note that D(n) is a sum of a fixed number of bounded random variables, so that D(n)/n → 0 surely as n →∞. In order to bound C(n), define for all ∈Λ and 0≤τ≤ 1/μ-1, z_τ() = ∑_ι = 0^τ1/n∑_m=0^n-1_m/ϵ+τ() - (),-. Then, it follows that C(n) = z_n/μ-1(_n/μ) - ∑_τ = 0^1/μ-1(z_τ(_τ+1) - z_τ(_τ)). It is now possible to bound each term in (<ref>) separately. Defining ^τ:={m/μ + τ}_m=0^n-1, and using (<ref>), it follows that z_n/μ-1(_n/μ) = ∑_τ = 0^1/μ-11/n∑_m=0^n-1_m/μ+τ(_n/μ) - (_n/μ),_n/μ-≤∑_τ = 0^1/μ-11/n∑_m=0^n-1(_m/μ+τ(_n/μ) - (_n/μ))_n/μ-≤Λ_max∑_τ = 0^1/μ-1L^1_n(^τ). Proceeding similarly, z_τ(_τ+1)-z_τ(_τ) =∑_ι = 0^τ|1/n∑_m= 0^n-1_m/μ+ι(_τ + 1) - (_τ+1),_τ+1- -_m/μ+ι(_τ) - (_τ),_τ-|≤∑_ι = 0^τ||1/n∑_m= 0^n-1(_m/μ+ι(_τ + 1) - (_τ+1)- _m/μ+ι(_τ) + (_τ))||_τ+1-+∑_ι = 0^τ1/n∑_m = 0^n-1(_m/μ+ι(_τ) - (_τ))_τ+1 - _τ≤∑_ι = 0^τ(L^2_n(^ι)Λ_max + L^1_n(^ι))_τ+1-_τ≤ϵ G ∑_ι = 0^τ(L^2_n(^ι)Λ_max + L^1_n(^ι)). Finally,substituting (<ref>) and (<ref>) into the expression for C(n), and noting that 1/ϵ is a fixed number, the following bound is obtained ϵC(n) ≤Λ_maxϵ∑_τ = 0^1/μ-1L^1_n(^τ) + ϵ^2∑_τ = 0^1/ϵ-1 G ∑_ι = 0^τ(L^2_n(^ι)Λ_max + L^1_n(^ι)) ≤Λ_maxsup_0≤τ < 1/ϵ L^2_n(^τ)+ Gsup_0≤τ < 1/ϵsup_0≤ι≤τ(L^2_n(^ι)Λ_max+ L^1_n(^ι)) which goes to zero almost surely as n →∞, implying that C_t(n,ϵ) → 0 almost surely as n →∞. Both the rate results can again be inferred as in the previous case. Indeed, similar to (<ref>), given ζ>0, there exist A_4<∞ such that ℙ(C_t(n,ϵ) > n^ζ-1/2) ≤ A_4exp(-n^2ζ). Combining with (<ref>), the probability bounds can be written as ℙ(C_t(n,ϵ) > ϵ^1/2-ζ) ≤ Aexp(-ϵ^-2ζ)andℙ(C_t(n,ϵ) > n^ζ-1/2) ≤ Aexp(-n^2ζ) where A = max{A_3,A_4}. The required result follows by choosing ν = max{1/ϵ,n} in (<ref>). § DERIVATION OF (<REF>) AND (<REF>) Consider first the slow fading case, where the winning user is given by i_t =_i∈ℳ_tλ_t (Wlog_2(p̂^i_tγ^i_t/α))-c_t^ip̂^i_t where p̂^i_t is given by (<ref>). 
Thus, the objective function in (<ref>) for a given λ can be written as T^i_t(λ) = λW log_2(C_min γ^i_t/(α c^i_t)) - C_min if λ ≤ C_min/W; λW log_2(C_max γ^i_t/(α c^i_t)) - C_max if λ ≥ C_max/W; and λW log_2((λW/α)(γ^i_t/c^i_t)) - λW otherwise. Since log_2 is a monotone increasing function, observe in (<ref>) that in all three cases, T^i_t(λ) depends monotonically on γ^i_t/c^i_t for all λ > 0. This allows us to conclude that i_t = argmax_i∈ℳ_t T^i_t(λ) = argmax_i∈ℳ_t γ^i_t/c^i_t, which is the required identity in (<ref>). Similarly, for the fast fading case, the objective function for the winning user in (<ref>) is given by T^i_t(λ) = λW log_2(C_min γ^i_t/(α c^i_t)) + λWψ_i - C_min if λ ≤ C_min/W; λW log_2(C_max γ^i_t/(α c^i_t)) + λWψ_i - C_max if λ ≥ C_max/W; and λW log_2((λW/α)(γ^i_t/c^i_t)) + λWψ_i - λW otherwise, which, for λ > 0, again depends monotonically on log_2(γ^i_t/c^i_t) + ψ_i. The expression in (<ref>) therefore follows.
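As a companion to the numerical tests of Sec. VI, the following self-contained sketch simulates the fast-fading algorithm under the stated parameters (M = 25 UEs, c^i = i, C_min = 1, C_max = 25, W = 1, α = ψ_i = 1, r ∈ [0.2, 10]). The log-utility giving r_t(λ) = [1/λ], the Rayleigh scale, and the truncation of the gains to [γ_min, γ_max] are our own reading of the setup, so the sketch should be taken as illustrative only.

import numpy as np

def simulate_fast_fading(T=20000, eps=0.01, seed=0):
    M, W, alpha = 25, 1.0, 1.0            # W in MHz, as in Sec. VI
    C_min, C_max = 1.0, 25.0
    r_min, r_max = 0.2, 10.0
    costs_all = np.arange(1, M + 1, dtype=float)   # c^i = i
    rng = np.random.default_rng(seed)
    lam, rates = 1.0, []
    for _ in range(T):
        k = int(rng.integers(5, M + 1))            # 5 to 25 advertising UEs
        idx = rng.choice(M, size=k, replace=False)
        gains = np.clip(rng.rayleigh(scale=20.0, size=k), 0.1, 65.0)
        costs = costs_all[idx]
        r = r_max if lam <= 0 else float(np.clip(1.0 / lam, r_min, r_max))
        p_hat = np.clip(W * lam / costs, C_min / costs, C_max / costs)
        i = int(np.argmax(np.log2(gains / costs) + 1.0))   # psi_i = 1
        served = W * np.log2(p_hat[i] * gains[i] / alpha) + W * 1.0
        lam = max(lam - eps * (served - r), 0.0)
        rates.append(r)
    return np.mean(rates)                          # running-average rate r_bar

print(simulate_fast_fading())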
http://arxiv.org/abs/1702.08054v3
{ "authors": [ "Amrit Singh Bedi", "Ketan Rajawat" ], "categories": [ "math.OC" ], "primary_category": "math.OC", "published": "20170226171223", "title": "Network Resource Allocation via Stochastic Subgradient Descent: Convergence Rate" }
School of Mathematics, University of Bristol, University Walk, Bristol BS8 1TW The Robot Crawler Model on Complete k-Partite and Erdős-Rényi Random Graphs A. Davidson and A. Ganesh =========================================================================== Web crawlers are used by internet search engines to gather information about the web graph. In this paper we investigate a simple process which models such software by walking around the vertices of a graph. Once initial random vertex weights have been assigned, the robot crawler traverses the graph deterministically following a greedy algorithm, always visiting the neighbour of least weight and then updating this weight to be the highest overall. We consider the maximum, minimum and average number of steps taken by the crawler to visit every vertex of, firstly, complete k-partite graphs and, secondly, sparse Erdős-Rényi random graphs. Our work follows on from a paper of Bonato et al., who introduced the model. § INTRODUCTION Using an analogy introduced by Messinger and Nowakowski <cit.>, the robot crawler model can heuristically be viewed as a robot cleaning the nodes of a graph according to a greedy algorithm. Upon arriving at a given vertex the robot “cleans” the vertex, and then moves to its “dirtiest” neighbour to continue the process. Crawlers are of practical use in gathering information used by internet search engines (<cit.>, <cit.>, <cit.>). This particular version of the model was introduced by Bonato et al. <cit.>, and we direct the reader to their paper for further insight into the problem's motivation and previous work. There they considered the robot crawler performed on trees, complete k-partite graphs (with equal sized vertex classes), Erdős-Rényi random graphs and the preferential attachment model. The purpose of this paper is to offer an answer to open problems 1 and 2 posed there, which relate to generalising their work concerning complete k-partite graphs and Erdős-Rényi random graphs. The model introduced by Messinger and Nowakowski <cit.> is analogous to the robot crawler model, but the robot cleans edges (which are weighted) rather than vertices. Models similar to those studied by Messinger and Nowakowski <cit.> were investigated by Berenbrink, Cooper and Friedetzky <cit.> and Orenshtein and Shinkar <cit.>, who considered a class of random walks on graphs which prefer unused edges, although in their models the walker chooses independently among adjacent edges when they have all previously been traversed. Given a finite connected undirected simple graph G = G(V,E), we fix from the outset an initial weighting: a bijective function w_0 : V → {-n, -n+1, ..., -1} indicating the initial ranking of how dirty the vertices are. Here and henceforth, “dirtiest”/“cleanest” refers to the vertex with the lowest/highest weight in a given set. At time 1 the robot visits the “dirtiest” node in V, i.e. w_0^-1(-n). At time t ∈ℕ the robot updates the weight of the vertex visited to t. So if the robot visits vertex v at time t, then w_t(v) = t and w_t(v') = w_t-1(v') ∀ v' ∈ V, v' ≠ v, t ∈ℕ. If all vertices then have positive weight, i.e. min_y ∈ V(w_t(y)) > 0, the algorithm terminates and we output ℛ𝒞(G,w_0) = t, the number of steps taken to clean all vertices. Otherwise, at time t+1 the robot moves to the vertex argmin{ w_t(u): (u,v) ∈ E }, i.e. the dirtiest neighbour of v at time t, and the process continues.
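The crawling procedure just described translates directly into code. The short Python sketch below is our own implementation of the stated rules: it returns ℛ𝒞(G, w_0) for a graph given as an adjacency-list dictionary, and the example at the end estimates the average number of steps on a small complete k-partite graph by sampling uniform initial weightings.

import random

def robot_crawler(adj, w0):
    # Return RC(G, w0): the number of steps until every vertex is cleaned.
    # adj maps each vertex to a list of neighbours; w0 maps the n vertices
    # bijectively to the initial weights -n, ..., -1.
    w = dict(w0)
    v = min(w, key=w.get)            # start at the dirtiest vertex
    t = 1
    w[v] = t
    while min(w.values()) < 0:
        v = min(adj[v], key=w.get)   # move to the dirtiest neighbour
        t += 1
        w[v] = t
    return t

def complete_k_partite(sizes):
    # Build the adjacency lists of the complete multipartite graph.
    parts, label = [], 0
    for s in sizes:
        parts.append(list(range(label, label + s)))
        label += s
    vertices = list(range(label))
    return {v: [u for u in vertices if u not in block]
            for block in map(set, parts) for v in block}

adj = complete_k_partite([4, 3, 3])       # n = 10, c_1 = 0.4
n = len(adj)
samples = []
for _ in range(1000):
    order = random.sample(range(n), n)    # uniform random initial weighting
    w0 = {v: -(i + 1) for i, v in enumerate(order)}
    samples.append(robot_crawler(adj, w0))
print(sum(samples) / len(samples))        # close to n when c_1 < 1/2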
As proved in <cit.>, this algorithm will always terminate after a finite number of steps. Using Ω_n to denote the set of n! initial weightings, we define rc(G) = min_w_0 ∈Ω_n(ℛ𝒞(G,w_0)) and RC(G) = max_w_0 ∈Ω_n(ℛ𝒞(G,w_0)), the minimum and maximum number of steps needed to clean all vertices of G. Now, supposing w_0 is a uniformly chosen element of Ω_n, we define the average number of steps needed to clean all vertices of G: r̄c̄(G) = 𝔼(ℛ𝒞(G,w_0)). § COMPLETE K-PARTITE GRAPHS §.§ Results Given some constants c_1 ≥ c_2 ≥ ⋯ ≥ c_k with ∑_i=1^k c_i = 1 and k ≥ 3, consider the robot crawler model performed on the complete k-partite graph G_n induced by vertex sets V_1, V_2, ..., V_k, where |V_i| = c_i n ∀ 1 ≤ i ≤ k. Theorem 1. (i) For c_1 ≤ 1/2, rc(G_n) = n. (ii) For c_1 > 1/2, rc(G_n) = 2nc_1 - 1. Theorem 2. (i) For c_2 ≤ (1/2)(1-c_1), RC(G_n) = n + c_1 n - 1. (ii) For c_2 > (1/2)(1-c_1), RC(G_n) = 2(n - c_2 n). Theorem 3. (i) For c_1 < 1/2, r̄c̄(G_n) = n + O(1). (ii) For c_1 = 1/2, r̄c̄(G_n) = n + O(n^1/2). (iii) For c_1 > 1/2, r̄c̄(G_n) = 2nc_1 + O(1). In particular we note that for c_1 ≠ 1/2, r̄c̄(G_n) = rc(G_n) + O(1), which refines Theorem 6 in <cit.> if we take G_n = K_n/k^k, the complete k-partite graph induced by k vertex sets each of size n/k. §.§ Proofs We begin with the more straightforward proofs of Theorems 1 and 2. It is straightforward to construct a Hamiltonian path to verify part (i) of Theorem 1. For part (ii), we note that once the crawler is in set V_1 (which takes at least 1 step) it must return at least c_1 n - 1 times. Whenever the crawler is in set V_1, it will take at least 2 steps of the algorithm before the crawler returns, since there are of course no edges between vertices in V_1. Hence rc(G_n) ≥ 1 + 2(nc_1 - 1). Noting that |V_1| > |V ∖ V_1|, the bound can be achieved if the crawler starts in V_1 and oscillates between V_1 and V ∖ V_1, e.g. if w_0(v) < w_0(u) ∀ v ∈ V_1, u ∈ V ∖ V_1. For 1 ≤ i ≤ k define the surplus of vertex set i (=: S_w_0(i)) to be the number of uncleaned vertices remaining in V_i at the moment all vertices in V ∖ V_i have been cleaned. Clearly S_w_0(i) = 0 for all but one value of i. Further define S_w_0 = ∑_i=1^k S_w_0(i) = max_1 ≤ i ≤ k(S_w_0(i)). A crucial observation is that ℛ𝒞(G_n, w_0) = n + S_w_0 - 1. Indeed, suppose S_w_0(i) > 0; then immediately after the time step (t = n - S_w_0(i)) when all vertices in V ∖ V_i have been cleaned, the crawler will alternate between V_i and V ∖ V_i until all remaining S_w_0(i) uncleaned vertices of V_i have been cleaned, which will take a further 2S_w_0(i) - 1 steps. Clearly S_w_0 ≤ max_1 ≤ i ≤ k |V_i| = c_1 n. Part (i) of Theorem 2 now amounts to showing that if c_2 ≤ (1/2)(1-c_1) then ∃ w_0 such that S_w_0 = c_1 n. This follows in part since if k ≥ 4 it is possible to clean V ∖ V_1 in |V ∖ V_1| steps using Theorem 1 (i) on the complete (k-1)-partite graph induced by vertex sets V_2, ..., V_k, in which case S_w_0(1) = c_1 n. Finally, if k = 3 then necessarily c_2 = c_3, and again it is of course possible to clean V ∖ V_1 in |V ∖ V_1| steps simply by alternating between V_2 and V_3 for the first 2c_2 n steps. Suppose now c_2 > (1/2)(1-c_1) and S_w_0(2) = 0. When V_2 is fully cleaned there are uncleaned vertices elsewhere in V. We first note that it takes at least 2nc_2 - 1 steps to clean all vertices of V_2, at which point there are at most n - 2nc_2 + 1 vertices in V not yet visited by the crawler.
From this point it will take at most 2(n-2nc_2+1)-1 steps to clean the remainder of the vertices, which gives the required upper boundRC(G_n) ≤ 2(n-2nc_2+1) -1 + 2nc_2 - 1 = 2n(1-c_2).Consider w_0 ∈Ω_n with set V_2 being the |V_2| dirtiest, and V_1 the |V_1| cleanest vertices of V. That is ⋃_j = 0^c_2n-1 w_0^-1(-n+j) = V_2 and ⋃_j = 1^c_1n w_0^-1(-j) = V_1, then the bound is attained.We now turn our attention to Theorem 3, the main result of the section. Let m_i = max(x: ∃ y ≥ 0 s.t. y+x of the 2y+x cleanest vertices lie in set V_i). That is, m_i = max(x: ∃ y ≥ 0 s.t. ⋃_j = 1^2y+xw_0^-1(-j) ∩ V_i =y+x). Stochastically m_i is the record of an n step simple random walk, conditioned to be at a fixed position at time n. This random walk starts at the origin at time 0 and jumps up (down) by 1 at time t if w_0^-1(-t) ∈ V_i (w_0^-1(-t) ∈ V ∖ V_i), and finishes at time n in position |V_i| - |V ∖ V_i| = 2c_i - n. More on this shortly. S(i) ≤ m_i. W.l.o.g. take i = 1. Consider x ∈ V_1 defined to be the (m_1+1)^st cleanest vertex in V_1, and suppose it is also the (m_1+1+t)^th cleanest vertex in V overall, (so w_0(x) = -(m_1+1+t)). So, there are m_1 vertices cleaner than v in V_1 and t cleaner than v in V ∖ V_1. Clearly t ≥ 1 by the definition of m_1. Let v be the first vertex cleaned of the (m_1+1+t) cleanest of V. If v=x then we are done. If vx then v ∈ V ∖ V_1 and v must have been cleaned immediately after some node u ∈ V_1 where w_0(u) = -(m_1+1+t+l) some l > 0 and ∪_i=1^l w_0^-1(-m_1-1-t-i) ⊂ V_1. By the definition of m_1, necessarily l < t, (and all other nodes must have already been cleaned by the crawler). It is clear how the crawler will then proceed, alternating between V_1 and V ∖ V_1 until x is cleaned at which point there will be t-l>0 uncleaned vertices in V ∖ V_1, and hence S(1) ≤ m_1.It is not difficult to construct a graph with some initial vertex weights such that S(1) < m_1. As a simple example, consider the complete 3-partite graph Ginduced by V_1, V_2, V_3 with |V_1| = 3, |V_2| = 3, |V_3| = 1 andV_1 consisting of the 3 cleanest vertices of G. In this case m_1 = 3 but S(1) ≤ 2.We now make the link between m_1 and the record of a simple random walk bridge, (noting the start and end points of this bridge can be different). For 0 ≤ t ≤ n define U(t) := |v ∈ V_1, w_0(v) ≥ -t|, the number of vertices in V_1 initially among the t cleanest of V, D(t) := |v ∈ V ∖ V_1, w_0(v) ≥ -t| = t - U(t) andX(t) := U(t)-D(t). Let (Z(t))_t ≥ 0 be a random walk on ℤ starting from Z(0) = 0 with p = ℙ(Z(t+1)-Z(t)=1) = c_1 and q = ℙ(Z(t+1)-Z(t) = -1) = 1-p ∀ t ≥ 0. Observe that (X(t))_0 ≤ t ≤ n∼ (Z(t)|Z(n)=|V_1|-|V ∖ V_1|)_0 ≤ t ≤ n, and hence X(t) is a random walk bridge starting at X(0) = 0 and ending at X(n) = |V_1|-|V ∖ V_1|. We could equally have defined m_1 = max_0 ≤ t ≤ n{ X(t)}.For c_i < 0.5, 𝔼(m_i) ≤2c_i/1-2c_i.Again, w.l.o.g. take i = 1. Let h_j = ℙ(max_t ≥ 0(Z(t)) ≥ j). As a simple consequence of the Markov property, for j ≥ 1:h_j= ℙ(Z(1) = 1)ℙ(max_t ≥ 0(Z(t)) ≥ j|Z(1) = 1) +ℙ(Z(1) = -1)ℙ(max_t ≥ 0(Z(t)) ≥ j|Z(1) = -1) = c_1 h_j-1 +(1-c_1)h_j+1 Using c_1 < 1/2 together with the initial condition h_0 = 1 we find that h_j = (c_1/1-c_1)^j ∀ j ≥ 0. Now ℙ(m_1 ≥ j)= ℙ(max_0 ≤ t ≤ n(X(t)) ≥ j) = ℙ(max_0 ≤ t ≤ n(Z(t)) ≥ j | Z(n)=|V_1|-|V ∖ V_1|) ≤ℙ(max_0 ≤ t ≤ n(Z(t)) ≥ j | Z(n) ≥ |V_1|-|V ∖ V_1|) ≤ℙ(max_0 ≤ t ≤ n(Z(t)) ≥ j )/ℙ(Z(n) ≥ |V_1|-|V ∖ V_1|)≤ 2ℙ(max_0 ≤ t ≤ n(Z(t)) ≥ j) The first inequality follows from a simple coupling argument. 
If we are given a realisation of (X(t))_1 ≤ t ≤ n and some integer 0 ≤ C ≤ |V_1|, we can define the random path (Z_1(t))_1 ≤ t ≤ n by taking C of the down steps of X(t), chosen uniformly at random, and flipping them to up steps. Clearly, Z_1(t) ≥ X(t) ∀ 1 ≤ t ≤ n, and hence max_0 ≤ t ≤ n(Z_1(t)) ≥ max_0 ≤ t ≤ n(X(t)). If we initially let C ∼ (1/2)(Z(n) - (|V_1|-|V ∖ V_1|)) | Z(n) ≥ |V_1| - |V ∖ V_1|, then it is also clear that Z_1(t) ∼ Z(t) | Z(n) ≥ |V_1|-|V ∖ V_1|. Concluding the argument, 𝔼(m_1) ≤ 2∑_j = 1^∞ ℙ(max_0 ≤ t ≤ n(Z(t)) ≥ j) = 2∑_j = 1^∞ h_j = 2c_1/(1-2c_1). We can now conclude part (i) of Theorem 3. For c_1 < 0.5: r̄c̄(G_n) = 𝔼(n + S_w_0 - 1) ≤ n + 𝔼(∑_i = 1^k m_i) ≤ n + ∑_i = 1^k 2c_i/(1-2c_i) = n + O(1). In proving Lemma 2, we linked m_1 to a Random Walk Bridge X(t) with the property that X(0) > X(n). Using a similar strategy to conclude part (iii) of Theorem 3, where c_1 > 0.5, would not work since, of course, the expected maximum reached by a Random Walk with positive drift is unbounded. To navigate this problem we will reverse time on the Random Walk Bridge. For c_1 > 1/2, 𝔼(m_1) ≤ 2c_1 n - n + 2(1-c_1)/(2c_1-1). For 1 ≤ t ≤ n define X̃(t) = X(n-t). X̃(t) is again a Random Walk Bridge, but with X̃(0) = 2c_1 n - n and X̃(n) = 0. The key point here is that (X̃(t) | c_1 = α) ∼ (2α n - n + X(t) | c_1 = 1-α), so 𝔼(m_1) = 𝔼(max_0 ≤ t ≤ n{X(t)}) = 𝔼(max_0 ≤ t ≤ n{X̃(t)}) ≤ 2c_1 n - n + 2(1-c_1)/(2c_1-1) by Lemma 2. We have now shown that for c_1 > 0.5, r̄c̄(G_n) ≤ n + 𝔼(m_1) ≤ 2c_1 n + 2(1-c_1)/(2c_1-1), which completes the proof of part (iii) of Theorem 3. Finally, Godrèche et al. <cit.> prove that for c_1 = 0.5, 𝔼(max_0 ≤ t ≤ n(X(t))) = √(π n/8). Part (ii) of Theorem 3 follows. § ERDŐS-RÉNYI RANDOM GRAPH We now turn our attention to open problem 2 in <cit.>. In their paper, Bonato et al. considered the robot crawler performed on G(n,p) with np ≥ √(n log n). We will prove the two results in Theorem 4 below, which are similar to Corollary 2 and Theorem 8 in their work, but for much sparser graphs: Theorem 4. Let p = f(n) log n/n for some non-decreasing function f > 28. Then (i) RC(G(n,p)) ≤ n^2+o(1) a.a.s.; (ii) ℛ𝒞(G(n,p),w_0)/(n + n/f(n)) → 1 in probability as n → ∞. In particular we note that if f(n) → ∞ as n → ∞, however slowly, then ℛ𝒞(G(n,p),w_0)/n → 1 in probability as n → ∞. We will use Lemma 1(5) from <cit.>, which states that for any graph G, RC(G) ≤ n(Δ + 1)^d, where Δ is the maximum degree of a vertex in G and d is the diameter of G. The number of neighbours of v, a typical vertex of G(n,p), is distributed Bin(n-1,p). Hence, ℙ(v has ≥ 2np neighbours) = ℙ(Bin(n-1,p) ≥ 2np) = (1+o(1)) ℙ(𝒩((n-1)p,(n-1)p(1-p)) ≥ 2np) ≤ (1+o(1)) Φ(-np/√((n-1)p(1-p))) ≤ (1+o(1)) Φ(-√(np)) ≤ (1+o(1)) e^-np/2/√(2π np) ≤ (1+o(1)) n^-f(n)/2 ≤ (1+o(1)) n^-14. Hence, by the union bound, ℙ(Δ ≥ 2np) ≤ (1+o(1)) n^-13. In a 2004 paper <cit.> (which extends the work of Bollobás <cit.>), Chung and Lu showed that a.a.s., d = (1+o(1)) log n/log(np) for np → ∞. Putting these bounds together, a.a.s., n(Δ + 1)^d ≤ n(2np)^(1+o(1)) log n/log np = n exp((1+o(1)) (log n/log np) log 2np) = n^2+o(1) = o(n^3). To prove part (ii), we will make use of the following lemma: Let Y = ∑_i = 1^n/7 X_i, where the X_i ∼ Geom(1-(1-p)^i) are independent for each 1 ≤ i ≤ n/7.
For all ε > 0, ℙ((1-ε)(n/7 + n/f(n)) < Y < (1+ε)(n/7 + n/f(n))) → 1 as n → ∞. To prove the upper bound we will use the following stochastic domination: for Z_1 ∼ Geom(q) and Z_2 ∼ Exp(-log(1-q)), Z_1 ≼ 1 + Z_2. Defining E_i ∼ Exp(i) for 1 ≤ i ≤ n/7, Y ≼ (1/(-log(1-p))) ∑_i = 1^n/7 E_i + n/7. Hence, ℙ(Y > (1+ε)(n/7 + n/f(n))) ≤ ℙ((1/(-log(1-p))) ∑_i = 1^n/7 E_i > (1+ε)n/f(n) + ε n/7) ≤ ℙ((1/(-log(1-p))) ∑_i = 1^n/7 E_i > (1+ε)n/f(n)) ≤ ℙ(∑_i = 1^n/7 E_i > (1+ε) log n), since -log(1-p) ≥ p = f(n) log n/n. Given that ∑_i = 1^n/7 E_i ∼ max_1 ≤ i ≤ n/7{E_1^i}, where the E_1^i ∼ Exp(1) are i.i.d., we apply the union bound to deduce ℙ(Y > (1+ε)(n/7 + n/f(n))) ≤ n e^-(1+ε) log n = n^-ε → 0 as n → ∞. In proving the lower bound, we will use an even simpler stochastic domination: for T_1^i ∼ Geom(1-(1-p)^i) and T_2^i ∼ Geom(ip) with i ≥ 1, ip < 1, T_1^i ≽ T_2^i. This follows from the simple inequality 1-(1-p)^i ≤ ip, which holds ∀ i ≥ 1. Let T = ∑_i = 1^n/(f(n) log n) T_2^i. We find ℙ(Y < (1-ε)(n/7 + n/f(n))) ≤ ℙ(T + ∑_i = 1+n/(f(n) log n)^n/7 1 < (1-ε)(n/7 + n/f(n))) ≤ ℙ(T < n/f(n) - ε n/7 + n/(f(n) log n)). We recognise the relation between T and the coupon collector problem (see for example <cit.>). Now, 𝔼(T) = ∑_i = 1^n/(f(n) log n) 1/(ip) = (log n - log(f(n) log n) + O(1))/p = n/f(n) - n(log(f(n) log n) - O(1))/(f(n) log n), and Var(T) = ∑_i = 1^n/(f(n) log n) (1-ip)/(ip)^2 ≤ (1/p^2) ∑_i = 1^∞ 1/i^2 = π^2/(6p^2). We use Chebyshev's inequality to conclude ℙ(Y < (1-ε)(n/7 + n/f(n))) ≤ ℙ(T < n/f(n) - ε n/7 + n/(f(n) log n)) ≤ ℙ(|T - 𝔼(T)| > 𝔼(T) - n/f(n) + ε n/7 - n/(f(n) log n)) ≤ ℙ(|T - 𝔼(T)| > ε n/7 - n(log(f(n) log n) + O(1))/(f(n) log n)) ≤ ℙ(|T - 𝔼(T)| > ε n/14) (for large enough n) ≤ (ε n/14)^-2 Var(T) = 196π^2/(6(ε f(n) log n)^2) → 0 as n → ∞. We will prove Theorem 4 (ii) by showing high probability lower/upper bounds on ℛ𝒞(G(n,p),w_0). This is achieved by showing that with high probability this robot crawler number dominates/is dominated by a particular sum of geometrics. We will then use Lemma 3 to reach the final conclusion. Fix the order of the vertices of G(n,p) by initial weighting before we realise the edges of the random graph. So w.l.o.g. w_0(v_i) = -i ∀ 1 ≤ i ≤ n. Lower Bound We begin by showing the lower bound, ℙ(ℛ𝒞(G(n,p),w_0) ≤ (1-ε)(n + n/f(n))) → 0 as n → ∞. The crawler begins at time 1 at vertex v_n, initially the dirtiest node. Suppose that the crawler is positioned at vertex v, and that there are i vertices yet to be visited. * If this is the crawler's first visit to v, no information is known about the presence of potential edges between v and yet unvisited vertices, hence the probability that v is connected to an unvisited vertex is 1 - (1-p)^i, independently of all previous steps of the algorithm. Otherwise, suppose that w was the vertex visited immediately after the crawler was last at vertex v. * If w had already been cleaned, then necessarily it is cleaner than any yet unvisited vertex, which implies there are no edges between v and yet uncleaned vertices. * If w had not already been cleaned, there are no edges between v and any uncleaned vertices which are dirtier than w, but the presence of edges between v and uncleaned vertices cleaner than w is independent of all previous steps of the algorithm. In any case, the probability v is connected to an unvisited vertex is 1 - (1-p)^j for some 0 ≤ j ≤ i.
Hence, independently of all previous steps of the process, the probability that v is connected to an unvisited vertex is at most 1 - (1-p)^i. This implies that the number of steps needed before reaching the next yet uncleaned vertex dominates a Geom(1-(1-p)^i) random variable, and so ∀ ε > 0, ℙ(ℛ𝒞(G(n,p),w_0) ≤ (1-ε)(n + n/f(n))) ≤ ℙ(∑_i = 1^n-1 Geom(1-(1-p)^i) ≤ (1-ε)(n + n/f(n))) ≤ ℙ(Y ≤ (1-ε)(n/7 + n/f(n))) → 0 as n → ∞, by Lemma 3. Upper Bound It remains to show that ∀ ε > 0, ℙ(ℛ𝒞(G(n,p),w_0) ≥ (1+ε)(n + n/f(n))) → 0 as n → ∞. As in <cit.>, we will consider different stages of the crawling process. Phase 1: Again, the process will start from v_n, initially the dirtiest node, and proceed to clean vertices of the graph. This phase ends when either of the following occur: (a) 4n/7 vertices have been cleaned. (b) The crawler is not adjacent to any of the n/7 dirtiest (and as yet uncleaned) vertices, which are necessarily contained in {v_i, i < 5n/7}. We define the jump number J(v_i) (1 ≤ i ≤ n) of a vertex v_i as the number of times any cleaner node was visited before v_i was first cleaned itself. Intuitively, it is the number of potential edges connected to v_i which were explored before one was first found, since each occurrence of a cleaner vertex being chosen by the crawler before vertex v_i implies a missing edge between the crawler's position at that time and v_i. If Phase 1 ends due to (a), and also the condition “J(v) ≤ n/7 for all vertices” at the end of Phase 1 holds, we will say that property P1 holds. During each step of the crawling process in Phase 1, potential edges between the crawler and dirty nodes are not yet exposed. At each step, event (b) occurs only if n/7 unexplored edges are not present in G(n,p). This occurs with probability at most (1-p)^n/7 = (1 - f(n) log n/n)^n/7 ≤ n^-f(n)/7 = o(n^-3). Hence, by the union bound, with probability 1-o(n^-2) Phase 1 ends due to (a). Further, as argued above, “J(v_i) ≥ n/7” implies that the first n/7 unexplored potential edges to v_i were not present. Again this has probability at most (1-p)^n/7 = o(n^-3), and hence by another application of the union bound, property P1 holds with probability 1-o(n^-2). An important point to note is that only edges between vertices in {v_i, i < 5n/7} have been explored. Crucially for Phase 3, property P1 implies that each vertex cleaned in this phase has had at most 2n/7 potential edges exposed by the crawler. Phase 2: We continue to clean vertices until any one of the following occurs: (a) The crawler is not adjacent to any as yet uncleaned vertex. (b) There are n/7 uncleaned vertices remaining in G(n,p). If Phase 2 ends due to (b), and all vertices in {v_i, i < 5n/7} have been cleaned by the end of the phase, then we say property P2 holds. As in Phase 1, Phase 2 ends due to (a) at a given step only if (at least) n/7 unexplored edges are not present in G(n,p). Again we can conclude using the union bound that Phase 2 ends due to (b) with probability 1-o(n^-2). Suppose now that ∃ v ∈ {v_i, i < 5n/7} such that v has not been cleaned by the crawler by the end of Phase 2. This would imply that J(v) ≥ n/7, which as previously calculated has probability o(n^-3). Using this observation we again use the union bound to deduce: ℙ({P2 holds} | {Phase 2 ends due to (b)} ∩ {P1 holds}) = 1-o(n^-2). Hence, summarising what has been done so far, ℙ({P1 holds} ∩ {P2 holds}) = 1-o(n^-2). Phase 3: During this phase the crawler will continue to visit yet uncleaned vertices of G(n,p), as well as revisiting some of the vertices which were cleaned during Phase 1.
These vertices will have the smallest weight at this stage. This phase ends when any of the following occur: (a) The crawler is not adjacent to any yet uncleaned vertex, nor to any vertex which was cleaned during Phase 1 and has not yet been revisited in Phase 3. (b) The phase takes longer than 2n/7 steps. (c) All vertices are cleaned. If Phase 3 ends due to (c) then we say property P3 holds. In the explanation that follows, we condition on the event that P1 and P2 hold. At each step of this phase, in total there are at least 3n/7 “target” vertices which are yet to be visited at all, or were cleaned in Phase 1 and have yet to be revisited in this phase. The reason for this is that there are 4n/7 vertices cleaned in Phase 1, n/7 vertices yet to be visited at all, and this phase takes at most 2n/7 steps. If the crawler has just revisited a vertex cleaned in Phase 1, P1 implies that at most 2n/7 potential edges adjacent to the vertex will have been explored earlier in the process, so at least 3n/7 - 2n/7 = n/7 potential edges to “target” vertices are still unexplored. Otherwise, if the crawler has just visited a vertex for the first time in the process, then all (≥ 3n/7) potential edges to “target” vertices are unexplored. This is because, crucially: no edges between {v_i, i < 5n/7} and {v_i, i ≥ 5n/7} are explored in Phase 1; P2 implies the uncleaned vertices at the beginning of Phase 3 are contained within {v_i, i ≥ 5n/7}; and, as in earlier phases, the presence of potential edges between any possible current location of the crawler and yet unvisited vertices is still undetermined, and independent of previous steps of the process. Once again, the union bound tells us the probability that we have (at least) n/7 unexplored edges not present in G(n,p) during one of these steps, and hence that Phase 3 ends due to (a), is o(n^-2). We now argue that with probability o(n^-2) Phase 3 ends due to (b). This is essentially a repeat of the argument in Phase 2. If Phase 3 ends due to (b) then ≥ n/7 vertices cleaned in Phase 1 will have been revisited during Phase 3. If v ∈ {v_i, i ≥ 5n/7} is still uncleaned at the end of the phase, then J(v) ≥ n/7, since all vertices cleaned in Phase 1 will be cleaner than v before it is itself cleaned. Once again, this has probability o(n^-3), and applying the union bound: ℙ({P3 holds} | {Phase 3 ends due to (b) or (c)} ∩ {P2 holds} ∩ {P1 holds}) = 1-o(n^-2). We can now conclude that ℙ({P3 holds} | {P1 holds} ∩ {P2 holds}) = 1-o(n^-2), and hence, bringing together earlier calculations, ℙ({P1, P2, P3 hold}) = 1-o(n^-2). If Ỹ := (Y | Y ≤ 2n/7), then conditional on P1, P2 and P3, Phases 1 and 2 will take n - n/7 steps and Phase 3 will take a number of steps distributed as Ỹ. Indeed, during Phase 3, when there are x yet uncleaned vertices in {v_i, i ≥ 5n/7} (and hence x unexplored edges from the crawler's current position to these vertices), the probability that the crawler will be adjacent to at least one of them is given by 1-(1-p)^x. If the crawler continues to visit vertices with unexplored edges to all x yet uncleaned vertices, then the probability that the crawler will reach one of these x vertices in the next y steps is given by ℙ(Geom(1-(1-p)^x) ≤ y). And so ℙ(ℛ𝒞(G(n,p),w_0) ≥ (1+ε)(n + n/f(n))) ≤ ℙ(ℛ𝒞(G(n,p),w_0) ≥ (1+ε)(n + n/f(n)) | {P1, P2, P3 hold}) + ℙ({P1, P2, P3 hold}^C) ≤ ℙ(Ỹ + 6n/7 ≥ (1+ε)(n + n/f(n))) + o(n^-2) ≤ ℙ(Y ≥ (1+ε)(n/7 + n/f(n))) + o(n^-2) → 0 as n → ∞, again by Lemma 3.
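A quick Monte Carlo check of part (ii), reusing the crawler logic sketched in the introduction, is given below as our own experiment: it samples G(n,p) with p = f log n/n, runs the crawler from a uniform initial weighting, and compares ℛ𝒞 to n + n/f(n). At this edge density the graph is connected a.a.s., so the sketch omits the connectivity test that a careful implementation would include.

import math, random

def crawl(adj, order):
    # order[i] gets initial weight -(i+1); order[-1] is the dirtiest vertex.
    w = {v: -(i + 1) for i, v in enumerate(order)}
    v, t = order[-1], 1
    w[v] = t
    while min(w.values()) < 0:
        v = min(adj[v], key=w.get)   # dirtiest neighbour
        t += 1
        w[v] = t
    return t

def gnp(n, p, rng):
    adj = {v: [] for v in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].append(v)
                adj[v].append(u)
    return adj

rng = random.Random(0)
n, f = 2000, 30.0                     # f > 28, as required by Theorem 4
p = f * math.log(n) / n
adj = gnp(n, p, rng)
order = list(range(n))
rng.shuffle(order)
steps = crawl(adj, order)
print(steps, n + n / f, steps / (n + n / f))   # ratio should be near 1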
http://arxiv.org/abs/1702.08371v1
{ "authors": [ "Angus Davidson", "Ayalvadi Ganesh" ], "categories": [ "math.PR", "math.CO" ], "primary_category": "math.PR", "published": "20170227164852", "title": "The Robot Crawler Model on Complete k-Partite and Erdős-Rényi Random Graphs" }
Tensor Balancing on Statistical Manifold Mahito Sugiyama National Institute of Informatics JST, PRESTO Hiroyuki Nakahara RIKEN Brain Science Institute Koji Tsuda The University of Tokyo RIKEN AIP; NIMS December 30, 2023 ===================================================================================================================================================================================================================== Many machine learning models involve solving optimization problems. Thus, it is important to be able to solve large-scale optimization problems in big data applications. Recently, subsampled Newton methods have emerged and attracted much attention due to their efficiency at each iteration, rectifying the ordinary Newton method's weakness of a high per-iteration cost while retaining a fast convergence rate. Other efficient stochastic second order methods have also been proposed. However, the convergence properties of these methods are still not well understood. There are also several important gaps between the current convergence theory and the performance in real applications. In this paper, we aim to fill these gaps. We propose a unifying framework to analyze both local and global convergence properties of second order methods. Based on this framework, we present our theoretical results, which match the performance in real applications well. § INTRODUCTION Mathematical optimization is an important pillar of machine learning. We consider the following optimization problem: min_{x∈ℝ^d} F(x) = (1/n)∑_{i=1}^n f_i(x), where the f_i(x) are smooth functions. Many machine learning models can be expressed as (<ref>), where each f_i is the loss with respect to (w.r.t.) the i-th training sample. There are many examples, such as logistic regression, smoothed support vector machines, neural networks, and graphical models. Many optimization algorithms for solving the problem in (<ref>) are based on the following iteration: x^(t+1) = x^(t) - s_t Q_t g(x^(t)), t=0, 1, 2, …, where s_t>0 is the step length. If Q_t is the identity matrix and g(x^(t)) = ∇F(x^(t)), the resulting procedure is called Gradient Descent (GD), which achieves sublinear convergence for a general smooth convex objective function and linear convergence for a smooth, strongly convex objective function. When n is large, the full gradient method is inefficient due to its iteration cost scaling linearly in n. Consequently, stochastic gradient descent (SGD) has been a typical alternative <cit.>. In order to achieve a cheaper cost in each iteration, such a method constructs an approximate gradient on a small mini-batch of data. However, the convergence rate can be significantly slower than that of the full gradient methods <cit.>. Thus, a great deal of effort has been made to devise modifications that achieve the convergence rate of the full gradient while keeping a low iteration cost <cit.>. If Q_t is a d×d positive definite matrix containing curvature information, this formulation leads us to second-order methods. It is well known that second order methods enjoy a superior convergence rate in both theory and practice, in contrast to first-order methods, which only make use of gradient information. The standard Newton method, where Q_t = [∇²F(x^(t))]^{-1}, g(x^(t)) = ∇F(x^(t)) and s_t = 1, achieves a quadratic convergence rate for smooth, strongly convex objective functions. However, the Newton method takes 𝒪(nd² + d³) cost per iteration, so it becomes extremely expensive when n or d is very large.
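For concreteness, the following Python sketch instantiates the iteration above with Q_t = I (gradient descent) and Q_t = [∇²F(x^(t))]^{-1} (Newton) on a small ridge-regularized logistic regression. It is our own illustration: the data are synthetic and the function names, step size, and regularization constant are arbitrary choices, not the paper's reference setup.

```python
import numpy as np

def loss_grad_hess(x, A, b, reg):
    """F(x) = (1/n) sum_i log(1 + exp(-b_i <a_i, x>)) + (reg/2) ||x||^2."""
    n = A.shape[0]
    m = b * (A @ x)
    s = np.exp(-np.logaddexp(0.0, m))            # sigmoid(-m), overflow-safe
    loss = np.logaddexp(0.0, -m).mean() + 0.5 * reg * (x @ x)
    grad = -(A.T @ (b * s)) / n + reg * x
    w = s * (1.0 - s)                            # per-sample curvature weights
    hess = (A.T * w) @ A / n + reg * np.eye(A.shape[1])
    return loss, grad, hess

rng = np.random.default_rng(0)
n, d, reg = 2000, 20, 1e-3
A = rng.standard_normal((n, d))
b = np.sign(A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n))

for method in ("gradient descent", "newton"):
    x = np.zeros(d)
    for _ in range(30):
        _, g, H = loss_grad_hess(x, A, b, reg)
        if method == "newton":
            x = x - np.linalg.solve(H, g)        # Q_t = [grad^2 F]^{-1}, s_t = 1
        else:
            x = x - 0.5 * g                      # Q_t = I, fixed step
    print(method, "final ||grad F|| =",
          np.linalg.norm(loss_grad_hess(x, A, b, reg)[1]))
```

Newton reaches machine-precision gradients in a handful of iterations, but each step forms and factors a d×d Hessian from all n samples, which is exactly the cost bottleneck discussed above.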
As a result, one tries to construct an approximation of the Hessian such that the update is computationally feasible while keeping sufficient second order information. One class of such methods is the quasi-Newton methods, which are generalizations of the secant methods that find the root of the first derivative for multidimensional problems. The celebrated Broyden-Fletcher-Goldfarb-Shanno (BFGS) method and its limited memory version (L-BFGS) are the most popular and widely used <cit.>. They take 𝒪(nd + d²) cost per iteration. Recently, for the case n ≫ d, a class of methods called subsampled Newton methods has been proposed, which define an approximate Hessian matrix using a small subset of samples. The most naive approach is to sample a subset of the functions f_i randomly <cit.> to construct a subsampled Hessian. <cit.> proposed a regularized subsampled Newton method called NewSamp. When the Hessian can be written as ∇²F(x) = [B(x)]^T B(x), where B(x) is an available n×d matrix, <cit.> used sketching techniques to approximate the Hessian and proposed the sketch Newton method. Similarly, <cit.> proposed to sample rows of B(x) with a non-uniform probability distribution. <cit.> proposed an algorithm called LiSSA to approximate the inverse of the Hessian directly. Although the convergence performance of stochastic second order methods has been analyzed, their convergence properties are still not well understood. There are several important gaps lying between the convergence theory and the performance of these algorithms in real applications. First, there is the necessity of Lipschitz continuity of the Hessian. In previous work, to achieve a linear-quadratic convergence rate, stochastic second order methods all assume that ∇²F(x) is Lipschitz continuous. However, in real applications they may converge to an optimal point even without this assumption. For example, <cit.> used NewSamp to successfully train the smoothed SVM, in which the ℓ_2-hinge loss is used, so the corresponding Hessian is not Lipschitz continuous. Second, there is the sketched size of sketch Newton methods. To obtain linear convergence, the sketched size is 𝒪(dκ²) in <cit.>, later improved to 𝒪(dκ) in <cit.> using Gaussian sketching matrices, where κ is the condition number of the Hessian matrix in question. However, sketch Newton empirically performs well even when the Hessian matrix is ill-conditioned: a sketched size of several times, or several tens of times, d can achieve a linear convergence rate in unconstrained optimization. In contrast, the theoretical result of <cit.> implies that the sketched size may exceed n in ill-conditioned cases. Third, there is the sample size in regularized subsampled Newton methods. In both <cit.> and <cit.>, the theoretical analysis shows that the sample size of regularized subsampled Newton methods should be set the same as for the conventional subsampled Newton method. In practice, however, adding a large regularizer can markedly reduce the sample size while preserving convergence. Thus, practice does not agree with the extant theoretical analysis <cit.>. In this paper, we aim to fill these gaps between the current theory and empirical performance. More specifically, we first cast these second order methods into an algorithmic framework that we call approximate Newton. Accordingly, we propose a general result for the analysis of both local and global convergence properties of second order methods. Based on this framework, we then give a detailed theoretical analysis which matches the empirical performance.
We summarize our contributions as follows: * We propose a unifying framework (Theorem <ref> and Theorem <ref>) to analyze local and global convergence properties of second order methods, including stochastic and deterministic versions. The convergence performance of second order methods can be analyzed easily and systematically in this framework. * We prove that the Lipschitz continuity condition on the Hessian is not necessary for achieving linear and superlinear convergence in variants of subsampled Newton, but it is needed to obtain quadratic convergence. This explains the phenomenon that NewSamp <cit.> can be used to train the smoothed SVM, in which the Lipschitz continuity condition on the Hessian is not satisfied. It also reveals the reason why previous stochastic second order methods, such as subsampled Newton, sketch Newton, LiSSA, etc., all achieve a linear-quadratic convergence rate. * We prove that the sketched size is independent of the condition number of the Hessian matrix, which explains why sketch Newton performs well even when the Hessian matrix is ill-conditioned. * Based on our analysis framework, we provide a much tighter bound on the sample size of subsampled Newton methods. To the best of the authors' knowledge, it is the tightest bound for subsampled Newton methods. * We provide a theoretical guarantee that adding a regularizer is an effective way to reduce the sample size in subsampled Newton methods while keeping convergence. Our theoretical analysis also shows that adding a regularizer leads to poorer convergence behavior as the sample size decreases. §.§ Organization The remainder of the paper is organized as follows. In Section <ref> we present notation and preliminaries. In Section <ref> we present a unifying framework for local and global convergence analysis of second order methods. In Section <ref> we analyze the convergence properties of sketch Newton methods and prove that the sketched size is independent of the condition number of the Hessian matrix. In Section <ref> we give the convergence behaviors of several variants of the subsampled Newton method. In particular, we reveal the relationship among sample size, regularizer, and convergence rate. In Section <ref>, we validate our theoretical results experimentally. Finally, we conclude our work in Section <ref>. Theorems are proved in the appendices in the order of their appearance. § NOTATION AND PRELIMINARIES Section <ref> defines the notation used in this paper. Section <ref> introduces matrix sketching techniques and their properties. Section <ref> describes some important assumptions about objective functions. §.§ Notation Given a matrix A=[a_ij] ∈ ℝ^{m×n} of rank ℓ and a positive integer k ≤ ℓ, its condensed SVD is given as A = UΣV^T = U_kΣ_kV_k^T + U_{∖k}Σ_{∖k}V_{∖k}^T, where U_k and U_{∖k} contain the left singular vectors of A, V_k and V_{∖k} contain the right singular vectors of A, and Σ = diag(σ_1, …, σ_ℓ) with σ_1 ≥ σ_2 ≥ ⋯ ≥ σ_ℓ > 0 holds the nonzero singular values of A. We will use σ_max(A) to denote the largest singular value and σ_min(A) to denote the smallest non-zero singular value. Thus, the condition number of A is defined by κ(A) ≜ σ_max(A)/σ_min(A). If A is positive semidefinite, then U = V and the square root of A can be defined as A^{1/2} = UΣ^{1/2}U^T. It also holds that λ_i(A) = σ_i(A), where λ_i(A) is the i-th largest eigenvalue of A, λ_max(A) = σ_max(A), and λ_min(A) = σ_min(A). Additionally, ‖A‖_F ≜ (∑_{i,j} a_ij²)^{1/2} = (∑_i σ_i²)^{1/2} is the Frobenius norm of A and ‖A‖ ≜ σ_1 is the spectral norm. Given a positive definite matrix M, ‖x‖_M ≜ ‖M^{1/2}x‖ is called the M-norm of x. Given square matrices A and B of the same size, we denote A ≼ B if B - A is positive semidefinite.
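As a quick numerical illustration of this notation (a sketch with synthetic matrices of our own choosing), the M-norm can be evaluated via ‖x‖_M = (x^T M x)^{1/2}, and A ≼ B can be checked through the smallest eigenvalue of B - A:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 6
G = rng.standard_normal((d, 2 * d))
M = G @ G.T + np.eye(d)                     # a positive definite matrix

# ||x||_M = ||M^{1/2} x||, with M^{1/2} from the eigendecomposition of M.
w, U = np.linalg.eigh(M)
M_half = U @ np.diag(np.sqrt(w)) @ U.T
x = rng.standard_normal(d)
print(np.linalg.norm(M_half @ x), np.sqrt(x @ M @ x))   # identical values

# A <= B in the Loewner order iff lambda_min(B - A) >= 0.
A = 0.5 * M
print(np.linalg.eigvalsh(M - A).min() >= 0)             # True, so A <= M
```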
§.§ Randomized sketching matrices We first give the ϵ_0-subspace embedding property, which will be used to sketch Hessian matrices. Then we list some useful types of randomized sketching matrices, including Gaussian projection <cit.>, leverage score sampling <cit.>, and count sketch <cit.>. S ∈ ℝ^{ℓ×n} is said to be an ϵ_0-subspace embedding matrix w.r.t. a fixed matrix A ∈ ℝ^{n×d}, where d < n, if ‖SAx‖² = (1±ϵ_0)‖Ax‖² (i.e., (1-ϵ_0)‖Ax‖² ≤ ‖SAx‖² ≤ (1+ϵ_0)‖Ax‖²) for all x ∈ ℝ^d. From the definition of the ϵ_0-subspace embedding matrix, we can derive the following property directly: S ∈ ℝ^{ℓ×n} is an ϵ_0-subspace embedding matrix w.r.t. the matrix A ∈ ℝ^{n×d} if and only if (1-ϵ_0)A^TA ≼ A^TS^TSA ≼ (1+ϵ_0)A^TA. Gaussian sketching matrix. The most classical sketching matrix is the Gaussian sketching matrix S ∈ ℝ^{ℓ×n}, whose entries are i.i.d. from the normal distribution with mean 0 and variance 1/ℓ. Owing to their well-known concentration properties <cit.>, Gaussian random matrices are very attractive. Moreover, ℓ = 𝒪(d/ϵ_0²) is enough to guarantee the ϵ_0-subspace embedding property for any fixed matrix A ∈ ℝ^{n×d}, which is the tightest bound among the known types of sketching matrices. However, the Gaussian random matrix is usually dense, so it is costly to compute SA. Leverage score sketching matrix. A leverage score sketching matrix S = DΩ ∈ ℝ^{ℓ×n} w.r.t. A ∈ ℝ^{n×d} is defined by sampling probabilities p_i, a sampling matrix Ω ∈ ℝ^{ℓ×n}, and a diagonal rescaling matrix D ∈ ℝ^{ℓ×ℓ}. Specifically, we construct S as follows. For every j = 1, …, ℓ, independently and with replacement, pick an index i from the set {1, 2, …, n} with probability p_i, and set Ω_{ji} = 1 and Ω_{jk} = 0 for k ≠ i, as well as D_{jj} = 1/√(p_i ℓ). The sampling probabilities p_i are the leverage scores of A, defined as follows. Let V ∈ ℝ^{n×d} be the column orthonormal basis of A, and let v_{i,*} denote the i-th row of V. Then p_i ≜ ‖v_{i,*}‖²/d for i = 1, …, n are the leverage scores of A. To achieve the ϵ_0-subspace embedding property w.r.t. A, ℓ = 𝒪(d log d/ϵ_0²) is sufficient. Sparse embedding matrix. A sparse embedding matrix S ∈ ℝ^{ℓ×n} is a matrix in each column of which there is only one nonzero entry, uniformly sampled from {1, -1} <cit.>. Hence, it is very efficient to compute SA, especially when A is sparse. To achieve the ϵ_0-subspace embedding property w.r.t. A ∈ ℝ^{n×d}, ℓ = 𝒪(d²/ϵ_0²) is sufficient <cit.>. Other sketching matrices, such as the Subsampled Randomized Hadamard Transform <cit.>, as well as their properties, can be found in the survey <cit.>.
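All three constructions are easy to realize numerically. The sketch below (the test matrix, sizes, and seed are arbitrary choices of ours) builds each sketching matrix and measures the smallest ϵ_0 for which the subspace embedding property holds, read off from the extreme singular values of SU, where U is an orthonormal basis of the column space of A:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, ell = 4096, 20, 800
A = rng.standard_normal((n, d)) @ np.diag(1.5 ** -np.arange(d))

def distortion(S, A):
    """Smallest eps with (1-eps)||Ax||^2 <= ||SAx||^2 <= (1+eps)||Ax||^2."""
    U, _, _ = np.linalg.svd(A, full_matrices=False)
    s = np.linalg.svd(S @ U, compute_uv=False)
    return max(abs(s.max() ** 2 - 1.0), abs(1.0 - s.min() ** 2))

# Gaussian sketch: i.i.d. N(0, 1/ell) entries.
S_gauss = rng.standard_normal((ell, n)) / np.sqrt(ell)

# Leverage score sampling: rows sampled with p_i = ||v_{i,*}||^2 / d.
U, _, _ = np.linalg.svd(A, full_matrices=False)
p = (U ** 2).sum(axis=1) / d
idx = rng.choice(n, size=ell, p=p)
S_lev = np.zeros((ell, n))
S_lev[np.arange(ell), idx] = 1.0 / np.sqrt(p[idx] * ell)

# Sparse embedding (count sketch): one random +/-1 entry per column.
S_cs = np.zeros((ell, n))
S_cs[rng.integers(0, ell, size=n), np.arange(n)] = rng.choice([-1.0, 1.0], size=n)

for name, S in [("gaussian", S_gauss), ("leverage", S_lev), ("countsketch", S_cs)]:
    print(name, "eps_0 ~", round(distortion(S, A), 3))
```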
§.§ Assumptions and Notions In this paper, we focus on the problem described in Eqn. (<ref>). Moreover, we will make the following two assumptions. Assumption 1: The objective function F is μ-strongly convex, that is, F(y) ≥ F(x) + [∇F(x)]^T(y-x) + (μ/2)‖y-x‖², μ > 0. Assumption 2: ∇F(x) is L-Lipschitz continuous, that is, ‖∇F(x) - ∇F(y)‖ ≤ L‖x-y‖, L > 0. Assumptions 1 and 2 imply that for any x ∈ ℝ^d we have μI ≼ ∇²F(x) ≼ LI, where I is the identity matrix of appropriate size. With a slight abuse of notation, we define κ ≜ L/μ. Note that κ is an upper bound on the condition number of the Hessian matrix ∇²F(x) for any x. Furthermore, if ∇²F(x) is Lipschitz continuous, then we have ‖∇²F(x) - ∇²F(y)‖ ≤ L̂‖x-y‖, where L̂ > 0 is the Lipschitz constant of ∇²F(x). Throughout this paper, we use the notions of linear, superlinear, and quadratic convergence rates. In our paper, the convergence rates are defined w.r.t. ‖·‖_M, where M = ∇²F(x^*) and x^* is the optimal solution to Problem (<ref>). A sequence of vectors {x^(t)} is said to converge linearly to a limit point x^* if, for some 0 < ρ < 1, lim sup_{t→∞} ‖x^(t+1) - x^*‖_M / ‖x^(t) - x^*‖_M = ρ. Similarly, superlinear convergence and quadratic convergence are respectively defined as lim sup_{t→∞} ‖x^(t+1) - x^*‖_M / ‖x^(t) - x^*‖_M = 0 and lim sup_{t→∞} ‖x^(t+1) - x^*‖_M / ‖x^(t) - x^*‖_M² = ρ. We call the rate linear-quadratic if the following condition holds: ‖x^(t+1) - x^*‖_M ≤ ρ_1‖x^(t) - x^*‖_M + ρ_2‖x^(t) - x^*‖_M², where 0 < ρ_1 < 1. § MAIN RESULTS The existing variants of stochastic second order methods share some important attributes. First, methods such as NewSamp <cit.>, LiSSA <cit.>, subsampled Newton with conjugate gradient <cit.>, and subsampled Newton with non-uniform sampling <cit.> all have the same convergence behavior; that is, they achieve a linear-quadratic convergence rate. Second, they also follow the same algorithmic procedure, summarized as follows. In each iteration, they first construct an approximate Hessian matrix H^(t) such that (1-ϵ_0)H^(t) ≼ ∇²F(x^(t)) ≼ (1+ϵ_0)H^(t), where 0 ≤ ϵ_0 < 1. Then they solve the following optimization problem min_p (1/2)p^T H^(t) p - p^T ∇F(x^(t)) approximately or exactly to obtain the direction vector p^(t). Finally, their update equation is given as x^(t+1) = x^(t) - p^(t). With this procedure, we regard these stochastic second order methods as approximate Newton methods. The detailed algorithmic description is listed in Algorithm <ref>. §.§ Local Convergence Analysis In the following theorem, we propose a unifying framework which describes the convergence properties of the second order optimization procedure depicted above. Theorem: Let Assumptions 1 and 2 hold. Suppose that ∇²F(x) exists and is continuous in a neighborhood of a minimizer x^*. Let H^(t) be a positive definite matrix that satisfies Eqn. (<ref>) with 0 ≤ ϵ_0 < 1, and let p^(t) be an approximate solution of Problem (<ref>) such that ‖∇F(x^(t)) - H^(t)p^(t)‖ ≤ (ϵ_1/κ^{3/2})‖∇F(x^(t))‖, where 0 < ϵ_1 < 1. Then Algorithm <ref> has the following convergence properties. (a) There exist a sufficiently small value γ and ν = o(1) such that when ‖x^(t) - x^*‖_M ≤ γ, we have ‖x^(t+1) - x^*‖_M ≤ (ϵ_0 + ϵ_1 + 2νμ^{-1} + 2(2ν^{1/2}μ^{-1/2} + νμ^{-1})(νμ^{-1}+1))‖x^(t) - x^*‖_M. Moreover, ν goes to 0 as x^(t) goes to x^*. (b) Furthermore, if ∇²F(x) is L̂-Lipschitz continuous and x^(t) satisfies ‖x^(t) - x^*‖_M ≤ μ^{3/2}L̂^{-1}, then it holds that ‖x^(t+1) - x^*‖_M ≤ (ϵ_0 + ϵ_1)‖x^(t) - x^*‖_M + 7μ^{-3/4}L̂^{1/2}‖x^(t) - x^*‖_M^{3/2}. Remark: In Eqn. (<ref>), the high order term is linear in ‖x^(t) - x^*‖_M^{3/2} instead of ‖x^(t) - x^*‖_M² as in previous work <cit.>. However, this difference can be neglected. If {x^(t)} converges with rate ‖x^(t+1) - x^*‖_M ≤ 𝒪(‖x^(t) - x^*‖_M^{3/2}), then it takes 𝒪(log_{3/2} log(1/ϵ)) iterations to achieve an ϵ-suboptimality. In contrast, if {x^(t)} converges with rate ‖x^(t+1) - x^*‖_M ≤ 𝒪(‖x^(t) - x^*‖_M²), then it takes 𝒪(log_2 log(1/ϵ)) iterations. Since log_{3/2} log(1/ϵ) = log_{3/2}2 · log_2 log(1/ϵ) and log_{3/2}2 < 2, we will also say that a sequence {x^(t)} satisfying ‖x^(t+1) - x^*‖_M ≤ 𝒪(‖x^(t) - x^*‖_M^{3/2}) converges quadratically. Similarly, we will refer to Eqn. (<ref>) as linear-quadratic convergence.
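The inexactness condition ‖∇F(x^(t)) - H^(t)p^(t)‖ ≤ (ϵ_1/κ^{3/2})‖∇F(x^(t))‖ is easy to enforce with an iterative inner solver. The following Python sketch, our own illustration on a synthetic quadratic with hypothetical function names, runs conjugate gradient on H^(t)p = ∇F(x^(t)) until the residual test holds and then performs the approximate Newton update of Algorithm <ref>:

```python
import numpy as np

def cg(H, g, tol_rel):
    """Conjugate gradient for H p = g, stopped when ||g - H p|| <= tol_rel * ||g||,
    i.e. the inexactness condition with tol_rel = eps_1 / kappa^{3/2}."""
    p = np.zeros_like(g)
    r = g.copy()                      # residual r = g - H p
    q = r.copy()
    rs = r @ r
    stop = (tol_rel * np.linalg.norm(g)) ** 2
    while rs > stop:
        Hq = H @ q
        a = rs / (q @ Hq)
        p += a * q
        r -= a * Hq
        rs_new = r @ r
        q = r + (rs_new / rs) * q
        rs = rs_new
    return p

def approximate_newton(grad, approx_hess, x0, eps1, kappa, iters=20):
    """Generic outer loop: x <- x - p with H^(t) p ~ grad F(x)."""
    x = x0
    for _ in range(iters):
        x = x - cg(approx_hess(x), grad(x), eps1 / kappa ** 1.5)
    return x

# Toy usage: F(x) = 0.5 x^T P x - c^T x, with a spectrally close approximate Hessian.
rng = np.random.default_rng(0)
d = 30
Q = np.linalg.qr(rng.standard_normal((d, d)))[0]
P = Q @ np.diag(np.linspace(1.0, 10.0, d)) @ Q.T
c = rng.standard_normal(d)
x = approximate_newton(lambda x: P @ x - c,
                       lambda x: P + 0.05 * np.eye(d),   # a (1 +/- eps_0) approximation
                       np.zeros(d), eps1=0.1, kappa=10.0)
print("||grad F|| =", np.linalg.norm(P @ x - c))
```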
From Theorem <ref>, we can draw some important insights. First, Theorem <ref> provides sufficient conditions for different convergence rates, including linear and superlinear rates. If ϵ_0 + ϵ_1 is a constant, then the sequence {x^(t)} converges linearly, because ν = o(1) and it goes to 0 as t goes to infinity. Furthermore, if we set ϵ_0 = ϵ_0(t) and ϵ_1 = ϵ_1(t) such that ϵ_0(t) and ϵ_1(t) decrease to 0 as t increases, then the sequence {x^(t)} converges superlinearly. Second, Theorem <ref> makes it clear that Lipschitz continuity of the Hessian is not necessary for linear and superlinear convergence of stochastic second order methods, including the subsampled Newton method, sketch Newton, NewSamp, etc. This reveals the reason why NewSamp can be used to train the smoothed SVM, where the Lipschitz continuity of the Hessian matrix is not satisfied. The Lipschitz continuity condition is only needed to obtain quadratic or linear-quadratic convergence. This explains the phenomenon that LiSSA <cit.>, NewSamp <cit.>, subsampled Newton with non-uniform sampling <cit.>, and sketch Newton <cit.> all have a linear-quadratic convergence rate, because they all assume that the Hessian is Lipschitz continuous. In fact, it is well known that the Lipschitz continuity condition on ∇²F(x) is not necessary to achieve a linear or superlinear convergence rate for inexact Newton methods. Third, the unifying framework of Theorem <ref> covers not only stochastic second order methods, but also their deterministic counterparts. For example, letting H^(t) = ∇²F(x^(t)) and using conjugate gradient to obtain p^(t), we recover the famous "Newton-CG" method. In fact, different choices of H^(t) and different ways to calculate p^(t) lead us to different second order methods. §.§ Global Convergence Analysis In the previous analysis, the theory is local, and approximate Newton can achieve a fast convergence rate once the iterates enter a suitable basin of the optimum. In this section, we are going to obtain global convergence results for self-concordant functions. The self-concordance assumption is widely used in the global convergence analysis of Newton methods <cit.>. Note that a closed, convex function F: ℝ^d → ℝ is called self-concordant if (d/dα)∇²F(x + αv)|_{α=0} ≼ 2‖v‖_x ∇²F(x) for all x in the domain of F(x) and all v ∈ ℝ^d, where ‖v‖_x = (v^T ∇²F(x) v)^{1/2} is the local norm. To achieve global convergence, the approximate Newton method should be combined with a line search. In the damped phase, where [∇F(x^(t))]^T p^(t) is large, the line search is applied to guarantee the convergence of the approximate Newton method. Once [∇F(x^(t))]^T p^(t) is sufficiently small, the step size s = 1 keeps approximate Newton converging at a linear rate. The detailed algorithmic description of approximate Newton with backtracking line search is listed in Algorithm <ref>. In the following theorem, we provide the iteration complexity of Algorithm <ref> to achieve an ϵ-suboptimality. Theorem: Assume the objective function F(x) is self-concordant and H^(t) is a positive definite matrix satisfying Eqn. (<ref>) with 0 ≤ ϵ_0 < 1. Let p^(t) be a descent direction satisfying Eqn. (<ref>). The total complexity of the approximate Newton method with backtracking line search (Algorithm <ref>) to achieve an ϵ-suboptimality is at most T = (F(x^(0)) - F(x^*))/η + (2/(1 - ϵ_0 - 2ϵ_1κ^{-1})) log((1 - ϵ_0 - 2ϵ_1κ^{-1})/(12ϵ)), where η is defined as η = αβ(1-ϵ_0)ρ²(1 - ϵ_0 - 2ϵ_1κ^{-1})² / (144 + 12ρ√(1-ϵ_0)(1 - ϵ_0 - 2ϵ_1κ^{-1})), ρ = (1 - ϵ_1κ^{-1}·((1+ϵ_0)/(1-ϵ_0))^{1/2})^{1/2} / ((1+ϵ_0)^{1/2}(1 + ϵ_1κ^{-1}·((1+ϵ_0)/(1-ϵ_0))^{1/2})). In the above theorem, the iteration complexity of approximate Newton with line search still depends on the condition number of the objective function, even though the objective is self-concordant. This dependence on the condition number is caused by the approximation of [H^(t)]^{-1}∇F(x). If ϵ_1 = 0 in Eqn. (<ref>), then we can obtain η = αβ(1-ϵ_0)³ / (144(1+ϵ_0) + 12(1+ϵ_0)^{1/2}(1-ϵ_0)^{3/2}), which is independent of the condition number. Thus, the total complexity is independent of the condition number.
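As an illustration of the damped phase, the sketch below implements one backtracking step of the kind used in Algorithm <ref>. It is our own sketch: the Armijo-style exit condition F(x - s p) ≤ F(x) - αs[∇F(x)]^T p, the constants α and β, and the toy self-concordant objective are our choices, not the paper's reference setup.

```python
import numpy as np

def backtracking_newton_step(F, grad_F, x, p, alpha=0.25, beta=0.5):
    """One damped update x <- x - s p; s is shrunk by beta until the
    exit condition F(x - s p) <= F(x) - alpha * s * grad_F(x)^T p holds."""
    g_dot_p = grad_F(x) @ p
    s = 1.0
    while F(x - s * p) > F(x) - alpha * s * g_dot_p:
        s *= beta
    return x - s * p

# Toy usage on the self-concordant barrier F(x) = -sum log(1 - x_i^2),
# domain (-1, 1)^d; F returns +inf outside so backtracking stays feasible.
F = lambda x: -np.sum(np.log(1.0 - x ** 2)) if np.all(np.abs(x) < 1.0) else np.inf
grad_F = lambda x: 2.0 * x / (1.0 - x ** 2)
hess_F = lambda x: np.diag((2.0 + 2.0 * x ** 2) / (1.0 - x ** 2) ** 2)

x = np.full(4, 0.9)                  # start deep in the damped phase
for _ in range(8):
    p = np.linalg.solve(hess_F(x), grad_F(x))   # here H^(t) is the exact Hessian
    x = backtracking_newton_step(F, grad_F, x, p)
print(x, F(x))                       # x -> 0, the minimizer
```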
§ SKETCH NEWTON METHOD In this section, we use Theorem <ref> to analyze the convergence properties of the sketch Newton method, which utilizes the sketching technique to approximate the Hessian. We mainly focus on the case where the Hessian matrix is of the form ∇²F(x) = [B(x)]^T B(x), where B(x) is an explicitly available n×d matrix. Our result can be easily extended to the case ∇²F(x) = [B(x)]^T B(x) + Q(x), where Q(x) is a positive semidefinite matrix related to the Hessian of a regularizer. The sketch Newton method constructs the approximate Hessian matrix as follows: H^(t) = [S^(t)B(x)]^T S^(t)B(x), where S^(t) ∈ ℝ^{ℓ×n} is a randomized sketching matrix. The approximate Newton method with such a Hessian approximation is referred to as the sketch Newton method. The detailed algorithmic description is listed in Algorithm <ref>. Theorem: Let F(x) satisfy the conditions described in Theorem <ref>. Assume the Hessian matrix is given as in Eqn. (<ref>). Let 0 < δ < 1, 0 < ϵ_0 < 1/2, and 0 ≤ ϵ_1 < 1 be given, and let S ∈ ℝ^{ℓ×n} be an ϵ_0-subspace embedding matrix w.r.t. B(x) with probability at least 1-δ. Then the sketch Newton method (Algorithm <ref>) has the following convergence properties: * There exist a sufficiently small value γ and ν = o(1) such that when ‖x^(t) - x^*‖_M ≤ γ, each iteration satisfies Eqn. (<ref>) with probability at least 1-δ. * If ∇²F(x^(t)) is also Lipschitz continuous and {x^(t)} satisfies Eqn. (<ref>), then each iteration satisfies Eqn. (<ref>) with probability at least 1-δ. * If F(x) is furthermore self-concordant, the iteration complexity of sketch Newton with backtracking line search (Algorithm <ref> with H^(t) constructed as in Eqn. (<ref>)) is upper bounded by Eqn. (<ref>). Theorem <ref> directly provides a bound on the sketched size. Using the leverage score sketching matrix as an example, the sketched size ℓ = 𝒪(d log d/ϵ_0²) is sufficient. We compare our theoretical bound on the sketched size with the ones of <cit.> and <cit.> in Table <ref>. As we can see, our sketched size is much smaller than the other two, especially when the Hessian matrix is ill-conditioned. Theorem <ref> shows that the sketched size ℓ is independent of the condition number of the Hessian matrix ∇²F(x), just as shown in Table <ref>. This explains the phenomenon that, when the Hessian matrix is ill-conditioned, sketch Newton performs well even when the sketched size is only several times d. Furthermore, the iteration complexity of sketch Newton with backtracking line search shares a similar form with the result of <cit.>. In particular, when ϵ_1 = 0, Eqn. (<ref>) reduces to T = (F(x^(0)) - F(x^*))/η + 4 log(1/(24ϵ)), η = αβ(1-ϵ_0)³ / (12((1+ϵ_0) + (1+ϵ_0)^{1/2}(1-ϵ_0)^{3/2})). We can observe that T is independent of the condition number of the objective function. A similar result can be found in Theorem 2 of <cit.>. Theorem <ref> also contains the possibility of achieving an asymptotically superlinear rate by using an iteration-dependent sketching accuracy ϵ_0 = ϵ_0(t). In particular, we present the following corollary. Corollary: Let F(x) satisfy the properties described in Theorem <ref>. Consider the approximate Hessian H^(t) constructed as in Eqn. (<ref>) with the iteration-dependent sketching accuracy ϵ_0(t) = 1/log(1+t) and p^(t) = [H^(t)]^{-1}∇F(x). If the initial point x^(0) is close enough to the optimal point x^*, then the sequence {x^(t)} of the sketch Newton method (Algorithm <ref> with H^(t) constructed as in Eqn. (<ref>)) converges superlinearly.
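As a concrete instance with an explicitly available factor B(x), consider least squares F(x) = (1/2)‖Ax - b‖², where B(x) = A and ∇²F = A^TA. The following minimal sketch of the iteration uses a fresh Gaussian sketch per step; the sizes, seed, and data are hypothetical, and the matrix is deliberately ill-conditioned to mirror the discussion above:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, ell = 5000, 50, 400                       # ell is only a few times d
U = np.linalg.qr(rng.standard_normal((n, d)))[0]
A = U * (1.2 ** -np.arange(d))                  # singular values 1.2^{-i}
b = rng.standard_normal(n)

x = np.zeros(d)
for t in range(15):
    S = rng.standard_normal((ell, n)) / np.sqrt(ell)   # fresh Gaussian sketch
    SA = S @ A
    H = SA.T @ SA                                      # H^(t) = (S B)^T (S B)
    g = A.T @ (A @ x - b)
    x = x - np.linalg.solve(H, g)
    print(t, np.linalg.norm(g))                        # linear decrease
```

Note that only the ℓ×d matrix SA is ever factored, and the observed linear rate does not degrade with the condition number of A^TA.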
§ THE SUBSAMPLED NEWTON METHOD AND VARIANTS In this section, we apply Theorem <ref> to analyze subsampled Newton methods. Instead of assuming that the Hessian can be represented as in Eqn. (<ref>), for subsampled Newton methods we assume that the Hessian is the average of individual Hessians: ∇²F(x) = (1/n)∑_{i=1}^n ∇²f_i(x), ∇²f_i(x) ∈ ℝ^{d×d}. We make the assumption that each f_i(x) and F(x) have the following properties: max_{1≤i≤n} ‖∇²f_i(x)‖ ≤ K < ∞ and λ_min(∇²F(x)) ≥ μ > 0. Accordingly, we can define a new kind of condition number, κ̂ = K/μ. §.§ The Subsampled Newton method The subsampled Newton method is depicted in Algorithm <ref>, and the approximate Hessian is constructed by sampling: H^(t) = (1/|𝒮|)∑_{j∈𝒮} ∇²f_j(x^(t)). We now give its local convergence properties in the following theorem. Theorem: Let F(x) satisfy the properties described in Theorem <ref>. Assume Eqns. (<ref>) and (<ref>) hold, and let 0 < δ < 1, 0 < ϵ_0 < 1/2, and 0 ≤ ϵ_1 < 1 be given. The sample size |𝒮| satisfies |𝒮| ≥ (3K/μ) log(2d/δ)/ϵ_0². The approximate Hessian H^(t) is constructed as in Eqn. (<ref>), and the direction vector p^(t) satisfies Eqn. (<ref>). Then for t = 1, …, T, Algorithm <ref> has the following convergence properties: * There exist a sufficiently small value γ and ν = o(1) such that when ‖x^(t) - x^*‖_M ≤ γ, each iteration satisfies Eqn. (<ref>) with probability at least 1-δ. * If ∇²F(x^(t)) is also Lipschitz continuous and {x^(t)} satisfies Eqn. (<ref>), then each iteration satisfies Eqn. (<ref>) with probability at least 1-δ. * If F(x) is furthermore self-concordant, the iteration complexity of the subsampled Newton method with backtracking line search (Algorithm <ref> with H^(t) constructed as in Eqn. (<ref>)) is upper bounded by Eqn. (<ref>).
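For instance, for logistic regression the per-sample Hessians are rank one, ∇²f_j(x) = w_j(x) a_j a_j^T, so the construction in Eqn. (<ref>) takes a few lines. The following sketch is our own, with synthetic data; it also verifies the (1±ϵ_0) spectral approximation against the full Hessian numerically:

```python
import numpy as np

def subsampled_hessian(A, x, m, rng):
    """H^(t) = (1/|S|) sum_{j in S} grad^2 f_j(x), S uniform with replacement.
    For f_j(x) = log(1 + exp(-b_j <a_j, x>)) each summand is w_j a_j a_j^T
    with w_j = sigmoid(<a_j, x>) (1 - sigmoid(<a_j, x>))."""
    S = rng.integers(0, A.shape[0], size=m)
    As = A[S]
    s = np.exp(-np.logaddexp(0.0, -(As @ x)))   # sigmoid, overflow-safe
    w = s * (1.0 - s)
    return (As.T * w) @ As / m

rng = np.random.default_rng(0)
n, d = 20000, 40
A = rng.standard_normal((n, d))
x = rng.standard_normal(d) / np.sqrt(d)
H = subsampled_hessian(A, x, m=2000, rng=rng)

# Eigenvalues of H^{-1/2} (grad^2 F) H^{-1/2}: all close to 1 means
# (1 - eps_0) H <= grad^2 F <= (1 + eps_0) H.
s = np.exp(-np.logaddexp(0.0, -(A @ x)))
H_full = (A.T * (s * (1.0 - s))) @ A / n
Linv = np.linalg.inv(np.linalg.cholesky(H))
ev = np.linalg.eigvalsh(Linv @ H_full @ Linv.T)
print(ev.min(), ev.max())
```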
As we can see, Algorithm <ref> has almost the same convergence properties as Algorithm <ref>, except for several minor differences. The main difference is the construction of H^(t), which should satisfy Eqn. (<ref>). Algorithm <ref> relies on the assumption that each ∇²f_i(x) is upper bounded (i.e., Eqn. (<ref>) holds), while Algorithm <ref> is built on the structure of the Hessian matrix given in Eqn. (<ref>). §.§ Regularized Subsampled Newton In ill-conditioned cases (i.e., when κ̂ = K/μ is large), the subsampled Newton method in Algorithm <ref> has to take a lot of samples, because the sample size |𝒮| depends linearly on K/μ. To overcome this problem, one resorts to a regularized subsampled Newton method, which adds a regularizer to the original subsampled Hessian: H^(t) = (1/|𝒮|)∑_{j∈𝒮} ∇²f_j(x^(t)) + ξ·I, where ξ > 0 is the regularization parameter. The detailed algorithmic procedure of regularized subsampled Newton is described in Algorithm <ref>. In the following analysis, we prove that adding a regularizer is an effective way to reduce the sample size while keeping convergence in theory. Theorem: Let F(x) satisfy the properties described in Theorem <ref>. Assume Eqns. (<ref>) and (<ref>) hold, and let 0 < δ < 1, 0 ≤ ϵ_1 < 1, and ξ > 0 be given. Assume the sample size |𝒮| satisfies |𝒮| ≥ 18K log(2d/δ)/ξ, and H^(t) is constructed as in Algorithm <ref>. Define ϵ_0 = max((3ξ+μ)/(3ξ+3μ), (L-2ξ)/(2(L+ξ))), which implies that 0 < ϵ_0 < 1. Moreover, the direction vector p^(t) satisfies Eqn. (<ref>). Then Algorithm <ref> has the following convergence properties: * There exist a sufficiently small value γ and ν = o(1) such that when ‖x^(t) - x^*‖_M ≤ γ, each iteration satisfies Eqn. (<ref>) with probability at least 1-δ. * If ∇²F(x^(t)) is also Lipschitz continuous and {x^(t)} satisfies Eqn. (<ref>), then each iteration satisfies Eqn. (<ref>) with probability at least 1-δ. * If F(x) is furthermore self-concordant, the iteration complexity of regularized subsampled Newton with backtracking line search (Algorithm <ref> with H^(t) constructed as in Eqn. (<ref>)) is upper bounded by Eqn. (<ref>). In Theorem <ref>, the parameter ϵ_0 mainly decides the convergence properties of Algorithm <ref>. It is determined by the two terms shown in Eqn. (<ref>). These two terms depict the relationship among the sample size, the regularizer ξ·I, and the convergence rate. We can observe that the sample size |𝒮| = 18K log(2d/δ)/ξ decreases as ξ increases. Hence, Theorem <ref> gives a theoretical guarantee that adding the regularizer ξ·I is an effective approach to reducing the sample size when K/μ is large. Conversely, if we want to sample only a small part of the f_i's, then we should choose a large ξ. Though a large ξ can reduce the sample size, this comes at the expense of a slower convergence rate: as we can see, (3ξ+μ)/(3ξ+3μ) goes to 1 as ξ increases. At the same time, ϵ_1 also has to decrease; otherwise, ϵ_0 + ϵ_1 may exceed 1, which means that Algorithm <ref> will not converge. In fact, the slower convergence rate of the regularized subsampled Newton method arises because the sample size becomes small, so less curvature information is obtained. However, a small sample size implies a low computational cost in each iteration. Therefore, a proper regularizer, balancing the cost of each iteration against the convergence rate, is the key to the regularized subsampled Newton algorithm. §.§ NewSamp <cit.> proposed NewSamp, which is another regularized subsampled Newton method. NewSamp constructs its approximate Hessian as follows: H^(t) = H_𝒮^(t) + U_{∖r}(λ̂_{r+1}^(t) I - Λ̂_{∖r})U_{∖r}^T, where H_𝒮^(t) = (1/|𝒮|)∑_{j∈𝒮} ∇²f_j(x^(t)), and its SVD is H_𝒮^(t) = UΛ̂U^T = U_r Λ̂_r U_r^T + U_{∖r}Λ̂_{∖r}U_{∖r}^T. The detailed algorithm is depicted in Algorithm <ref>. Now, we give a theoretical analysis of the local convergence properties of NewSamp (Algorithm <ref>). Theorem: Let F(x) satisfy the properties described in Theorem <ref>. Assume Eqns. (<ref>) and (<ref>) hold, and let 0 < δ < 1 and the target rank r be given. Let λ_{r+1} be the (r+1)-th eigenvalue of ∇²F(x^(t)). Set the sample size |𝒮| ≥ 18K log(2d/δ)/λ_{r+1}, and define ϵ_0 = max((5λ_{r+1}+μ)/(5λ_{r+1}+3μ), 1/2), which implies 0 < ϵ_0 < 1. Assume the direction vector p^(t) satisfies Eqn. (<ref>). Then for t = 1, …, T, Algorithm <ref> has the following convergence properties: * There exist a sufficiently small value γ and ν = o(1) such that when ‖x^(t) - x^*‖_M ≤ γ, each iteration satisfies Eqn. (<ref>) with probability at least 1-δ. * If ∇²F(x^(t)) is also Lipschitz continuous and {x^(t)} satisfies Eqn. (<ref>), then each iteration satisfies Eqn. (<ref>) with probability at least 1-δ. * If F(x) is furthermore self-concordant, the iteration complexity of NewSamp with backtracking line search (Algorithm <ref> with H^(t) constructed as in Eqn. (<ref>)) is upper bounded by Eqn. (<ref>). The first term on the right-hand side of Eqn. (<ref>) reveals the relationship between the target rank r and the sample size. We can observe that the sample size is linear in 1/λ_{r+1}. Hence, a small r means that a small sample size is sufficient. Conversely, if we want to sample a small portion of the f_i's, then we should choose a small r. Eqn. (<ref>) shows that a small sample size will lead to a poor convergence rate. If we set r = 0, then ϵ_0 becomes 1 - 2μ/(5λ_1+3μ). Consequently, the convergence rate of NewSamp is then almost the same as that of gradient descent. It is worth pointing out that Theorem <ref> explains the empirical result that NewSamp is applicable to training SVMs, for which the Lipschitz continuity condition on ∇²F(x) is not satisfied <cit.>.
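Concretely, the NewSamp Hessian of Eqn. (<ref>) floors the spectrum of the subsampled Hessian at its (r+1)-th eigenvalue, so λ̂_{r+1} plays the role that the regularizer ξ plays in Theorem <ref>. The following sketch of the construction is ours: an eigendecomposition is used in place of the truncated SVD (equivalent for a positive semidefinite H_𝒮), and the data and sizes are hypothetical.

```python
import numpy as np

def newsamp_hessian(H_S, r):
    """NewSamp approximation: keep the top-r eigenpairs of H_S and replace
    every remaining eigenvalue by lambda_{r+1}(H_S), i.e.
    H = H_S + U_{\\r} (lambda_{r+1} I - Lambda_{\\r}) U_{\\r}^T."""
    w, U = np.linalg.eigh(H_S)                  # ascending eigenvalues
    w, U = w[::-1], U[:, ::-1]                  # sort descending
    w_new = np.concatenate([w[:r], np.full(len(w) - r, w[r])])
    return (U * w_new) @ U.T

rng = np.random.default_rng(0)
d, m = 50, 200
G = rng.standard_normal((m, d)) * (1.3 ** -np.arange(d))  # decaying spectrum
H_S = G.T @ G / m
H = newsamp_hessian(H_S, r=10)
print(np.linalg.cond(H_S), np.linalg.cond(H))   # flooring improves conditioning
```

A smaller r floors the spectrum at a larger λ̂_{r+1}, mirroring the tradeoff above: fewer samples suffice, but the effective ϵ_0 moves closer to 1.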
§.§ Comparison with Previous Work We now compare the results of this section with previous work. Though many variants of subsampled Newton methods have been proposed recently, they share a similar proof procedure. Thus, these algorithms have almost the same sample size and convergence rate. For example, the subsampled Newton method <cit.> and NewSamp <cit.> have the same order of sample size and convergence rate (see Table <ref>). Thus, we only compare our results with the recent work of <cit.> and NewSamp <cit.>. The detailed comparison is listed in Table <ref>. First, we compare our analysis of subsampled Newton with that of <cit.>. We can observe that, to achieve the same convergence rate, our result only needs 𝒪(K/μ) samples, in contrast to the 𝒪(K²/μ²) of <cit.>. Hence, our result is substantially tighter than previous work. Next, we compare our theoretical analysis of NewSamp with the one of <cit.>. We can observe that, though NewSamp is a kind of regularized subsampled Newton method, in that analysis it still takes 𝒪(K²/μ²) samples, the same as subsampled Newton. In contrast, our analysis (Theorem <ref>) describes how the regularization reduces the sample size and the convergence speed. This theory matches the empirical study that a small r (implying a large λ_{r+1}) reduces the number of samples and the convergence speed <cit.>. Finally, we compare NewSamp with regularized subsampled Newton (Algorithm <ref>). We mainly focus on the parameter ϵ_0 in Theorems <ref> and <ref>, which mainly determines the convergence properties of Algorithms <ref> and <ref>. Specifically, if we set ξ = λ_{r+1} in Eqn. (<ref>), then ϵ_0 = (3λ_{r+1}+μ)/(3λ_{r+1}+3μ), which is of the same order as the first term on the right-hand side of Eqn. (<ref>). Hence, we can regard NewSamp as a special case of Algorithm <ref>. However, NewSamp provides an approach for choosing the regularization level automatically. Recall that NewSamp includes another parameter, the target rank r, so NewSamp and Algorithm <ref> have the same number of free parameters, and if r is not chosen properly, NewSamp will still have poor performance. Therefore, Algorithm <ref> is theoretically preferred, because NewSamp incurs extra cost to perform SVDs. § EMPIRICAL ANALYSIS In this section, we experimentally validate our theoretical results: the unnecessity of the Lipschitz continuity condition on ∇²F(x), the sketched size of the sketch Newton method, and how regularization affects the sample size and convergence rate of regularized subsampled Newton. §.§ Unnecessity of Lipschitz continuity of Hessian We conduct an experiment on the primal problem for the linear SVM, which can be written as min_x F(x) = (1/2)‖x‖² + (C/2n)∑_{i=1}^n ℓ(b_i, ⟨x, a_i⟩), where (a_i, b_i) denotes the training data, x defines the separating hyperplane, C > 0, and ℓ(·) is the loss function. In our experiment, we choose the hinge-2 loss, defined as ℓ(b, ⟨x, a⟩) = max(0, 1 - b⟨x, a⟩)². Let SV^(t) denote the set of indices of all the support vectors at iteration t, i.e., SV^(t) = {i: b_i⟨x^(t), a_i⟩ < 1}. Then the Hessian matrix of F(x^(t)) can be written as ∇²F(x^(t)) = I + (C/n)∑_{i∈SV^(t)} a_i a_i^T.
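Since the support-vector set SV^(t) changes discontinuously with x, the Hessian above jumps as samples enter or leave the margin, which is exactly the source of the non-Lipschitzness. Below is a sketch of evaluating the gradient and this Hessian and running plain Newton on synthetic data; the setup (seed, sizes, C = 1) is our own.

```python
import numpy as np

def svm_grad_hess(x, A, b, C):
    """Gradient and Hessian of
    F(x) = 0.5 ||x||^2 + (C/2n) sum_i max(0, 1 - b_i <a_i, x>)^2."""
    n, d = A.shape
    margins = b * (A @ x)
    sv = margins < 1.0                          # indicator of SV^(t)
    Asv, bsv = A[sv], b[sv]
    grad = x - (C / n) * Asv.T @ (bsv * (1.0 - margins[sv]))
    hess = np.eye(d) + (C / n) * Asv.T @ Asv    # I + (C/n) sum_{i in SV} a_i a_i^T
    return grad, hess

rng = np.random.default_rng(0)
n, d, C = 5000, 30, 1.0
A = rng.standard_normal((n, d))
b = np.sign(A @ rng.standard_normal(d) + rng.standard_normal(n))

x = np.zeros(d)
for _ in range(10):                             # plain Newton iteration
    g, H = svm_grad_hess(x, A, b, C)
    x -= np.linalg.solve(H, g)
print("||grad F|| =", np.linalg.norm(g))
```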
From this expression, we can see that ∇²F(x) is not Lipschitz continuous. Without loss of generality, we use the subsampled Newton method (Algorithm <ref>) in our experiment, sampling 5% of the support vectors in each iteration. We run experiments on three datasets, described in detail in Table <ref>, and report the results in Figure <ref>. From Figure <ref>, we can see that subsampled Newton converges linearly and the Newton method converges superlinearly. This matches our theory that the Lipschitz continuity of ∇²F(x) is not necessary to achieve a linear or superlinear convergence rate. §.§ Sketched Size of Sketch Newton We now validate our theoretical result that the sketched size needed by sketch Newton is independent of the condition number of the Hessian matrix. To control the condition number of the Hessian conveniently, we conduct the experiment on least squares regression, defined as min_x (1/2)‖Ax - b‖². In each iteration, the Hessian matrix is A^TA. In our experiment, A is a 10000×54 matrix, and we set the singular values σ_i of A to σ_i = 1.2^{-i}. Then the condition number of A is κ(A) = 1.2^{54} = 1.8741×10^4. We use different sketching matrices in sketch Newton (Algorithm <ref>) and set different values of the sketched size ℓ. We report our empirical results in Figure <ref>. From Figure <ref>, we can see that sketch Newton performs well when the sketched size ℓ is several times d, for all the different sketching matrices. Moreover, the corresponding algorithms converge linearly. This matches our theory that the sketched size needed to achieve a linear convergence rate is independent of the condition number of the Hessian matrix. In contrast, the theoretical result of <cit.> requires a sketched size of ℓ = d·κ(A) = 1.02×10^6, which is bigger than n = 10^4. §.§ Sample Size of Regularized Subsampled Newton We again choose the least squares regression defined in Eqn. (<ref>) to validate the theory that adding a regularizer is an effective approach to reducing the sample size of subsampled Newton while keeping convergence. Let A ∈ ℝ^{n×d} with n = 8000 and d = 5000. Sketch Newton cannot be used in this case, because n and d are close to each other. In our experiment, we set different sample sizes |𝒮|. For each |𝒮|, we choose different regularizers α and different target ranks r. We report our results in Figures <ref> and <ref>. As we can see, if the sample size |𝒮| is small, then we should choose a large α in Algorithm <ref>; otherwise, the algorithm will diverge. However, if the regularizer α is too large, then the algorithm converges slowly. Moreover, increasing the sample size and choosing a proper regularizer improve the convergence behavior markedly: when |𝒮| = 600, it needs only about 1200 iterations to obtain a precise solution, while it needs about 8000 iterations when |𝒮| = 100. Similarly, if the sample size |𝒮| is small, then we should choose a small target rank in NewSamp; otherwise NewSamp may diverge. Also, if the target rank is not chosen properly, then NewSamp has poor convergence properties. Furthermore, comparing Figures <ref> and <ref>, we can see that the two algorithms have similar convergence properties. This validates the theoretical result that NewSamp provides a method for choosing α automatically. Our empirical analysis matches the theoretical analysis in Subsection <ref> very well.
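A test matrix with the prescribed spectrum σ_i = 1.2^{-i} is easy to generate, e.g. as follows (a sketch of ours with arbitrary seeds; only the singular values matter for the conditioning):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 10000, 54
U = np.linalg.qr(rng.standard_normal((n, d)))[0]   # orthonormal columns
V = np.linalg.qr(rng.standard_normal((d, d)))[0]
sigma = 1.2 ** -np.arange(1, d + 1)                # sigma_i = 1.2^{-i}
A = U @ np.diag(sigma) @ V.T
print(np.linalg.cond(A))                           # ~1.2^{53}, as prescribed
```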
§ CONCLUSION In this paper, we have proposed a framework to analyze both local and global convergence properties of second order methods, including stochastic and deterministic versions. This framework reveals some important convergence properties of the subsampled Newton method and sketch Newton method, which were unknown before. Most importantly, our analysis lays the theoretical foundation of several important stochastic second order methods. We believe that this framework may also provide useful insights for developing new subsampled Newton-type algorithms, and we would like to address this issue in future work. § SOME IMPORTANT LEMMAS In this section, we give several important lemmas, which will be used in the proofs of the theorems of this paper. Lemma: If A and B are d×d symmetric positive definite matrices and (1-ϵ_0)B ≼ A ≼ (1+ϵ_0)B, where 0 < ϵ_0 < 1, then we have ‖A^{1/2}B^{-1}A^{1/2} - I‖ ≤ ϵ_0, where I is the identity matrix. Proof: Because A ≼ (1+ϵ_0)B, we have z^T[A - (1+ϵ_0)B]z ≤ 0 for any nonzero z ∈ ℝ^d. This implies z^TAz/z^TBz ≤ 1+ϵ_0 for any z ≠ 0. Subsequently, λ_max(B^{-1}A) = λ_max(B^{-1/2}AB^{-1/2}) = max_{u≠0} u^TB^{-1/2}AB^{-1/2}u / u^Tu = max_{z≠0} z^TAz/z^TBz ≤ 1+ϵ_0, where the last equality is obtained by setting z = B^{-1/2}u. Similarly, we have λ_min(B^{-1}A) ≥ 1-ϵ_0. Since B^{-1}A and A^{1/2}B^{-1}A^{1/2} are similar, the eigenvalues of A^{1/2}B^{-1}A^{1/2} all lie between 1-ϵ_0 and 1+ϵ_0. Therefore, we have ‖A^{1/2}B^{-1}A^{1/2} - I‖ ≤ ϵ_0. Lemma (matrix Chernoff): Let X_1, X_2, …, X_k be independent, random, symmetric, real matrices of size d×d with 0 ≼ X_i ≼ L·I, where I is the d×d identity matrix. Let Y = ∑_{i=1}^k X_i, μ_min = λ_min(𝔼[Y]) and μ_max = λ_max(𝔼[Y]). Then we have ℙ(λ_min(Y) ≤ (1-ϵ)μ_min) ≤ d·e^{-ϵ²μ_min/(2L)} and ℙ(λ_max(Y) ≥ (1+ϵ)μ_max) ≤ d·e^{-ϵ²μ_max/(3L)}. § PROOFS OF THEOREM <REF> The proof of Theorem <ref> consists of the following lemmas. First, by Lemma <ref>, we upper bound ‖x^(t+1) - x^*‖_M by three terms. The first term dominates the convergence behavior. The second term depicts how the approximate descent direction affects the convergence. The third term is a high order term. In Lemma <ref>, we prove that the first term on the right-hand side of Eqn. (<ref>) is upper bounded by ϵ_0‖x^(t) - x^*‖ plus a high order term. Lemma <ref> shows that the second term affects the convergence rate by at most ϵ_1. In Lemma <ref>, we complete the convergence analysis when the Hessian is continuous near the optimal point but not Lipschitz continuous. If the Hessian is Lipschitz continuous, Lemma <ref> provides the detailed convergence analysis. Lemma: Letting the sequence {x^(t)} be updated as in Algorithm <ref>, it satisfies ‖x^(t+1) - x^*‖_M ≤ ‖I - M^{1/2}[H^(t)]^{-1}M^{1/2}‖·‖x^(t) - x^*‖_M + ‖[H^(t)]^{-1}∇F(x^(t)) - p^(t)‖_M + ‖M^{1/2}[H^(t)]^{-1}(∇F(x^(t)) - ∇²F(x^*)(x^(t) - x^*))‖, where M = ∇²F(x^*). Proof: By the update procedure of x^(t), we have x^(t+1) - x^* = x^(t) - x^* - p^(t) = x^(t) - x^* - [H^(t)]^{-1}∇F(x^(t)) + [H^(t)]^{-1}∇F(x^(t)) - p^(t) = x^(t) - x^* + [H^(t)]^{-1}∇F(x^(t)) - p^(t) - [H^(t)]^{-1}(∇²F(x^*)(x^(t) - x^*) + ∇F(x^(t)) - ∇²F(x^*)(x^(t) - x^*)). Denoting M = ∇²F(x^*) and multiplying both sides of the above equality by M^{1/2} on the left, we can obtain M^{1/2}(x^(t+1) - x^*) = M^{1/2}(x^(t) - x^*) - M^{1/2}[H^(t)]^{-1}M^{1/2}·M^{1/2}(x^(t) - x^*) + M^{1/2}([H^(t)]^{-1}∇F(x^(t)) - p^(t)) - M^{1/2}[H^(t)]^{-1}(∇F(x^(t)) - ∇²F(x^*)(x^(t) - x^*)). Thus, we obtain ‖x^(t+1) - x^*‖_M ≤ ‖I - M^{1/2}[H^(t)]^{-1}M^{1/2}‖·‖x^(t) - x^*‖_M + ‖[H^(t)]^{-1}∇F(x^(t)) - p^(t)‖_M + ‖M^{1/2}[H^(t)]^{-1}(∇F(x^(t)) - ∇²F(x^*)(x^(t) - x^*))‖. Lemma: Assume that the objective function F(x) satisfies Assumptions 1 and 2. Let M denote ∇²F(x^*), and let the approximate Hessian H^(t) satisfy Condition (<ref>).
Then ifΔ is sufficient small with Δ = ∇^2F(x^*) - ∇^2F(x^(t)), we have I- M^1/2[H^(t)]^-1M^1/2≤ϵ_0+μ^-1/2Δ^1/2(1+ϵ_0)(2+μ^-1/2Δ^1/2). If Δ is sufficient small (which implies that ∇^2F(x^*) and ∇^2F(x^(t)) are close enough), then we have λ_max([∇^2F(x^*)]^1/2[H^(t)]^-1[∇^2F(x^*)]^1/2) = 1+ϵ_0' λ_min([∇^2F(x^*)]^1/2[H^(t)]^-1[∇^2F(x^*)]^1/2) = 1 - ϵ_0” with 0< ϵ_0'<1,0<ϵ_0”<1. Now we consider the case I- [∇^2F(x^*)]^1/2[H^(t)]^-1[∇^2F(x^*)]^1/2 = ϵ_0' which implies ϵ_0'≥ϵ_0”. By the properties of eigenvalue and singular value of matrices, we have λ^2_max(M^1/2[H^(t)]^-1M^1/2) = λ^2_max([H^(t)]^-1M) ≤σ^2_1([H^(t)]^-1M)= λ_max(M[H^(t)]^-2M), where the inequality follows from the fact that the largest eigenvalue is no larger than the largest singular value. Thus, we obtain that ϵ_0' = λ_max(M^1/2[H^(t)]^-1M^1/2) ≤λ^1/2_max(M[H^(t)]^-2M) Since Eqn. (<ref>) holds, then we have I- M^1/2[H^(t)]^-1M^1/2≤I- (M[H^(t)]^-2M )^1/2 Next, we will prove that Eqn. (<ref>) still holds when ϵ_0'<ϵ_0” which will lead to I- [∇^2F(x^*)]^1/2[H^(t)]^-1[∇^2F(x^*)]^1/2 = ϵ_0”. By the properties of eigenvalue and singular value of matrices, we have λ^2_min(M^1/2[H^(t)]^-1M^1/2) = λ^2_min([H^(t)]^-1M) ≥σ^2_min([H^(t)]^-1M)= λ_min(M[H^(t)]^-2M), where the inequality follows from the fact that the smallest eigenvalue is no smaller than the smallest singular value. This implies that ϵ_0” = λ_min(M^1/2[H^(t)]^-1M^1/2) ≥λ^1/2_min(M[H^(t)]^-2M) which implies that Eqn. (<ref>) holds. Next, we will upper bound the value of right hand of Eqn. (<ref>). First, we consider the case that λ_max(M[H^(t)]^-2M )^1/2 - 1 ≥ 1 - λ_min(M[H^(t)]^-2M )^1/2, which implies that I- (M[H^(t)]^-2M )^1/2 = λ_max(M[H^(t)]^-2M )^1/2 - 1. Furthermore, we have λ_max(M[H^(t)]^-2M )^1/2 - 1= M[H^(t)]^-2M^1/2 - 1 = M[H^(t)]^-2M^1/2 - ^1/2 + ^1/2 - 1 ≤ ϵ_0+ M[H^(t)]^-2M^1/2 - ^1/2 where we denote = ∇^2 F(x^(t)) [H^(t)]^-2∇^2 F(x^(t)), and the last inequality follows the condition (<ref>). Moreover, we have M[H^(t)]^-2M^1/2 - ^1/2 =+ Δ[H^(t)]^-2∇^2 F(x^(t)) +[H^(t)]^-2∇^2 F(x^(t))Δ +Δ [H^(t)]^-2Δ^1/2 - ^1/2 ≤ ^1/2 + Δ[H^(t)]^-2∇^2 F(x^(t)) +[H^(t)]^-2∇^2 F(x^(t))Δ +Δ [H^(t)]^-2Δ^1/2 - ^1/2 ≤ 2Δ^1/2[H^(t)]^-1/2·[H^(t)]^-1∇^2 F(x^(t))^1/2 + [H^(t)]^-1·Δ. By Condition (<ref>), we can obtain that [H^(t)]^-1≤ (1+ϵ_0)[∇^2 F(x^(t))]^-1≤ (1+ϵ_0)μ^-1 and [H^(t)]^-1∇^2 F(x^(t)) = λ^1/2_max(∇^2 F(x^(t)) [H^(t)]^-2∇^2 F(x^(t)))≤ (1+ϵ_0). Thus, we can obtain that M[H^(t)]^-2M^1/2 - ^1/2≤μ^-1/2Δ^1/2(1+ϵ_0)(2+μ^-1/2Δ^1/2). Now we consider the case that Eqn. (<ref>) does not hold which implies that I- (M[H^(t)]^-2M )^1/2 = 1 - λ_min(M[H^(t)]^-2M )^1/2. Furthermore, we have 1 - λ_min(M[H^(t)]^-2M )^1/2 = 1 - λ^1/2_min() + λ^1/2_min() - λ^1/2_min(M[H^(t)]^-2M) ≤ ϵ_0 + λ^1/2_min() - λ^1/2_min(M[H^(t)]^-2M), where the last inequality follows from condition (<ref>). Since Δ is sufficient small, then we have that λ^1/2_min() - λ^1/2_min(M[H^(t)]^-2M) = λ^1/2_min() - λ^1/2_min( + Δ[H^(t)]^-2∇^2 F(x^(t)) +[H^(t)]^-2∇^2 F(x^(t))Δ +Δ [H^(t)]^-2Δ) ≤ λ^1/2_min() - λ^1/2_min () + Δ[H^(t)]^-2∇^2 F(x^(t)) +[H^(t)]^-2∇^2 F(x^(t))Δ +Δ [H^(t)]^-2Δ^1/2 ≤ μ^-1/2Δ^1/2(1+ϵ_0)(2+μ^-1/2Δ^1/2), wherethe first inequality is because of λ_min(A+B) = σ_min(A+B) ≥σ_min(A) - B and the fact that (a -b)^1/2≥ a^1/2 - b^1/2 if a≥ b and a,b≥ 0. Therefore, we can obtain that I- M^1/2[H^(t)]^-1M^1/2≤ϵ_0+μ^-1/2Δ^1/2(1+ϵ_0)(2+μ^-1/2Δ^1/2). Let p^(t) satisfy Condition (<ref>) and F(x) satisfy Assumption 1 and 2,then we have [H^(t)]^-1∇ F(x^(t)) - p^(t)_M≤ϵ_1 x^(t) - x^*_M. 
[H^(t)]^-1∇ F(x^(t)) - p^(t)_M = M^1/2 [H^(t)]^-1(∇ F(x^(t)) -H^(t) p^(t)) (<ref>)≤ ϵ_1(1+ϵ_0)^-1κ^-3/2M^1/2[H^(t)]^-1∇ F(x^(t)) (<ref>)≤ ϵ_1 κ^-3/2M^1/2[∇^2 F(x^(t))]^-1∇ F(x^(t)) ≤ ϵ_1κ^-1/2M^1/2x^(t) - x^* ≤ ϵ_1x^(t) - x^*_M, where the last two inequalities follow from the assumptions that F(x) is L-smooth and μ-strongly convex. There exists a sufficient small value γ, ν = o(1), such that when x^(t) - x^*_M≤γ, the sequence {x^(t)} of Algorithm <ref> satisfies x^(t+1) - x^*_M ≤(ϵ_0 + ϵ_1 + 2νμ^-1+2(2ν^1/2μ^-1/2 + νμ^-1)(νμ^-1+1))x^(t) - x^*_M. Because ∇^2 F(x) is continuous around x^*, then existing a sufficient small value γ such that if x^(t) - x^*_M ≤γ, then it holds that <cit.> ∇^2 F(x^*) - ∇^2 F(x^(t))≤ν, and ∇ F(x^(t)) - ∇ F(x^*) - ∇^2 F(x^*)(x^(t) - x^*)_M ≤νx^(t) - x^*_M. By Lemma <ref>, we have M^1/2[H^(t)]^-1M^1/2≤1+ϵ_0+μ^-1/2Δ^1/2(1+ϵ_0)(2+μ^-1/2Δ^1/2) ≤ 2+2μ^-1/2Δ^1/2(2+μ^-1/2Δ^1/2) (<ref>)≤ 2+2(2ν^1/2μ^-1/2 + νμ^-1). Combining with Lemma <ref>, <ref> and <ref>, we can obtain that x^(t+1) - x^*_M ≤ (ϵ_0 + ϵ_1 + 4ν^1/2μ^-1/2 + 2νμ^-1)x^(t) - x^*_M +M^1/2[H^(t)]^-1(∇ F(x^(t)) - ∇^2 F(x^*) (x^(t) - x^*) )(<ref>)≤ (ϵ_0 + ϵ_1 + 4ν^1/2μ^-1/2 + 2νμ^-1)x^(t) - x^*_M + νM^-1M^1/2[H^(t)]^-1M^1/2x^(t) - x^*_M ≤ (ϵ_0 + ϵ_1 + 2νμ^-1+2(2ν^1/2μ^-1/2 + νμ^-1)(νμ^-1+1))x^(t) - x^*_M. From above equation, we can observe that if ϵ_0+ϵ_1 < 1 and ν is sufficiently small which can be guaranteed by choosing proper γ, then we have x^(t+1) - x^*_M ≤x^(t) - x^*_M ≤γ. Let the Hessian of F(x) be -Lipschitz continuous and the x^(t) satisfy x^(t) - x^*_M ≤μ^3/2^-1. Then the sequence {x^(t)} of Algorithm <ref> satisfies x^(t+1) - x^*_M ≤(ϵ_0+ϵ_1) x^(t) - x^*_M+ 7 μ^-3/4^1/2x^(t) - x^*_M^3/2. By Taylor's expansion at x^*, we have ∇ F(x^(t)) - ∇^2 F(x^*) (x^(t) - x^*) = ∫_0^1∇^2 F(x^* + s(x^(t) - x^*)) - ∇^2 F(x^*)ds ·(x^(t) - x^*). Thus, we can obtain that M^1/2[H^(t)]^-1(∇ F(x^(t)) - ∇^2 F(x^*) (x^(t) - x^*) ) = M^1/2[H^(t)]^-1M^1/2∫_0^1 M^-1/2(∇^2 F(x^* + s(x^(t) - x^*)) - ∇^2 F(x^*))M^-1/2ds · M^1/2(x^(t) - x^*) ≤ M^1/2[H^(t)]^-1M^1/2_T_1·∫_1^1(M^-1/2(∇^2 F(x^* + s(x^(t) - x^*))) M^-1/2 - I)ds_T_2·x^(t) - x^*_M . Next, we will bound the value of T_1 and T_2. By Lemma <ref>, we have M^1/2[H^(t)]^-1M^1/2≤ 2+2μ^-1/2Δ^1/2(2+μ^-1/2Δ^1/2). with Δ = ∇^2F(x^*) - ∇^2F(x^(t)). By the assumption that ∇^2F(x) is -Lipschitz continuous, then we have μ^-1/2Δ^1/2(2+ μ^-1/2Δ^1/2) ≤ μ^-1/2^1/2x^(t) - x^*^1/2(2 + μ^-1/2^1/2x^(t) - x^*^1/2) ≤ ^1/2μ^-3/4x^(t) - x^*_M^1/2(2 + μ^-3/4^1/2x^(t) - x^*_M^1/2)≤ 3 μ^-3/4^1/2x^(t) - x^*_M^1/2,≤3, where the last two inequalities follow from the condition x^(t) - x^*_M ≤μ^3/2^-1. Therefore, we can obtain that T_1 ≤ 2+2μ^-1/2Δ^1/2(2+μ^-1/2Δ^1/2) ≤ 8. Let us represent that∇^2 F(x^* + s(x^(t) - x^*)) = M + Δ', then we have T_2 = ∫_0^1(M^-1/2(M + Δ') M^-1/2 - I)ds = ∫_0^1(M^-1/2Δ'M^-1/2)ds ≤ M^-1∫_0^1Δ' ds ≤ μ^-1∫_0^1s(x^(t) - x^*) ds ≤ μ^-3/2/2x^(t) - x^*_M. Therefore, we have M^1/2[H^(t)]^-1(∇ F(x^(t)) - ∇^2 F(x^*) (x^(t) - x^*) )≤T_1 · T_2 x^(t) - x^*_M (<ref>)≤ 8 ·μ^-3/2/2x^(t) - x^*_M ≤4 μ^-3/2x^(t) - x^*_M. Combining with Lemma <ref>, <ref> and <ref>, we can obtain that x^(t+1) - x^*_M ≤ (ϵ_0+ϵ_1 +2μ^-1/2Δ^1/2(2+μ^-1/2Δ^1/2))x^(t) - x^*_M+4 μ^-3/2x^(t) - x^*_Mx^(t) - x^*_M^2 (<ref>)≤ (ϵ_0+ϵ_1) x^(t) - x^*_M + 3 μ^-3/4^1/2x^(t) - x^*_M^3/2+4 μ^-3/2x^(t) - x^*_M^2 ≤ (ϵ_0+ϵ_1) x^(t) - x^*_M+ 7 μ^-3/4^1/2x^(t) - x^*_M^3/2, where the last two inequality follows from the condition x^(t) - x^*_M ≤μ^3/2^-1. 
§ PROOF OF THEOREM <REF> For a self-concordant function F(x), if two points x,y satisfy x-y_x <1, where v_x = [∇^2 F(x)]^-1/2 v, we have some useful inequalities: * Hessian bound: (1 - x-y_x)^2 ∇^2 F(y) ≼∇^2F(x) ≼1/(1 - x-y_x)^2∇^2 F(y) * Function value bound: ζ(y-x_x) ≤ F(y) - F(x) - ∇ F(x)^T (y-x) ≤ζ^*(y-x_x), where ζ(α) = α - log(1+α) and ζ^*(α) = -α - log(1 - α). This section, we will prove the convergence rate of damped approximate Newton method.First, we will show the case that V(x) is smaller than a threshold which is mainly determined by how well the Hessian is approximated.In this case, thestep size s = 1 will satisfy the exit condition of line search. Then, we will provide the convergenceanalysis when V(x) is larger than the threshold where the step size s should be chosen by the line search.Before proving the convergence analysis,we first define some new notation and clarify their relation. Let us denoteV(x^(t)) =[∇^2 F(x^(t))]^-1/2∇ F(x^(t)),(x^(t)) = [H^(t)]^-1/2∇ F(x^(t)),and (x^(t)) = (∇^T F(x^(t)) p^(t))^1/2. Let the approximate Hessian satisfy Eqn. (<ref>) andthe descent direction p^(t) satisfy Eqn. (<ref>). Then it holds that ^2(x^(t)) ≥(1 - ϵ_1κ^-1·(1+ϵ_0/1-ϵ_0)^1/2) ·^2(x^(t)), and p^(t)^2_x^(t)≤ (1+ϵ_0) (1 + ϵ_1κ^-1·(1+ϵ_0/1-ϵ_0)^1/2)^2 ·^2(x^(t)). First, we have ∇^T F(x^(t)) p^(t) = ∇^T F(x^(t)) [H^(t)]^-1∇ F(x^(t)) + ∇^T F(x^(t))[H^(t)]^-1( [H^(t)]p^(t) - ∇ F(x^(t))) (<ref>)≥ ∇^T F(x^(t)) [H^(t)]^-1∇ F(x^(t)) - (∇^T F(x^(t)) [H^(t)]^-1∇ F(x^(t)) )^1/2H^-1/2κ^-3/2ϵ_1∇ F(x^(t)) ≥ ∇^T F(x^(t)) [H^(t)]^-1∇ F(x^(t)) - κ^-3/2ϵ_1 ∇^T F(x^(t)) [H^(t)]^-1∇ F(x^(t)) H^-1/2H^1/2 (<ref>)≥ (1 - ϵ_1κ^-1·(1+ϵ_0/1-ϵ_0)^1/2) ·^2(x^(t)). Similarly, we can obtain that ∇^T F(x^(t)) p^(t)≤(1 + ϵ_1κ^-1·(1+ϵ_0/1-ϵ_0)^1/2) ·^2(x^(t)). By the condition (<ref>), we can obtain that p^(t)_x^(t)^2 ≤ (1+ϵ_0) [p^(t)]^T [H^(t)] p^(t). Furthermore, we have [p^(t)]^T [H^(t)] p^(t) = [p^(t)]^T (∇ F(x^(t)) + [H^(t)] p^(t) - ∇ F(x^(t))) ≤ [p^(t)]^T ∇ F(x^(t)) + p^(t)[H^(t)] p^(t) - ∇ F(x^(t)) (<ref>)≤ [p^(t)]^T ∇ F(x^(t)) + ϵ_1κ^-3/2p^(t)∇ F(x^(t)). Furthermore, we have p^(t)≤ p^(t) - [H^(t)]^-1∇ F(x^(t)) + [H^(t)]^-1∇ F(x^(t)) (<ref>)≤ ϵ_1κ^-3/2[H^(t)]^-1∇ F(x^(t)) + [H^(t)]^-1/2[H^(t)]^-1/2∇ F(x^(t)) ≤ ( ϵ_1κ^-3/2[H^(t)]^-1[H^(t)]^1/2 + [H^(t)]^-1/2) [H^(t)]^-1/2∇ F(x^(t)) ≤ ( 1 + ϵ_1κ^-1·(1+ϵ_0/1-ϵ_0)^1/2) [H^(t)]^-1/2[H^(t)]^-1/2∇ F(x^(t)) and ∇ F(x^(t))≤[H^(t)]^1/2[H^(t)]^-1/2∇ F(x^(t)). Thus, we can obtain that p^(t)∇ F(x^(t))≤ ( 1 + ϵ_1κ^-1·(1+ϵ_0/1-ϵ_0)^1/2) [H^(t)]^-1/2[H^(t)]^1/2[H^(t)]^-1/2∇ F(x^(t))^2 ≤ κ^1/2(1+ϵ_0/1-ϵ_0)^1/2( 1 + ϵ_1κ^-1·(1+ϵ_0/1-ϵ_0)^1/2) ^2(x^(t)). Therefore, we can obtain that [p^(t)]^T [H^(t)] p^(t)≤ [p^(t)]^T ∇ F(x^(t)) + ϵ_1κ^-3/2p^(t)∇ F(x^(t)) ≤ [p^(t)]^T ∇ F(x^(t)) + ϵ_1κ^-1(1+ϵ_0/1-ϵ_0)^1/2( 1 + ϵ_1κ^-1·(1+ϵ_0/1-ϵ_0)^1/2) ^2(x^(t)) (<ref>)≤ (1 + ϵ_1κ^-1·(1+ϵ_0/1-ϵ_0)^1/2) ^2 ^2(x^(t)). Combining Eqn. (<ref>), we can obtain p^(t)^2_x^(t)≤ (1+ϵ_0) (1 + ϵ_1κ^-1·(1+ϵ_0/1-ϵ_0)^1/2)^2 ·^2(x^(t)).Now, we begin to prove the case that V(x^(t)) ≤1-ϵ_0 - 2ϵ_1κ^-1/12 and the step size s = 1 is sufficient. Let the descent direction p^(t) satisfy Eqn. (<ref>) and V(x^(t)) satisfy V(x^(t)) ≤1-ϵ_0 - 2ϵ_1κ^-1/12. Then the approximate Newton with backtrack line search (Algorithm <ref>) has the following convergence property V(x^(t+1)) ≤1+ϵ_0+2ϵ_1κ^-1/2 V(x^(t)). 
Then we have V(x^(t+1))= [∇^2F(x^(t+1))]^-1/2∇ F(x^(t+1)) (<ref>)≤ 1/1 - p^(t)_x[∇^2F(x^(t))]^-1/2∇ F(x^(t+1)) By Taylor's expansion of ∇ F(x^(t+1)) at point x^(t), we have [∇^2F(x^(t))]^-1/2∇ F(x^(t+1)) =[∇^2F(x^(t))]^-1/2(∇ F(x^(t))+∇^2 F(x^(t))(-p^(t)) + ∫_0^1 [∇^2 F(x^(t) + sp^(t)) - ∇^2 F(x^(t))](-p^(t))ds) ≤ (I - [∇^2F(x^(t))]^1/2[H^(t)]^-1[∇^2F(x^(t))]^1/2)[∇^2F(x^(t))]^-1/2∇ F(x^(t))_T_1+[∇^2F(x^(t))]^1/2[H^(t)]^-1[∇^2F(x^(t))]^1/2·[∇^2F(x^(t))]^-1/2·∇ F(x^(t)) - H^(t) p^(t)_T_2+∫_0^1 ([∇^2F(x^(t))]^-1/2∇^2 F(x^(t) - sp^(t)) [∇^2F(x^(t))]^-1/2 - I)ds · [∇^2F(x^(t))]^1/2 p^(t)_T_3 We are going to bound the above terms. First, by the assumption (<ref>), we have I - [∇^2F(x^(t))]^1/2[H^(t)]^-1[∇^2F(x^(t))]^1/2≤ϵ_0. Combining the definition of V(x), we can obtain T_1 ≤ I - [∇^2F(x^(t))]^1/2[H^(t)]^-1[∇^2F(x^(t))]^1/2·[∇^2F(x^(t))]^-1/2∇ F(x^(t)) ≤ ϵ_0 V(x^(t)). Also by the condition (<ref>), we have [∇^2F(x^(t))]^1/2[H^(t)]^-1[∇^2F(x^(t))]^1/2≤ (1+ϵ_0). Combining the condition (<ref>) and the definition of V^(t), we can obtain that T_2 ≤ (1+ϵ_0) μ^-1/2ϵ_1/κ^3/2∇ F(x^(t))≤(1+ϵ_0)ϵ_1/κ V(x^(t)) ≤2ϵ_1/κ V(x^(t)). We also have T_3 ≤ ∫_0^1 ([∇^2F(x^(t))]^-1/2∇^2 F(x^(t) - sp^(t)) [∇^2F(x^(t))]^-1/2 - I)ds ·p^(t)_x (<ref>)≤ |∫_0^1(1/(1 - sp^(t)_x)^2 - 1) ds| ·p^(t)_x = p^(t)_x/1-p^(t)_x·p^(t)_x. Next, we will bound the value of p^(t)_x. We have p^(t)_x = [∇^2 F(x^(t))]^1/2 p^(t) = [∇^2 F(x^(t))]^1/2 [H^(t)]^-1∇ F(x^(t)) - [∇^2 F(x^(t))]^1/2 [H^(t)]^-1( ∇ F(x^(t)) - H^(t) p^(t)) ≤ [∇^2 F(x^(t))]^1/2 [H^(t)]^-1 [∇^2 F(x^(t))]^1/2· [∇^2 F(x^(t))]^-1/2∇ F(x^(t))+ [∇^2F(x^(t))]^1/2[H^(t)]^-1[∇^2F(x^(t))]^1/2·[∇^2F(x^(t))]^-1/2·∇ F(x^(t)) - H^(t) p^(t) = [∇^2 F(x^(t))]^1/2 [H^(t)]^-1 [∇^2 F(x^(t))]^1/2· [∇^2 F(x^(t))]^-1/2∇ F(x^(t)) + T_2 ≤ (1+ϵ_0) V(x^(t)) + 2ϵ_1/κ V(x^(t)). Combining above results, we can obtain that V(x^(t+1)) ≤ 1/1 - p^(t)_x (T_1 + T_2+T_3) ≤ (ϵ_0+2ϵ_1κ^-1) V(x^(t))/1 - (1+ϵ_0+2ϵ_1κ^-1) V(x^(t)) + (1+ϵ_0+2ϵ_1κ^-1)^2 V^2(x^(t))/(1-(1+ϵ_0+2ϵ_1κ^-1)V(x^(t)))^2 If V(x^(t)) satisfies that V(x^(t)) ≤ 1 - (ϵ_0+2ϵ_1κ^-1)^2/(1+ϵ_0+2ϵ_1κ^-1)^2(2+ϵ_0 +2ϵ_1κ^-1+ √((2+ϵ_0+2ϵ_1κ^-1)^2 - 1 +(ϵ_0+ 2ϵ_1κ^-1)^2)) ≤ 1-ϵ_0 - 2ϵ_1κ^-1/12, we have V(x^(t+1)) ≤1+ϵ_0+2ϵ_1κ^-1/2 V(x^(t)).Now we begin to analyze the phase that line search should be applied to find a step size s<1.This phase is commonly commonly referred as damped phase. Let the approximate Hessian satisfy Eqn. (<ref>) andthe descent direction p^(t) satisfy Eqn. (<ref>). If it holds that (x) ≥√((1-ϵ_0)) (1-ϵ_0 - 2ϵ_1κ^-1) /12, then Algorithm <ref> has the following convergence property F(x^(t+1)) ≤ F(x^(t)) -αβ·ρ^2 ^2(x^(t))/1+ρ(x^(t)), where ρ is defined as ρ =(1-φ)^1/2/(1+ϵ_0)^1/2 (1+φ), φ = ϵ_1κ^-1·(1+ϵ_0/1-ϵ_0)^1/2. By the update rule, we can obtain that F(x^(t+1)) (<ref>)≤F(x^(t)) - s ∇ F(x^(t))^Tp^(t) + ζ^*(s p^(t)_x^(t)) = F(x^(t)) - s ^2(x^(t)) - s p^(t)_x^(t) - log(1 - s p^(t)_x^(t))), with 0≤ s < 1/(x^(t)). Letting us define ŝ as ŝ = ^2(x^(t))/(^2(x^(t)) + p^(t)_x^(t)) p^(t)_x^(t). We can use this bound to show the backtracking line search always results in a step size s ≥βŝ. Furthermore, we can obtain that F(x^(t+1)) ≤ F(x^(t)) - ^2(x^(t))/p^(t)_x^(t) - log(p^(t)_x^(t)/^2(x^(t)) + p^(t)_x^(t)) = F(x^(t)) -^2(x^(t))/p^(t)_x^(t) + log(1 +^2(x^(t))/p^(t)_x^(t)) ≤ F(x^(t)) - (^2(x^(t))/p^(t)_x^(t)))^2/2(1+^2(x^(t))/p^(t)_x^(t)), = F(x^(t)) - 1/2·ŝ^2(x^(t)) ≤F(x^(t)) -α·ŝ^2(x^(t)) where the second inequality follows form the fact that it holdsfor a>0 that -a + log(1+a) + a^2/2(1+a)≤ 0. The last inequality is because α < 1/2. 
Since we obtain that F(x^(t+1)) ≤F(x^(t)) -α·ŝ^2(x^(t)), we show the exit condition of the line search has satisfied. Furthermore, the exit condition holds whenthe step size satisfies s ≥βŝ. Thus, we can obtain that F(x^(t+1)) ≤F(x^(t)) -αβ·ŝ^2(x^(t)). Next, we will bound the value of ŝ^2(x^(t)). By the definition of ŝ, we can obtain that ŝ^2(x^(t)) = (^2(x^(t))/p^(t)_x^(t))^2/(1+^2(x^(t))/p^(t)_x^(t)). By Lemma <ref>, we have (x^(t))/p^(t)_x^(t)≥(1-φ)^1/2(x^(t))/(1+ϵ_0)^1/2 (1+φ)(x^(t)) =(1-φ)^1/2/(1+ϵ_0)^1/2 (1+φ), where φ = ϵ_1κ^-1·(1+ϵ_0/1-ϵ_0)^1/2. Furthermore, we have ŝ^2(x^(t)) = (^2(x^(t))/p^(t)_x^(t))^2/(1+^2(x^(t))/p^(t)_x^(t)) ≥ ((1-φ)^1/2/(1+ϵ_0)^1/2 (1+φ)(x^(t)))^2/1+ (1-φ)^1/2/(1+ϵ_0)^1/2 (1+φ)(x^(t)) ≥ ((1-φ) /(1+ϵ_0)^1/2 (1+φ)(x^(t)))^2/1+ (1-φ) /(1+ϵ_0)^1/2 (1+φ)(x^(t)) where the last inequality follows from Lemma <ref>. Letting us denote ρ = (1-φ)^1/2/(1+ϵ_0)^1/2 (1+φ), then we have F(x^(t+1)) ≤ F(x^(t)) -αβ·ŝ^2(x^(t)) ≤ F(x^(t)) -αβ·ρ^2 ^2(x^(t))/1+ρ(x^(t)) By the Condition (<ref>), we have 1/1-ϵ_0^2(x^(t)) ≥ V^2(x^(t)). Thus, we can obtain that if (x) ≤(1-ϵ_0)^1/2 1-ϵ_0 - 2ϵ_1κ^-1/12, then it holds that V(x) ≤1-ϵ_0/12. Therefore, we can obtain that when (x) ≥(1-ϵ_0)^1/2 1-ϵ_0 - 2ϵ_1κ^-1/12, it holds that F(x^(t+1)) ≤ F(x^(t)) -αβ·ρ^2 ^2(x^(t))/1+ρ(x^(t)).Combining Lemma <ref> and <ref>, we can obtain the global convergence rate of approximate Newton with backtracking line search. of Theorem <ref> Let us denote η = αβ·ρ^2 (√((1-ϵ_0)) (1-ϵ_0 - 2ϵ_1κ^-1) /12)^2/1+ρ√((1-ϵ_0)) (1-ϵ_0 - 2ϵ_1κ^-1) /12 = αβ(1-ϵ_0)ρ^2(1-ϵ_0 - 2ϵ_1κ^-1)^2/144 + 12 ρ√((1-ϵ_0)) (1-ϵ_0 - 2ϵ_1κ^-1). By Lemma <ref>, we can obtain that it takes at most F(x^(0)) - F(x^*)/η steps in the damped phase because of F(x^(t+1)) - F(x^(t)) ≤ -η when (x) ≥√((1-ϵ_0)) (1-ϵ_0 - 2ϵ_1κ^-1) /12. If it holds that(x) ≤√((1-ϵ_0)) (1-ϵ_0 - 2ϵ_1κ^-1) /12, then we have V(x^(t)) ≤1-ϵ_0 - 2ϵ_1κ^-1/12. By Lemma <ref>, we have V(x^(t+k)) ≤(1+ϵ_0 + 2ϵ_1κ^-1/2)^k 1-ϵ_0-2ϵ_1κ^-1/12 Furthermore, the self-concordance of F(x) implies that F(x^(t+k)) - F(x^*) ≤ V(x^(t+k)) ≤(1+ϵ_0 + 2ϵ_1κ^-1/2)^k 1-ϵ_0 - 2ϵ_1κ^-1/12. To make the right hand of above equation less than ϵ, then it will take no more than k = 2/1-ϵ_0- 2ϵ_1κ^-1log(1-ϵ_0 - 2ϵ_1κ^-1/12ϵ) iterations. Therefore,the total complexity of approximate Newton method with backtracking line search to achieve an ϵ-suboptimality is at most F(x^(0)) - F(x^*)/η + 2/1-ϵ_0- 2ϵ_1κ^-1log(1-ϵ_0 - 2ϵ_1κ^-1/12ϵ).§ PROOFS OF SECTION <REF> of Theorem <ref> If S is an ϵ_0-subspace embedding matrix w.r.t. B(x^(t)), then we have(1-ϵ_0) ∇^2F(x^(t))≼ [B(x^(t))]^TS^TSB(x^(t)) ≼ (1+ϵ_0) ∇^2F(x^(t)).By simple transformation and omitting ϵ_0^2, Eqn. (<ref>) can be transformed into(1-ϵ_0) [B(x^(t))]^TS^TS∇^2B(x^(t)) ≼∇^2F(x^(t)) ≼ (1+ϵ_0) [B(x^(t))]^TS^TSB(x^(t)).The convergence rate can be derived directly from Theorem <ref> and <ref>. of Corollary <ref> If ∇^2F(x) is not Lipschitz continuous, then we havelim sup_t →∞x^(t+1) - x^*_M/x^(t) - x^* _M = lim sup_t →∞(ϵ_0(t)+ ν(t)κμ^-1(2μ^1/2 + 2κ^-1/2 +ν(t)))= lim sup_t →∞(1/log(1+t)+ ν(t)κμ^-1(2μ^1/2 + 2κ^-1/2 +ν(t))) = 0,where ν(t) → 0 is because ∇^2F(x^(t)) - ∇^2F(x^*)→ 0 as x^(t) approaches x^*.If ∇^2F(x) is Lipschitz continuous, then we havelim sup_t →∞x^(t+1) - x^*_M/x^(t) - x^* _M≤ lim sup_t →∞(ϵ_0(t) + 7 μ^-3/4^1/2x^(t) - x^*_M^1/2) = lim sup_t →∞(1/log(1+t) + 7 μ^-3/4^1/2x^(t) - x^*_M^1/2) =0. 
§ PROOFS OF THEOREMS OF SECTION <REF> of Theorem <ref> Let us denote that X_i = [∇^2F(x^(t))]^-1/2∇^2f_i(x) [∇^2F(x^(t))]^-1/2, Y = ∑_i∈ X_i Because ∇^2f_i(x) is chosen uniformly, then we have [Y] = ∑_i∈[X_i] =I. Furthermore, by the Condition (<ref>) and (<ref>), we can obtain that X_i≤K/μλ_max([y]) = λ_min([y]) = ||. By Lemma <ref>, we have (λ_min(Y) ≤ (1-ϵ_0) ||) ≤ dexp(-ϵ_0^2||/2K/μ). Letting us choose || = 2K/μlog(d/δ)/ϵ_0^2, then it holds with probability at least 1-δ that λ_min(Y) ≥ 1-ϵ_0 which implies that min_x∈^dx^T[∇^2F(x^(t))]^-1/2(∑_i∈∇^2f_i(x)) [∇^2F(x^(t))]^-1/2 x/x^2≥ (1-ϵ_0) || ⇒ 1/||∑_i∈∇^2f_i(x) ≽ (1-ϵ_0) ∇^2F(x^(t)). By simple transformation and omitting ϵ_0^2, the above equation can be represented as ∇^2F(x^(t)) ≼ (1+ϵ_0) H^(t). Also by Lemma <ref>, we have (λ_max(Y) ≥ (1+ϵ_0) ||) ≤ dexp(-ϵ_0^2||/3K/μ). By the similar proof of above, we can obtain that if we choose || = 3K/μlog(d/δ)/ϵ_0^2, it holds with probability at least 1-δ that (1-ϵ_0) H^(t)≼∇^2F(x^(t)) Combining with Eqn. (<ref>) and by the union bound of probability, we can obtain that if we choose || = 3K/μlog(2d/δ)/ϵ_0^2, it holds that (1-ϵ_0) H^(t)≼∇^2F(x^(t)) ≼ (1+ϵ_0) H^(t), with probability at least 1-δ. Finally, the local convergence properties of Algorithm <ref> can be obtained by Theorem <ref> and Theorem <ref>.of Theorem <ref>Let us denote that X_i = [∇^2F(x^(t))+ξ I]^-1/2(∇^2f_i(x)+ξ I) [∇^2F(x^(t)) +ξ I ]^-1/2,Y = ∑_i∈ X_iThen we can obtain thatX_i≤K + ξ/μ+ξBecause ∇^2f_i(x) is chosen uniformly, then we have [Y] = ∑_i∈[X_i] =I.Hence, we can obtain thatλ_max(Y) = λ_min(Y) = . By Lemma <ref>, we have (λ_min(Y) ≤2/3 ||) ≤ dexp(-||/18(K+ξ)/(μ+ξ)).Letting us choose || = 18Klog(d/δ)/ξ, then it holds with probability at least 1-δ that 1/||∑_i∈∇^2f_i(x) + ξ I≽2/3(∇^2F(x^(t))+ξ I)≽2/3(1+ ξ/L) ∇^2F(x^(t)),which implies that ∇^2F(x^(t)) ≼(1+ L - 2ξ/2(L+ξ)) H^(t). Also by Lemma <ref>, we have(λ_max(Y) ≥3/2 ||) ≤ dexp(-||/12(K+ξ)/(μ+ξ)).By the similar proof of above, we can obtain that if we choose || = 12Klog(d/δ)/ξ, it holds with probability at least 1-δ that 1/||∑_i∈∇^2f_i(x) + ξ I ≼3/2(∇^2F(x^(t))+ξ I)≼3/2(1+ξ/μ) ∇^2F(x^(t)),which implies that (1 - 3ξ + μ/3α + 3μ) H^(t)≼∇^2F(x^(t)). Therefore, by choosing || = 18Klog(2d/δ)/ξ, then it holds with probability at least 1-δ that (1 - 3ξ + μ/3ξ + 3μ) H^(t)≼∇^2F(x^(t)) ≼(1+ L - 2ξ/2(L+ξ)) H^(t).of Theorem <ref> Let us denote H_ = 1/|| ∑_i∈∇^2f_i(x), = H_ + λ_r+1 I, where λ_r+1 is the (r+1)-th largest eigenvalue of ∇^2F(x^(t)). By the proof of Theorem <ref> and Eqn. (<ref>), if we choose || = 18Klog(d/δ)/λ_r+1, then we have H_≽2/3∇^2F(x^(t)) - λ_r+1/3 I. Moreover, by Eqn. (<ref>) and choosing || = 12Klog(d/δ)/λ_r+1, we can obtain that H_≼3/2∇^2F(x^(t)) + λ_r+1/2 I. By Corollary 7.7.4 (c) of <cit.>, Eqn. (<ref>) and (<ref>) imply that 1/3λ_r+1≤λ_r+1(H_) ≤ 2λ_r+1.Let us express the SVDof H_^(t) as followsH_^(t) = UΛ̂U^T = U_r Λ̂_r U^T_r + U_∖rΛ̂_∖rU_∖r^T.Then H^(t) can be represented asH^(t) = H_ + U [ [ 0 0; 0 λ_r+1(H_) I - Λ̂_∖r ]] U^T.By Eqn. (<ref>) and 1/3λ_r+1≤λ_r+1(H_) (Eqn. (<ref>)), we haveH^(t)≽2/3∇^2F(x^(t)) - λ_r+1/3 I + U [ [ 0 0; 0 λ_r+1(H_) · I - Λ̂_∖r ]] U^T≽2/3∇^2F(x^(t))which implies that∇^2F(x^(t)) ≼(1+1/2)H^(t). By Eqn. (<ref>) and (<ref>), we haveH^(t)≼ 3/2∇^2F(x^(t)) + λ_r+1/2 I + U [ [ 0 0; 0 λ_r+1(H_) I - Λ̂_∖r ]] U^T ≼ 3/2∇^2F(x^(t)) + 5/2λ_r+1 I ≼ (3/2 + 5λ_r+1/2μ) ∇^2F(x^(t))which implies that(1 - 5λ_r+1 + μ/5λ_r+1 + 3μ) H^(t)≼∇^2F(x^(t)). 
Therefore, if we choose || = 18Klog(2d/δ)/λ_r+1, we obtain that (1 - (5λ_r+1 + μ)/(5λ_r+1 + 3μ)) H^(t) ≼ ∇^2F(x^(t)) ≼ (1+1/2)H^(t). The convergence properties can then be derived directly from Theorem <ref>.
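The low-rank construction in the last proof — average a uniform subsample of component Hessians, keep the top r eigenpairs exactly, and lift the remaining eigenvalues to λ_r+1(H_) — can be stated compactly in code. The sketch below is our own illustration of that construction; the function name and the use of a dense eigendecomposition are assumptions, not the authors' implementation.

import numpy as np

def truncated_subsampled_hessian(hess_i, sample_idx, r):
    # H_S: average of the sampled component Hessians (d x d, symmetric PSD)
    H_S = sum(hess_i(i) for i in sample_idx) / len(sample_idx)
    vals, vecs = np.linalg.eigh(H_S)            # eigenvalues in ascending order
    vals, vecs = vals[::-1], vecs[:, ::-1]      # reorder to descending
    U_r, lam_r1 = vecs[:, :r], vals[r]          # top-r eigvecs, lambda_{r+1}(H_S)
    d = H_S.shape[0]
    # H^(t) = U_r diag(lambda_1..lambda_r) U_r^T + lambda_{r+1} (I - U_r U_r^T)
    return U_r @ np.diag(vals[:r]) @ U_r.T + lam_r1 * (np.eye(d) - U_r @ U_r.T)

Because the resulting H^(t) is "rank r plus a multiple of the identity", its inverse is U_r diag(1/λ_1,…,1/λ_r) U_r^T + (1/λ_r+1)(I - U_r U_r^T), so the Newton system can be applied to a vector in O(dr) time once the eigenpairs are available — the practical motivation for replacing the eigenvalue tail by λ_r+1(H_).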
http://arxiv.org/abs/1702.08124v2
{ "authors": [ "Haishan Ye", "Luo Luo", "Zhihua Zhang" ], "categories": [ "cs.NA" ], "primary_category": "cs.NA", "published": "20170227020739", "title": "Approximate Newton Methods" }
Analytical calculation of the axis angle 2V from extinction measurements on the spindle stage F. Dufey ============================================================================================= A concise derivation of the "Joel equations", which allow for the determination of the axis angle 2V from measurements of extinction directions on a spindle stage, is provided starting from the wave equation. Only analytic methods and no geometric arguments referring to stereographic projections are invoked. For error-free data, the resulting equations allow for a closed-form solution. If angle data with measurement error are to be used, a maximum likelihood estimation methodology is proposed, which may be solved using, e.g., iterative reweighting. The method is tested on, and compared with, published data. § INTRODUCTION Since its introduction in the early 1950s, the one-circle or spindle stage goniometer has become one of the standard methods for the determination of the angle 2V between the optical axes. While the determination of the axis angle relied in the first years on rather complicated geometric constructions in the stereographic projection, the technique gained further acceptance when it became possible to solve the underlying equations on a computer <cit.>. The derivation of these equations is not easy to comprehend today, as it involves a mix of geometric and analytic constructs, like the equivibration curve, and is scattered over several articles <cit.>. It is one goal of this notice to give a concise derivation of these equations in a purely analytical fashion starting from the Fresnel equation. The second result to be presented is that these equations in principle allow for a closed-form solution for the components of the inverse dielectric tensor Φ, and hence also for 2V, when error-free extinction positions are available, and that a maximum likelihood estimator for the components of the inverse dielectric tensor can be derived when data prone to statistical error are used. § DERIVATION OF A FORMULA FOR THE ANGLE 2V The tensor Φ is the inverse of the optical dielectric tensor, Φ = ϵ^-1. It can be decomposed as Φ = A_2 1 - 1/2 (A_1 - A_3) φ. Here, A_i = 1/ϵ_i = 1/n_i^2 (i ∈ {1,2,3}), where ϵ_i are the principal components of the dielectric tensor and n_i the main indices of refraction. The A_i are ordered as A_1 > A_2 > A_3 (uniaxial crystals are not considered here). The tensor φ = a_1 a_2^T + a_2 a_1^T is made up from the two vectors a_1 and a_2 of unit length which are perpendicular to the two circular sections of the indicatrix and parallel to the optical axes. We are interested in calculating the tensor Φ and the angle 2V between the vectors a_1 and a_2, but will not use these vectors any further in the course of the following calculations. According to <cit.>, eq. 21, the directions of extinction coincide with the dielectric displacement vector D, which fulfills the wave equation (1 - n̂n̂^T)Φ D = 1/n^2 D, where n̂ = k/|k| is the unit vector parallel to the wavevector k of the wave. The resulting equation for the refractive index n is Fresnel's equation, but we are more interested in the measured direction of D, which is always perpendicular to n̂. So if we hold n̂ and D fixed and choose the coordinate axes as e_y = n̂ and D || e_z (the direction e_x is then parallel to the second possible polarisation),
the equation for the z-component is trivially fulfilled and the equation for the y-component can always be fulfilled with a suitable choice of n, leaving the only restriction, that the left hand side does not introduce a y-component for D, or,Φ_xy=e_y^TΦ e_x=0.Now for any two vectors q and q' which fulfill (q+q')|| e_x and (q-q')||e_y, we get(q+q')Φ(q-q')=0,which are the equations given by Joel. Before solving eq. <ref>, we note that the tensor Φ is a symmetric tensor which can be parametrized by 6 constants.It is clear that the direction of D cannot depend on the isotropic part of Φ, Φ_iso= 1 (Φ_xx+Φ_yy+Φ_zz)/3 and is also invariant under a rescaling Φ→ cΦ with an arbitrary constant c. Therefore, we can hope at best to recover from measurements on the spindle stage the anisotropic part Φ_aniso=Φ -Φ_iso which depends on 5 parameters of which one may be chosen at will to fix the scaling.An explicit expression for Φ_aniso is Φ_aniso= ([d_x^2-y^2-d_z^2 d_xy d_xz; d_xy -d_x^2-y^2-d_z^2 d_yz; d_xz d_yz 2d_z^2 ]),withd_z^2 = (2Φ_zz-Φ_xx-Φ_yy)/6, d_x^2-y^2 = (Φ_xx-Φ_yy)/2, d_xy = Φ_xy, d_xz = Φ_xz, d_yz = Φ_yz.Various orientations parameterized by the spindle angle S have to be analysedwhich results in the substitution. Φ→Φ̃(S, E_S)=R_y(E_S)R_z(S) Φ R^T_z(S)R^T_y(E_S).Here, R_y and R_z are the usual rotation matrices around the y- and z-axes, respectively,R_y=( [cos E_S0 -sin E_S;010;sin E_S0cos E_S ]), and R_z=( [cos Ssin S0; -sin Scos S0;001 ]). Specifically, z is chosen to be the spindle axis and y the direction perpendicular to the rotation desk of the microscope with S and E_S being the respective rotation angles.Substitution into eq. <ref> yields an implicit equation for the equivibration curve E_S(S),-3d_z^2tan 2E_S+2d_xzcos S+ 2d_yzsin S+ + d_x^2-y^2cos 2 Stan 2E_S+d_xysin 2Stan 2E_S=0. Hence one of thed-terms may be fixed advantageously, e.g. d_z^2=1.Then, if measurements are made for 4 angles E, this linear system for the other d_i can be solved exactly.However, if more than 4 angles are measured, then some regression technique has to be employed. The programs available <cit.> minimize an ad hoc sum of squared Joel equations.This arbitrarily assigns weights (=1) to all observations.To put the regression on a statistical basis, it seems more appropriate to start from a reasonable error model for the dependent variables E_S.With the expectation value of the angle E_S being 1/2 arctanV/W where V=2d_xzcos S+ 2d_yzsin S and W=3d_z^2-d_x^2-y^2cos 2 S-d_xysin 2S,we assume the measurement error of the angles to follow a von Mises distribution <cit.>, f(2E_S)= exp(±κcos(2E_S-arctan(V/W)))/2π I_0(κ)where κ is a dispersion parameter and I_0 is a modified Bessel function. 
The sign of the exponent depends on which of the two solutions E_S is chosen, with points on the same extinction curve giving rise to the same sign. The parameter κ may be estimated from the empirical variance of the observed angles about the calculated values. The likelihood to be maximized with respect to the parameters d_j, j ∈ {x^2-y^2, xy, xz, yz}, given measured values E_Si from one of the two extinction curves with i ∈ [1,n], is then L({d_j}) = ±κ ∑_i=1^N (W_i cos 2E_Si + V_i sin 2E_Si)/√(V_i^2+W_i^2). The optimum is attained when the four equations 0 = ∂L/∂d_j = ±κ ∑_i (W_i sin 2E_Si - V_i cos 2E_Si)(V_i^2+W_i^2)^-3/2 ∂ V_iW_i/∂ d_j are fulfilled, which can be seen to be weighted sums of the original Joel equations. Hence, iterative reweighting seems to be a promising and numerically stable option to obtain maximum likelihood estimates of the parameters d_j. The second derivatives of L with respect to the d_j yield the information matrix, from which estimates of the variances of the d_j may be obtained. The eigenvectors of Φ_aniso, constructed from the estimates d_j, are the directions of the main axes of the indicatrix, corresponding to the eigenvalues μ_h, μ_m and μ_l, ordered by decreasing value. From the latter, the axis angle 2V can be calculated <cit.>, tan^2 V = (μ_h - μ_m)/(μ_m - μ_l). It is obvious that this result is influenced by neither a constant scaling nor a shift of all eigenvalues. § COMPARISON WITH PUBLISHED DATA To exemplify the feasibility of the method, angles 2V calculated from the measured angles E_S taken from the articles <cit.> are compared to the values of 2V reported there: <cit.>, Fig. 5: 64.09° (63.66°); <cit.>, Fig. 2: 76.88° (77.57°); <cit.>, Fig. 4: 48.98° (49.36°). The numerical optimization of the likelihood eq. <ref> was performed using the optmodel procedure of <cit.>. Expected values E_S calculated with these parameters compare very well with the reported ones, with the mean squared deviations from the measured E_S being even slightly smaller.
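To illustrate the closed-form route and the final evaluation of 2V, the following Python sketch (ours — the original computations used the optmodel procedure referenced above) solves the linearized equivibration equation for d_xz, d_yz, d_x^2-y^2, d_xy with d_z^2 fixed to 1, assembles Φ_aniso as in Section 2, and evaluates 2V from its eigenvalues. For noisy data it performs an unweighted least-squares fit rather than the full maximum likelihood iteration.

import numpy as np

def solve_d(S, E):
    # Linear system from -3 tan2E + 2 d_xz cosS + 2 d_yz sinS
    #   + d_x2y2 cos2S tan2E + d_xy sin2S tan2E = 0, with d_z2 = 1.
    t = np.tan(2.0 * E)
    M = np.column_stack([2.0 * np.cos(S), 2.0 * np.sin(S),
                         np.cos(2.0 * S) * t, np.sin(2.0 * S) * t])
    d, *_ = np.linalg.lstsq(M, 3.0 * t, rcond=None)   # exact for 4 angles
    return d                                          # d_xz, d_yz, d_x2y2, d_xy

def axis_angle_2V(d_xz, d_yz, d_x2y2, d_xy, d_z2=1.0):
    Phi = np.array([[d_x2y2 - d_z2, d_xy,           d_xz],
                    [d_xy,          -d_x2y2 - d_z2, d_yz],
                    [d_xz,          d_yz,           2.0 * d_z2]])
    mu_l, mu_m, mu_h = np.linalg.eigvalsh(Phi)        # ascending eigenvalues
    return np.degrees(2.0 * np.arctan(np.sqrt((mu_h - mu_m) / (mu_m - mu_l))))

Note that tan^2 V = (μ_h - μ_m)/(μ_m - μ_l) is invariant under a common scaling and shift of the eigenvalues, so fixing d_z^2 = 1 and working with the anisotropic part only, as above, does not affect the result.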
http://arxiv.org/abs/1703.00070v1
{ "authors": [ "Florian Dufey" ], "categories": [ "physics.comp-ph", "physics.chem-ph", "physics.data-an" ], "primary_category": "physics.comp-ph", "published": "20170227095050", "title": "Analytical calculation of the axis angle 2V from extinction measurements on the spindle stage" }
Row-Centric Lossless Compression of Markov Images Matthew G. Reyes and David L. Neuhoff EECS Department, University of Michigan <mgreyes@umich.edu> and <neuhoff@umich.edu> December 30, 2023 =================================================================================================================================================================== Motivated by the question of whether the recently introduced Reduced Cutset Coding (RCC) <cit.> offers rate-complexity performance benefits over conventional context-based conditional coding for sources with two-dimensional Markov structure, this paper compares several row-centric coding strategies that vary in the amount of conditioning as well as whether a model or an empirical table is used in the encoding of blocks of rows. The conclusion is that, at least for sources exhibiting low-order correlations, 1-sided model-based conditional coding is superior to the method of RCC for a given constraint on complexity, and conventional context-based conditional coding is nearly as good as the 1-sided model-based coding. § INTRODUCTION Lossless coding of an image involves blocking (equivalently, grouping) and ordering the pixels in some way, and feeding them, together with a corresponding set of coding distributions, to an encoder, which without loss of optimality we can assume to be an Arithmetic Encoder. The coding distribution for a given pixel, or block of such, is conditioned on some subset of the pixels, referred to as its context, that have already been encoded. This paper considers how various coding strategies affect coding rate and complexity. Strategies considered include different ways of blocking and ordering pixels, different contexts, and two different ways of producing coding distributions: model-based and empirical, i.e., parametric and nonparametric. For simplicity we focus on bilevel images. To provide a well-founded testing ground with interesting correlation structure, we focus on images produced by a simple, uniform, Ising Markov Random Field (MRF) model <cit.>, with each pixel having four neighbors (N, E, S, W), and positive, row-stationary edge correlations. MRF models have seen widespread application in image processing, in large part due to the reasonable assumption that pixels in an image are dependent on some small surrounding region rather than on pixels from the entire rest of the image. In particular, the Ising model has been proposed as a model for bilevel images <cit.> called scenic, which are complex bilevel images, such as landscapes and portraits, having numerous black and white regions with smooth or piecewise smooth boundaries between them. The model-based coding distributions are based explicitly on this model. The empirical methods simply use tables of conditional frequencies. We focus on what we call row-centric schemes, which are schemes in which rows are grouped into blocks and, within each block, columns are sequentially encoded from left to right. These include both the recently introduced Reduced Cutset Coding (RCC) <cit.> and conventional context-based conditional coding, such as in <cit.>.
It excludes coding techniques such as when image pixels are coded in Hilbert scan order <cit.>.By the Markov property, no coding scheme could attain lower rate than the scheme that encodes each row with coding distribution equal to the row'sconditional distribution given the previous row as context, which has rate equal to the entropy-rate H_∞ = 1W H(𝐗_r_1 | 𝐗_r_0), where W is the width of the image, and 𝐗_r_0 and 𝐗_r_1 denote successive rows. An equivalent row-centric scheme will sequentially encode each pixel in a given row with context equal to the pixel to the left, the one above, and all pixels to the right of the one above. While it is easy to say this is optimal, it is computationally infeasible to attain this rate exactly.On the one hand, with model-based coding distributions, due to need for marginalizing over the pixels below the block, the conditional distribution of one pixel given the aforementioned context is exorbitantly complex to compute in real time, and exorbitantly expensive to store even if it were computed in advance. On the other hand, withempirical-distribution-based coding, the distribution is again exorbitantly expensive to store. Thus, the real question is how to approach rate H_∞ with computationallyefficient coding techniques.With this in mind, this paper explores the merits of several row-centric strategies – some using model-based coding distributions and some using empirically-based distributions.From now on,we call these model-based and empirically-based schemes, respectively.§.§ Model-based Let G = (V,E) denote a grid-graph underlying the MRF.For the given Ising model, one can encode an N_b × W block 𝐗_b consisting of N_b rows with complexity per pixel that increases exponentially with N_band with storage that increases exponentially with N_b and linearly with W. This is done by lumping the i-th column _b,i into one super-pixel, and computing the coding distribution of each column in turn using Belief Propagation on the resulting line graph.This is feasible for moderate N_b, e.g., 10 or so, and as described below, such coding distributions can be computed with conditioning/context from the row above, the row below, both the row above and row below, or from neither, without any appreciable increase in complexity.If the coding distributions within a block are conditioned on just the row above, then to avoid an exorbitantly complex marginalization, all edges running South from the block must be cut.This means that the computed coding distribution p_C( 𝐱_b,i ) will not be the true conditional distribution for the i-th column – the result being that the overall coding rate will be larger than H_∞ due to the divergences between the true and computed conditional distributions for the columns. Similarly, if the coding distributions within a block are computed without conditioning on either the row above or the row below, then all edges running both South and North from the block must be cut. This again means that the computed coding distributions p_C(𝐱_b,i ) will not be the true distributions for the columns – the result being that the overall coding rate will exceed H_∞ due both to the divergences between true and computed distributions for the columns, and the blocks being encoded independently of one another. We refer to these methods as 1-sided and 0-sided model-based coding, respectively. 
In each of these, the excess rate, i.e., redundancy, decreases as N_b increases, and for each of these, the divergence can be minimized by choosing an appropriate moment-matching correlation for the truncated model.Two-sided model-based coding of a block of rows is also possible, but unlike 0- and 1-sided coding, this cannot be applied to the entire image.For example in RCC, blocks are alternately 0-sided coded and 2-sided coded.On the one hand, the blocks that are 0-sided coded suffer the sources of redundancy mentioned previously.On other hand, the blocks that are 2-sided coded are coded precisely at rate 1W N_b H(𝐗_b | 𝐗_S, 𝐗_N), where X_S and X_N denote the rows just North and just South of X_b, respectively. While this was called RCC in <cit.>, here we refer to it as 0/2-sided coding.§.§ Empirically-based With empirically-based coding, there could again be 0-, 1- or 2-sided coding. However, in this paper we only consider 1-sided coding, where the pixels in a row are sequentially coded from left to right with context consisting of the pixel to the left and some number of pixels in the row above, beginning with the pixel directly above and extending some number of pixels to the right. (This is conventional context-based coding.) While H_∞ could be attained if all pixels to the right of the current pixel in the row above were inthe context, the storage required for the empirical coding distribution increases exponentially with the size of the context, so the size of the context must be limited to a moderate amount, for example 10. And assuming a sufficient amount of training data that the empirical conditional distributions are very close to the true conditional distributions, the resulting redundancy is theaverage of the divergences of the true conditional distribution of a pixel given all values on the previous and the true conditional distribution given the moderately sized context. §.§ Summary of main results In regard to trying to attain H_∞ with 1-sided row-centric coding, we note that empirically-based coding uses a true distribution with a truncated context, whereas model-based coding uses an approximate distribution with full context. Moreover, 1-sided model-based coding uses an approximate distribution on all blocks, while the 0/2-sided coding of RCC uses a more severe approximation on half the blocks and an optimal distribution on the other half. Consequently, we are interested in the relative performances of these three approaches in achieving rate as close to H_∞ as possible.In this paper, we first compare 0/2-sided model-based coding with 1-sided model-based coding, and then 1-sided model-based coding with 1-sided empirical-based coding. 1-sided model-based coding has rate decreasing monotonically with N_b. For a given complexity, i.e.,N_b, 1-sided model-based coding outperforms 0/2-sided model-based coding. Moreover, 1-sided model-based coding outperforms 1-sided empirical-based coding, though not by much. In summary, at least for Markov models exhibiting low-order correlations, there are both model-based and empirically-based 1-sided schemes with good performance and low complexity. The remainder of the paper is organized as follows. In Section <ref> we cover background on the Ising model, Arithmetic Encoding, model- and empirical-based coding distributions and Reduced Cutset Coding. In Section <ref>, we discuss 0-, 1-, and 2-sided coding, and in Section <ref> we discuss numerical results. 
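Before proceeding, the empirical side of this comparison is easy to make concrete. The following Python sketch (ours, not from the paper) implements the two-pass, 1-sided empirical scheme for a bilevel image: a first pass over training images counts (context, pixel) co-occurrences, and the ideal code length -log_2 p of a test image is then accumulated under the resulting empirical conditional distributions. Pixel values are taken in {0,1} here for indexing convenience, and the Laplace smoothing is our own addition; the table has 2^c entries per context pattern, which is exactly the exponential storage growth in the context size c noted above.

import numpy as np
from collections import defaultdict

def context(img, i, j, c):
    # Left neighbor plus pixels (i-1, j), ..., (i-1, j+c-2); off-grid -> 0.
    bits = [img[i, j - 1] if j > 0 else 0]
    for k in range(c - 1):
        jj = j + k
        bits.append(img[i - 1, jj] if i > 0 and jj < img.shape[1] else 0)
    return tuple(bits)

def train_counts(images, c):
    counts = defaultdict(lambda: np.ones(2))      # Laplace smoothing
    for img in images:
        for i in range(img.shape[0]):
            for j in range(img.shape[1]):
                counts[context(img, i, j, c)][img[i, j]] += 1
    return counts

def code_length_bpp(img, counts, c):
    bits = 0.0
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            n = counts[context(img, i, j, c)]
            bits -= np.log2(n[img[i, j]] / n.sum())
    return bits / img.size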
§ BACKGROUND In this section we introduce notation and background concepts and results. §.§ MRF Source Model The specific information source that we consider in the present paper is a uniform Ising model on a square grid graph G=(V,E), whose nodes V are the sites of an M× W rectangular lattice and whose edges E are pairs of horizontally and vertically adjacent nodes. The random variable X_i associated with each node i assumes values in the alphabet ={-1,1} and a configuration = (x_i:i∈ V) has probabilityp(;θ) =exp{ θ∑_{i,j}∈ Ex_ix_j - Φ(θ)},whereΦ(θ) is the log-partition function and θ > 0 is the positive edge correlation parameter of the model. §.§ Row-Centric Arithmetic Coding As mentioned in the introduction, in row-centric coding, rows are grouped into N_b× W blocks and then within a block _b, columns of pixels are encoded from left to right. Let r_1 and r_N_b denote the first and last rows, respectively, of a block. Similarly, let r_0 and r_N_b+1 indicate, respectively, the row preceding and row succeeding the block. When coding column configuration _b,i, a coding distribution is passed, together with the configuration _b,i, to an Arithmetic Encoder. In 0-sided coding p_C(_b,i) is conditioned only on _b,i-1, the configuration of the the i-1-st column of the block. In 1-sided model-based coding, p_C(_b,i) is conditioned on _b,i-1 and _r_0,i:W, the i-th through final pixels of the previous row. In 1-sided empirical-based coding, p_C(_b,i) is conditioned on _b,i-1 and _r_0,i:i+c-2, the i-th through i+c-2-th pixels of the previous row, where c is the size of the context. In 2-sided model-based coding, p_C(_b,i) is conditioned on _b,i-1, _r_0,i:W, and _r_b+1,i:W, the i-th through final pixels of the next row. The contexts for these schemes can be visualized with Figure <ref>.The approximate number of bits produced by the AC encoder when encoding the i-th column is -log p_C(_b_i). The rate R_b,i of encoding the i-th column of block b is the expected number of bits produced, divided by N_b. If the p(_b_i) is the true (conditional) distribution of column i given the context, then the rate of encoding the i-th column isR_b,i = 1/N_b[ H(_b,i | C_b,i) + D( p(_b,i) || p_C(_b,i)) ].where D denote divergence. From this, the rate of encoding block b is R_b = 1/W N_b[ H(_b | C_b) + D],where D is the sum of the per-column divergences.§.§ Model and empirical based coding distributions For model-based methods, the coding distribution is computed by running BP on the Ising model restricted to the subgraph induced by the block of rows, with a possibly modified correlation parameter. In the 0- and 1-sided cases, the edge correlation parameter is adjusted to account for the truncated edges (on both sides of the block or below the block, respectively). In the case of 1- and 2-sided coding, in which conditioning on either the upper or both the upper and lower boundaries is part of the coding distribution, this conditioning is incorporated by introducing self correlation on the bottom and top rows of the block that bias those sites toward the value of their boundary neighbor. Let θ^*_0,N_b and θ^*_1,N_b denote the parameters used for encoding a block with 0-, respectively, 1-sided coding. 
For 2-sided, the block is encoded using the original parameter θ, and the model becomes p(_b | _r_0,_r_b+1;θ^*_2,N_b) =exp{ θ^*_2,N_b∑_{i,j}∈ E_b x_ix_j + θ^*_2,N_b∑_{i}∈ r_1∪ r_b s_ix_i - Φ(θ^*_2,N_b)}, where E_b is the set of edges both of whose endpoints are in b, and s_i is the self-correlation on pixel i corresponding to the value of its neighbor on the boundary of b.For 1-sided coding, the model is p(_b | _r_0;θ^*_1,N_b) =exp{ θ^*_1,N_b∑_{i,j}∈ E_b x_ix_j + θ^*_1,N_b∑_i∈ r_1s_ix_i - Φ(θ^*_1,N_b)}For 0-sided coding, the model is p(_b;θ^*_0,N_b) =exp{ θ^*_0,N_b∑_{i,j}∈ E_b x_ix_j - Φ(θ^*_0,N_b)}, In each of these cases, the coding distribution p(_b,i) for the i-th column within the block is computed using Belief Propagation <cit.>.Messages are first passed from right to left on the resulting line-graph of superpixels (columns) in such a way that after the messages are received at the first column, encoding can proceed from left to right with the coding distributions being computed as they are needed.The (column) coding distributions for 0-, 1-, and 2-sided model-based coding are denoted p(_b,i | _b,i-1; θ^*_0 ), p(_b,i | _b,i-1, _r_0,i:W ; θ^*_1), and p(_b,i | _b,i-1, _r_0,i:W, _r_b+1,i:W ; θ^*_2), respectively.Empirical coding distributions are based on a table of the frequencies of different configurations of a column for all possible configurations of the context. Letting _T denote the configuration being encoded and _C denote the configuration of the context, the table consists of values of the form p^*(_T , _C), from which the coding distribution p^*(_b,i | _b,i-1, _r_0,i:i+c-2) can be computed, where c is the size of the context.There are 1-pass and 2-pass methods. In this paper we consider only the 2-pass method in which the relevant frequencies are collected from a set of training images, and then, in a second pass, the rows of the image are encoded using the collected frequencies as coding distributions. §.§ Reduced Cutset Coding <cit.>In the Reduced Cutset Coding (RCC) method introduced in <cit.> and further analyzed in <cit.>, an image is divided into alternating blocks of rows _L and _S of sizes N_L × W and N_S × W, called lines and strips, respectively. Lines are encoded first in a 0-sided manner, i.e., with no conditioning. The parameter θ^*_0,N_L used for the coding distributions of columns is chosen to be the one that minimizes divergence with the true distribution of lines. It is referred to as the moment-matching correlation parameter. The coding rate for lines isR^L_N_L=1/W N_L[ H(_L ; θ^*_0,N_L + D) ],where D is the divergence between p(_b;θ) and p(_b;θ^*_0,N_L).Strips are subsequently encoded in a 2-sided manner, i.e., conditioned on the immediately preceding and immediately succeeding rows. The coding rate for a strip isR^S_N_S=1/W N_SH(_S|_r_0, _r_N_b+1;θ^*_2,N_S). For a large image, the overall rate of RCC is thenR_N_S,N_L ≈ N_S/N_S + N_LR^S_N_S + N_L/N_S + N_LR^L_N_L≈H_∞+ N_L/N_L + N_SD + N_S/N_L + N_SI(_r_0 ; _r_N_L+1)where D is the divergence between p(_b;θ) and p(_b;θ^*_0,N_L), and I(_r_0 ; _r_N_L+1) is the information between the row immediately preceding and the row immediately following a strip. § ROW-CENTRIC CODING REDUNDANCY In this section we return to the question posed in Section <ref>, that of attaining rate as close as possible to the entropy rate H_∞ = H(_r_1 | _r_0), and discuss the redundancies associated with the different coding strategies considered in this paper. 
While we cannot analytically evaluate the rate of decrease of the redundancies, by performing numerical experiments as in the next section, we can gain a sense of the relative rates of decrease.We let R^0E_N_b and R^0M_N_b denote the rate for coding N_b rows with 0-sided empirical- and model-based coding, respectively. Likewise for R^1E_N_b, R^1M_N_b, and R^2M_N_b. We focus here on the coding of a single row, i.e., N_b = 1. Moreover, let I(_r_1 ; _r_0) be the mutual information between rows 0 and 1. Some of the results in this section make use of Lemma <ref> in Section <ref>.The rate for encoding a row with 0-sided model-based coding isR^0M_1 = H_∞ + 1/W [ D^0M_1 + I(_r_1 ; _r_0) ] where D^0M_1 is the sum of divergences between p(_b,i | _b,i-1; θ ) and p(_b,i | _b,i-1; θ^*_0 ) over all columns. R^0M_1 =1/W[ H(X_b) + D(X_b || X̃_b) ]=1/W[ H(X_b | X_r_0) + I(X_r_1 ; X_r_0) + D(X_b || X̃_b) ]= H_∞ + 1/W[ I(X_r_1 ; X_r_0) + D(X_b || X̃_b) ], which shows the proposition. The rate for coding a row with 0-sided empirical-based coding isR^0E_1 = H_∞ + 1/W [ I(_r_1 ; _r_0) ]. R^0E_1 =1/W H(_b)=1/W[ H(_b | _r_0) + I(_r_1 ; _r_0)]= H_∞ + 1/W I(_r_1 ; _r_0), which shows the proposition. Note that both 0-sided methods suffer the information penalty for independently encoding rows of the image. However, we do not include a divergence term in R^0E_1 because given enough training data, the empirical coding distribution p^*(_b,i | _b,i-1) for the i-th column will well-approximate the true distribution p(_b,i | _b,i-1; θ ). Thus one could estimate D̅^0M_N_b by encoding the source with both 0-sided model-based coding and 0-sided empirical-based coding and forming the estimate R^0M_N_b - R^0E_N_b.The rate for coding a row with 2-sided model-based coding is R^2M_1 = H_∞ - 1/WI(_r_1 ; _r_2 | _r_0)< H_∞, R^2M_1 =1/WH(_b | _r_0,_r_2)=1/W[ H(_b | _r_0) - I(_b ; _r_w | _r_0) ]= H_∞ + 1/WI(_r_1 ; _r_2 | _r_0) This, of course, is not an actual coding rate, but it can be shown that when combined with R^0M_1 gives the performance of RCC with N_L = N_S = 1. Encoding every other row with 0-sided model-based coding and 2-sided model-based coding gives rate 1/2[ R^0M_1 + R^2M_1 ] = H_∞ + 1/2 WD̅^0M_1 + 1/2 W I(_r_2 ;_r_0) 1/2[ R^0M_1 + R^2M_1 ] =1/2 H_∞ + 1/2 W[ I(_r_1 ; _r_0) + D(_b || _b) ] + 1/2[ H_∞ - 1/WI(_r_1 ; _r_2 | _r_0) ]= H_∞ + 1/2 WD̅^0M_1 + 1/2 W[ I(_r_1 ; _r_0) - I(_r_1;_r_2 | _r_0) ]. Therefore, to show the proposition we need to show that I(_r_1 ; _r_0) - I(_r_1;_r_2 | _r_0) = I(_r_2 ; _r_0). To do this, we note that under a row stationary Markov model such as the one considered in this paper, we have I(_r_1 ; _r_0) - I(_r_1;_r_2 | _r_0) = H(_r_1) - H(_r_1 | _r_0) - H(_r_2 | _r_0) + H(_r_2 | _r_0, _r_1)= H(_r_1) - H(_r_1 | _r_0) - H(_r_2 | _r_0) + H(_r_2 | _r_1)= H(_r_2) - H(_r_2 | _r_1) - H(_r_2 | _r_0) + H(_r_2 | _r_1)= H(_r_2) - H(_r_2 | _r_0)= I(_r_2 ; _r_0) where (<ref>) is from the Markov property and (<ref>) is from row stationarity. This completes the proof.By estimating D̅^0M_N_b using the rates R^0M_N_b and R^0E_N_b from 0-sided model-based and 0-sided empirical-based coding, we can then subtract this from the rate of RCC and obtain an estimate of the shape of I(_r_0 ; _r_N_b + 1). Using the above notation, we can restate Proposition 3.1 of <cit.>, for all N_0 and N_2, as RCCR^0M_N_0+1 < R^0M_N_0,        R^2M_N_2+1 > R^2M_N_2,        R^0M_N_0 > R^2M_N_2. 
The proofs can be found in <cit.>.We now consider rates of 1-sided coding.The rate for encoding a row with 1-sided model-based coding is R^1M_1 = H_∞ + 1/WD^1M_1 where D^1M_1 is the sum of divergences between p(_b,i | _b,i-1, _r_0,i:W ; θ) and p(_b,i | _b,i-1, _r_0,i:W ; θ^*_1) over all columns. R^1M_1 =1/W[ H(X_r_1 | X_r_0) + D( X_r_1 | X_r_0 || X̃_r_1 | X_r_0) ], where D( X_r_1 | X_r_0 || X̃_r_1 | X_r_0) is the divergence between the true conditional distribution of a row conditioned on the previous row and the conditional distribution of a row conditioned on the previous row using the 1-sided model, which can be expressed as the sum of divergences between p(_b,i | _b,i-1, _r_0,i:W ; θ) and p(_b,i | _b,i-1, _r_0,i:W ; θ^*_1). This shows the proposition.Similarly, the rate of encoding a row with 1-sided empirical-based coding is The rate for encoding a row with 1-sided empirical-based coding is R^1E_1 = H_∞ + 1/WD^1E_1 where D^1E_1 is the sum of divergences between p(_b,i | _b,i-1, _r_0,i:W ; θ) and p^*(_b,i | _b,i-1, _r_0,i:i+c-2) over all columns. R^1M_1 =1/W[ H(X_r_1 | X_r_0) + D( X_r_1 | X_r_0 || X̃_r_1 | X_r_0) ], where D( X_r_1 | X_r_0 || X̃_r_1 | X_r_0) is the divergence between the true conditional distribution of a row conditioned on the previous row and the conditional distribution of a row conditioned on the previous row using the 1-sided empirical distributions, which can be expressed as the sum of divergences between p(_b,i | _b,i-1, _r_0,i:W ; θ) and p^*(_b,i | _b,i-1, _r_0,i:i+c-2). This shows the proposition. Note that the two 1-sided coding scemes do not suffer an explicit information penalty because there is conditioning on the previous row. On the other hand, if the context size c could be chosen as c = W + 2 - i for each column i, then the divergence term D^1E_1 would vanish. Thus D^1E_1 is really a sum of conditional information terms. However, both D^1M_1 and D^1E_1 are less than D^0M_1, so it is of interest how these smaller divergences on all blocks compare with the 0/2-sided scheme of RCC in which half the blocks have a larger divergence, plus an information penalty, while the other half actually receive a coding rate reduction.Analogous to the results of <cit.>, 1-sided model-based coding can be shown to have the following properties. For all N_b and N_2, R^1M_N_b+1 < R^1M_N_b       R^1M_N_b < R^0M_N_b       R^1M_N_b > R^2M_N_2 § NUMERICAL RESULTS AND COMPARISONS Using Gibbs sampling, we generated configurations ^(1),…,^(17) of a 200× 200 modeled by an Ising MRF with θ = .4. On this dataset we tested three strategies: 0/2-sided model-based coding, 1-sided model-based coding, and 1-sided empirical-based coding. The estimates θ^*_0,n, θ^*_1,n, and θ^*_2,n were found as in <cit.> and are shown in Figure <ref>. Figures <ref> and <ref> show the rates attained by the various row-centric coding schemes considered in this paper, as a function of block size parameter n. These rateswere computedby averaging the negative logarithm of the coding distributions evaluated at the actual pixel/super-pixel values. In <cit.> we observed that for a given complexity, i.e., given the maximum of N_L and N_S, the best performance of 0/2-sided coding was found when lines and strips have the same size, i.e., N_L = N_S = N_b. Thus in the model-based comparison, our 0/2-sided method uses lines and strips of equal height. As predicted by Proposition <ref>, Figure <ref> shows that R^1M_N_b is decreasing in N_b, R^1M_N_b < R^0M_N_b and R^1M_N_b > R^2M_N'_b for all N_b and N'_b. 
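For reproducibility, the test configurations described at the start of this section can be generated with a standard single-site Gibbs sampler. The text specifies only that Gibbs sampling with θ = 0.4 was used, so the following Python sketch — sweep order, number of sweeps, and free boundary conditions are our assumptions — is one possible implementation for the zero-field Ising model (<ref>).

import numpy as np

def gibbs_ising(M, W, theta, sweeps, seed=0):
    # Single-site Gibbs sampler for p(x) ~ exp(theta * sum_{edges} x_i x_j),
    # x_i in {-1,+1}, on an M x W grid with free boundaries.
    rng = np.random.default_rng(seed)
    x = rng.choice([-1, 1], size=(M, W))
    for _ in range(sweeps):
        for i in range(M):
            for j in range(W):
                s = 0
                if i > 0:     s += x[i - 1, j]
                if i < M - 1: s += x[i + 1, j]
                if j > 0:     s += x[i, j - 1]
                if j < W - 1: s += x[i, j + 1]
                # conditional odds are exp(2 * theta * s)
                p_plus = 1.0 / (1.0 + np.exp(-2.0 * theta * s))
                x[i, j] = 1 if rng.random() < p_plus else -1
    return x

# number of burn-in sweeps is an illustrative choice; pure-Python loops are slow
sample = gibbs_ising(200, 200, theta=0.4, sweeps=100)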
Also in Figure <ref>, we observe that for a given block size N_b, 1-sided model-based coding achieves lower rate than 0/2-sided model-based coding. Indeed, 1-sided model-based coding with N_b=1 nearly as good as 0/2-sided coding with N_b=7. Moreover, using the 2-sided coding rate as a lower bound for H_∞, we can say that with N_b = 3, 1-sided model-based coding comes to within 3.5% of H_∞.Figure <ref> shows the rate of 1-sided model-based coding with N_b = 1, and 1-sided empirical-based coding for varying sizes of context. Note that context size c = 1 actually corresponds to 0-sided empirical-based coding, since in this scheme, only the pixel to the left is used as context. We observe that 1-sided model-based coding with N_b=1 achieves lower rate than 1-sided empirically-based coding with all context sizes we considered. The difference between the rates of 1-sided model-based and 1-sided empirical-based coding shrinks with context size and when the context size is 5, the difference is about .0025 bpp or .4%. Improvements after that are very slow. Again using the rate of 2-sided model-based coding as a lower bound for H_∞, we observe that 1-sided empirically-based coding with context size 5 comes with 4% of entropy-rate.Another interesting observation is made by recalling from the previous section that while both 0-sided model-based and 0-sided empirical-based coding methods suffer an information penalty, the model-based scheme suffers an additional divergence penalty D^0M_N_b. Therefore, by comparing the n=1 point on the 0-sided rate curve of Figure <ref> with the c=1 point on the empirical-based rate curve of Figure <ref>, we can estimate that the normalized divergence between p(_b;θ) and p(_b;θ^*_0) for a single row is about .1 bits per pixel. Moreover, by again using the 2-sided model-based rate curve as a lower bound for H_∞, we can bound the normalized information I(_2 ; _1) between successive rows by .041 bits per pixel.§ CONCLUDING REMARKS In this paper we posed the problem of considering different approaches to what are called row-centric coding. We presented the problem in the context of a standard MRF image model in order to provide a well-founded testing ground in which model-based and empirical-based approaches can be compared, and moreover, 1-sided coding can be compared to the tradeoffs in 0/2-sided coding. § APPENDIX For random variables _1,…,_N, N≥ 2, let p_i|C_i be the probability of _i given _C_i, where C_i⊂{1,…,i-1} is the context for _i and let q_i|C̅_i be the coding distribution for _i, where C̅_i⊂{1,…,i-1} is the context for _i under the q distribution. Then, D(∏_i^N p_i|C_i || ∏_i q_i|C̅_i) =∑_i=1^N ∑__C_i∪C̅_ip_C_i D( p_i|C_i || q_i| C̅_i ) First consider the case where C_i = C̅_i = {1,…,i-1}. We will prove it by induction. Letting N=2 we have that D( p_1p_1|2 || q_1q_1|2) =∑__1 _2 p_1p_2|1logp_1 p_2|1/q_1 q_2|1 =∑__1, _2 p_1 p_2|C_2[ logp_1/q_1 + logp_2|C_2/q_2|C̅_2]=∑__1, _2 p_1 p_2|C_2logp_1/q_1 + ∑__1, _2 p_1 p_2|C_2logp_2|C_2/q_2|C_2 =∑__1 p_1 logp_1/q_1∑__2 p_2|C_2 + ∑__1 p_1 ∑__2 p_2|C_2logp_2|C_2/q_2|C̅_2 =∑__1 p_1 logp_1/q_1 + ∑__1 p_1 ∑__2 p_2|C_2logp_2|C_2/q_2|C̅_2 =∑_i=1^2 ∑__1,…,_i-1 p_1,…,i-1∑__i p_i | C_ilogp_i | C_i/q_i | C̅_i =∑_i=1^2 ∑__1,…,_i-1 p_1,…,i-1∑__i D( p_i | C_i || q_i | C̅_i), which shows that the lemma holds for some N = k≥ 2. 
Now letting N = k+1, we see that D(∏_i^k+1 p_i|C_i || ∏_i q_i|C̅_i) =∑__1,…,_k,_k+1∏_i=1^kp_i | C_i p_k+1 | C_k+1log∏_i=1^k p_i | C_i p_k+1 | C_k+1/∏_i=1^k q_i | C̅_i q_k+1 | C_k+1 =∑__1,…,_k∏_i=1^k p_i | C_ilog∏_i=1^k p_i | C_i/∏_i=1^k q_i | C̅_i+ ∑__1,…,_k∏_i=1^k p_i | C_i∑__k+1 p_k+1 | C_k+1logp_k+1 | C_k+1/q_k+1 | C̅_k+1 =∑__1,…,_k∏_i=1^k p_i | C_ilog∏_i=1^k p_i | C_i/∏_i=1^k q_i | C̅_i+ ∑__1,…,_k∏_i=1^k p_i | C_i D(p_k+1 | C_k+1 || q_k+1 | C̅_k+1)=∑_i=1^k ∑__C_i∪C̅_ip_C_i D( p_i|C_i || q_i| C̅_i ) + ∑__1,…,_k∏_i=1^k p_i | C_i D(p_k+1 | C_k+1 || q_k+1 | C̅_k+1)=∑_i=1^k+1∑__C_i∪C̅_ip_C_i D( p_i|C_i || q_i| C̅_i )§ REFERENCES 1reyes2010 M.G. Reyes and D.L. Neuhoff, “Lossless Reduced Cutset Coding of Markov Random Fields", DCC, Snowbird, UT, 2010.reyes2016a M. G. Reyes and D. L. Neuhoff, “Cutset Width and Spacing for Reduced Cutset Coding of Markov Random Fields," ISIT 2016, July 2016.baxter R.J. Baxter, Exactly Solved Models in Statistical Mechanics, New York: Academic, 1982.reyes2014 M. G. Reyes, D. L. Neuhoff, T. N. Pappas, “Lossy Cutset Coding of Bilevel Images Based on Markov Random Fields," IEEE Trans. Img. Proc., vol. 23, pp. 1652-1665, April 2014.JBIG“Progressive bi-level image compression,” ISO/IEC Int. Std. 11544, 1993. MemonW:97 N. Memon and X. Wu, “Recent developments in context-based predictive techniques for lossless image compression,"The Computer J., vol. 40, no. 2, pp. 127-136, 1997.MemonNS:2000 N. Memon, D.L. Neuhoff, and S. Shende, “An analysis of some common scanning techniques for lossless image coding," IEEE Trans. Image Proc., vol. 9, no. 11, pp. 1837-1848, 2000.LempelZiv:1986 A. Lempel and J. Ziv, “Compression of two-dimensional data,” IEEE Trans. Inform. Theory, vol. IT-32, no. 1, pp. 1–8, 1986. reyes2016b M. G. Reyes and D. L. Neuhoff, “Cutset Width and Spacing for Reduced Cutset Coding of Markov Random Fields," available online at http://arxiv.org/abs/1602.04835.
http://arxiv.org/abs/1702.08055v1
{ "authors": [ "Matthew G. Reyes", "David L. Neuhoff" ], "categories": [ "cs.IT", "math.IT" ], "primary_category": "cs.IT", "published": "20170226171651", "title": "Row-Centric Lossless Compression of Markov Images" }
Localized heat perturbation in harmonic 1D crystals. Solutions for an equation of anomalous heat conduction Aleksei A. Sokolov, Anton M. Krivtsov, Wolfgang H. Müller December 30, 2023 ================================================================================ In this work exact solutions for the equation that describes anomalous heat propagation in 1D harmonic lattices are obtained. Rectangular, triangular, and sawtooth initial perturbations of the temperature field are considered. The solution for an initially rectangular temperature profile is investigated in detail. It is shown that the decay of the solution near the wavefront is proportional to 1/√(t). In the center of the perturbation zone the decay is proportional to 1/t. Thus the solution decays more slowly near the wavefront, leaving clearly visible peaks that can be detected experimentally. § INTRODUCTION Nowadays, the investigation of nonlinear thermomechanical processes in low-dimensional structures attracts great interest due to the rapid development of nanoelectronic devices based on materials with microstructure <cit.>. Achievements in nanotechnology have allowed for an experimental proof of the wave nature and finite propagation velocity of thermal perturbations <cit.>. This can provide a foundation for a universal theory of thermal conduction, applicable on both micro- and macroscales. The classical heat equation is a parabolic partial differential equation that describes the distribution of heat in a given region over time, Ṫ = β T″, where T is temperature, β is thermal diffusivity, a dot ( ˙ ) denotes differentiation with respect to t, and a prime ( )′ denotes differentiation with respect to x. The classical heat equation is derived on the basis of Fourier's law <cit.>, q = -κ∇T, where κ is thermal conductivity, q is heat flux, and T is temperature. Practical applications show that at the macroscale Fourier's law describes thermal processes well. However, it predicts an infinite speed of signal propagation, which is paradoxical from a physical point of view. A study of processes at the microscale, when the characteristic length is comparable to several atomic bond lengths, requires more complicated models of heat transfer, which take the finite velocity of heat propagation into account. Well-documented deviations from Fourier's law occur in thermal processes in one-dimensional crystalline structures <cit.>. Recent experimental works show a length dependence of the thermal conductivity of nanostructures <cit.>. Significant deviations from Fourier's law were shown for nanotubes <cit.>. Thermal anomalies in nanoscale structures can be used in practice for designing promising devices such as thermal diodes <cit.>. The anomalous nature of heat processes in one-dimensional lattices was demonstrated analytically in <cit.>, where a problem of heat flow between two heat baths was considered. A hyperbolic heat equation is one of the alternatives for describing heat processes that takes the finite speed of temperature propagation into account <cit.>, τT̈ + Ṫ = β T″, where τ is a relaxation time. However, Eqn. (<ref>) has serious difficulties in describing heat transfer in one-dimensional crystals, since no unique relaxation time can be determined <cit.>. A promising approach for the description of unsteady heat processes in 1D crystals is presented in <cit.>.
By using correlational analysis, the initial stochastic problem for individual particles is reduced to a deterministic problem for statistical characteristics of the crystal motion. Finally, a continuum equation (<ref>) describing anomalous heat transfer in 1D harmonic lattices is obtained in paper <cit.>. In the current work exact solutions for this equation will be obtained. Solutions for a number of problems such as rectangular, triangular, and sawtooth initial perturbations will be obtained. Properties of the solution for rectangular initial perturbation such as decay behavior and asymptotics of the wavefront will be investigated. These results can be used for analysis of the anomalous heat transfer in more complex systems, such as 1D crystals on elastic foundation <cit.> and 2D-3D crystals <cit.>. An understanding of the anomalous heat conduction is important for analysis of the experimental results, which are to be obtained in the nearest future due to the rapid development of nanotechnologies.§ LOCALIZED PERTURBATIONS IN A HARMONIC CHAINThe harmonic chain is a simple and powerful model in order to investigate anomalous heat conduction phenomena. Following on to the work <cit.> let us consider an infinite harmonic chain. Each particle with mass m is connected to its neighbor by Hookean springs with stiffness C. The equation of motion of the particles reads:ü_k = ω_e^2( u_k-1 - 2 u_k + u_k+1), ω_e =√(C/m),where u_k is displacements of particle with index k. The following initial conditions are considered:u_k|_t = 0 = 0, u̇|_t=0 = σ(x) ρ_k,where ρ_k are independent random variables with zero expectation and unit variance; σ is the variance of the initial particle velocity. The variance is a slowly changing function of the spatial coordinate x = ka, where a is the initial distance between neighboring particles. Such initial conditions can be realized by ultrafast heating, for example with a laser <cit.>. Let us introducethe kinetic temperature T ask_BT = m⟨u̇_̇k̇⟩^2,where ⟨ ... ⟩ is an operator averaging over realizations, and k_B is the Boltzmann constant. In paper <cit.> a continuum partial differential equation for the kinetic temperature was obtained:T̈ + 1/tṪ = c^2 T”,where c is the speed of sound in a one-dimensional crystal. Eqn. (<ref>) describes the evolution of the spatial temperature distribution in the chain. The following initial conditions for the equation <cit.> corresponds to stochastic initial conditions (<ref>):Ṫ|_t=0 = 0,T|_t=0 = T_0(x).The solution of the initial problem(<ref>)–(<ref>) can be obtained inintegral form <cit.>:T(t,x) = 1/π∫_-t^t T_0(x - cτ)/√( t^2 - τ^2)dτ.Eqn. (<ref>)is a particular case of the Darboux equation <cit.>. This type of equation was investigated earlier in context with spherical averages for solutions of 2D and 3D wave equations. However, it was not investigated well in connection with the problems of heat conduction.Eqn. (<ref>) looks similar to the hyperbolic heat equation (<ref>), however, it has a variable coefficient. This peculiarity is due to anomalous heat transfer in a 1D chain. From the form of Eqn. (<ref>) it seems that it has a singularity. However, it does not matter because Eqn. (<ref>) is to be solved together with the initial conditions (<ref>). The absence of singularity is confirmed by the general analytical solution (<ref>) and solutions of particular problems that will be considered below. This work is dedicated to finding exact analytical solutions of Eqn. 
(<ref>) for cases when the initial thermal distribution T_0(x) is a localized function of coordinate x,T_0(x) = 0,x < -l,(x),-l< x <l, 0,x > l,where (x) is an arbitrary function, and l is the half width of the localized perturbation. Experimentally such an initial temperature distribution (<ref>)can be realized by performing superfast laser heating of a localized region of the chain. § RECTANGULAR PERTURBATION §.§ SolutionLet us consider the case when the initial temperature perturbation has a rectangular shape:T_0(x) = A( ℋ(x+l ) - ℋ(x-l) ),where ℋ(x) is the Heaviside function:ℋ(x) = 0,x < 0, 1,x ≥ 0.A is the amplitude of the temperature perturbation. After substituting formula (<ref>) into the solution (<ref>) we obtain:T(t,x) =A/π∫_-t^t ℋ(x+l)/√( t^2 - τ^2)dτ-A/π∫_-t^t ℋ(x-l)/√( t^2 - τ^2)dτ.By substituting the solution for a single Heaviside initial impulse, which was obtained in <cit.>,T(x, t)= T_S(x, t)= 0, x ≤ - ct, Aπarccos(xct)- ct ≤ x ≤ ct, A, x ≥ ctto(<ref>) we obtain the solution of the given problem as a linear combination of these solutions. Solution for positive x: t ≤τ_0:T(x, t) =0,l + ct ≤ x, A/πarccos(x-l/ct),l - ct ≤ x ≤ l + ct, A, 0 ≤ x ≤ l -ct,t ≥τ_0:T(x, t) = 0,ct+l ≤ x, A/πarccos(x-l/ct),ct - l ≤ x ≤ ct + l, A/π( -arccos(x+l/ct) + arccosx-l/ct), 0 ≤ x ≤ ct -l,where τ_0 = l/c. For negative x the solution is symmetric and can be obtained by T(x, t) = T(-x,t). For comparison let us consider the same initial problem for the classical heat equation:Ṫ = β T”.The solution for an initial Heaviside step temperature perturbation has the form <cit.>T(x,t) = 1/2( x/√( 4 β t)),where (x) is the Gaussian error function. Then solution of the initial problem (<ref>), (<ref>), (<ref>) is:T(x,t) = 1/2( x+l/√( 4 β t)) - 1/2( x-l/√( 4 β t)). The time evolution plots for the solution of anomalous heat equation (<ref>) and the Fourier equation (<ref>) are shown in Fig. <ref>. Let us compare the two solutions. The Fourier solution is forming a peak at x=0 which decays exponentially. For the case of the anomalous heat equation the solution decays in the area near x=0 more rapidly than near the wavefronts forming two peaks. The peaks travel in negative and positive directions with coordinates x = -l + ct and x = l - ct. §.§ Decay behaviorLet us consider the decay behavior of the solution (<ref>) atx = 0. We perform a series expansion of the solution,T(t, 0) = A/π[ π - 2arccos( l/ct) ] =2ε + O( ε^3),where ε = l/ct is a small parameter.Now let us consider the decay behavior of the peaks x = l -ct and x = -l + ct. From formula (<ref>) it follows that: T(t, -l + ct) = T(t, l -ct)= A/π[ π - arccos( 2l/ct - 1) ] = 2 √(ε) +O( ε^3/2), Summarizing the above:T(t, 0)t →∞∼ 2ε∼1/t ,T(t, -l + ct) = T(t, l - ct) t →∞∼ 2 √(ε)∼1/√(t).Thus, the solution decays faster in the area between wavefronts (proportional to 1/t) rather than near the wavefront (proportional to 1/ √(t)). Thus the peaks remain strongly pronounced even for long times. §.§ Envelope curve for the peaksThe solution (<ref>) has two peaks. The peaks travel in positive and negative directions at speed c. Since the solution is symmetric let us consider only the peak with the coordinate x = ct-l.We shall consider the curve drawn by the peak of the solution as it travels in positive direction. By substituting t = x + l/c into formula (<ref>)we obtain the expression for the enveloping curve:T_env(x)= A/π[ π - arccos( 2l/x+l - 1) ].For any x we have: T(x) ≤ T_env( |x| ).The enveloping curve is shown in Fig.<ref>. 
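The rectangular solution and the decay rates derived above are easy to verify numerically from the general integral representation. In the following Python sketch (ours; the values of A, l, c and the midpoint quadrature are illustrative choices), the substitution τ = t sin φ turns the integral into (1/π)∫_{-π/2}^{π/2} T_0(x - ct sin φ) dφ, which has no endpoint singularity.

import numpy as np

def T_integral(T0, x, t, c=1.0, n=2000):
    # T(t,x) = (1/pi) * int_{-t}^{t} T0(x - c*tau) / sqrt(t^2 - tau^2) dtau,
    # evaluated after the substitution tau = t*sin(phi) by a midpoint rule.
    phi = (np.arange(n) + 0.5) * np.pi / n - np.pi / 2
    tau = t * np.sin(phi)
    return np.mean(T0(x - c * tau))   # (1/pi) * sum f * (pi/n) = mean

A, l, c = 1.0, 1.0, 1.0
T0 = lambda x: A * ((x >= -l) & (x < l))
for t in [10, 100, 1000]:
    eps = l / (c * t)
    print(t,
          T_integral(T0, c * t - l, t), 2 * A / np.pi * np.sqrt(eps),  # peak
          T_integral(T0, 0.0, t),       2 * A / np.pi * eps)           # center

As t grows, the computed peak values should track (2A/π)√ε and the center values (2A/π)ε, in agreement with the 1/√t and 1/t scalings established above.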
The expression decays as 1/√(x), which corresponds to the fact that the solution decays as 1/ √(t) near the wavefront (the wavefront travels at constant speed). §.§ Asymptotic behavior of the wavefrontLet us consider the solution (<ref>) near the wavefront at long times t.Let ξ = -x + ct/l. For x ∈ [-l+ct; l+ct] we have:T(ξ, t) = A/πarccos[ 1 - l/ct( ξ + 1) ] t →∞∼A/π√(2l/ct)√(ξ + 1),where ξ + 1ct is a small parameter, used for expansion. For x ∈ [l-ct; -l+ct] we have:T(ξ, t) =A/π( - arccos[ 1 - l/ct (ξ -1) ] + arccos[ 1- l/ct(ξ+1)] ) t →∞∼t →∞∼ A/π√(2l/ct)( √(ξ+1) - √(ξ-1)).The functions (<ref>) and (<ref>) have the following structure:T = Aπ√(2lct) F(ξ).The plot of the solution (<ref>), expressions(<ref>) and(<ref>) with the corresponding dimensionless time parameter t/τ_0 = 100 is shown in Fig. <ref>. The relation (<ref>) means that the shape of the solution shrinks vertically with time, but it does not change horizontally. The asymptotic solutions (<ref>) and (<ref>) give peak values of T for ξ=1, where F(ξ) = √(2). Thus the solution at this point is continuous, but not smooth (the derivative of the solution has a jump).§ TRIANGULAR PERTURBATIONIn order to obtain the solution for a triangular initial function we consider the following auxiliary problem where the initial temperature distribution is a linearly heated semispace,T_0(x) = 0 ,x < 0 ; Bx,x ≥ 0,where B = A/l is a constant of proportionality. After substituting (<ref>) in (<ref>) we obtain the solution for |x| < ct,T(x,t) = Bx( 1/πarcsin x + 1/2) + B/π√(t^2 c^2 - x^2)= f(x),and for |x| > ct the initial temperature distribution is preserved.Now we consider the problem for a triangular initial heat perturbation, which can be expressed by the following piecewise function:T_0(x) = 0,x< -l , (x+l)B,-l ≤ x < 0, (-x+l)B,0 ≤ x <l, 0,x ≥ l.The solution for the initial temperature distribution (<ref>) will be a linear combination of solutions for the linearly heated semispace. Denote the solution (<ref>) by T_L. The solution for the initial distribution (<ref>) will then be as follows:T(t,x) = T_L(t,x - l) + T_L(t, x+l)- 2T_L(t, x),the solution is symmetric, T(x,t) = T(-x,t). The part corresponding to positive x has the following piecewise form: t ≤τ_0/2 : T(t,x) = f(x+l) - 2f(x),     0 ≤x ≤ ct, (-x+l)B,     ct ≤ x ≤ l - ct, (-x+l)B+ f(x+l),     l - ct ≤ x ≤ l + ct, 0,   l + ct ≤ x,τ_0/2 ≤ t≤τ_0:T(t,x) = (x+l)B - 2f(x), 0 ≤ x ≤ l- ct, (x+l)B + f(x-l) - 2f(x),l-ct ≤ x ≤ ct, (-x+l)B+ f(x-l),ct ≤ x ≤ l + ct, 0,l + ct ≤ x,t ≥τ_0: T(t,x) = f(x+l) + f(x-l) - 2f(x), 0 ≤ x ≤ -l + ct, (x+l)B + f(x-l) - 2f(x),- l + ct ≤ x ≤ ct, (-x+l)B + f(x-l),ct ≤ x ≤ l + ct, 0, l + ct ≤ x.The plot of the solution for the triangular initial perturbation is shown in Fig. <ref>. Unlike the solution for a rectangular initial perturbation, which has a wavefront with vertical tangent and infinite derivative and a break of the temperature profile at the peak, the solution for a triangular perturbation has a smooth beginning at the wavefront and smooth behavior at the peak.§ SAWTOOTH PERTURBATIONWe consider an initial heat perturbation as a sawtooth spatial temperature distribution. It can be written in the following form:T_0(x) = 0, x ≤ -l , x+l, -l ≤ x < 0, 0, 0 ≤ x.The initial conditions (<ref>) can be written as linear combinations of step function and linearly heated semispace. 
Then the solution for a sawtooth initial perturbation (<ref>) can be obtained from the corresponding combination of the solutions for a step initial distribution T_S(x,t) and a linearly heated semispace initial distribution T_L(x,t):T(t,x) = T_L(x+l,t) + T_L(x,t)+T_S(x,t),it has the following piecewise form:t ≤τ_0:T(x, t)= 0, x ≤ - l - ct, f (x+l), -l - ct ≤ x ≤ -l + ct, B x,-l + ct ≤ x ≤-ct, Bx - f(x) - A/πarccos(x/ct), -ct ≤ x ≤ ct, 0,x > ct,t ≥τ_0:T(x, t) = 0,x ≤ -ct - l , f (x+l) - ct -l ≤ x ≤ -ct, f (x+l) - f(x) - A/πarccos(x/ct), - ct ≤ x ≤ ct - l , B x - f(x) - A/πarccos(x/ct),ct -l ≤ x ≤ ct, 0, ct ≤ x.The plot of the solution is shown in Fig. <ref>. The left wavefront has a smooth beginning and an infinite derivative at the peak. On the other hand, the right wavefront has an infinite derivative and vertical tangent at the beginning, smooth behavior at the peak, and a horizontal tangent and zero derivative at the peak. § CONCLUSIONSThe process of heat transfer in a 1D infinite harmonic chain was investigated. Localized initial perturbations were considered. Solutions for an equation describing anomalous heat conduction (<ref>) derived in <cit.> were obtained. Exact analytical solutions for rectangular, triangular, and sawtooth initial impulses were considered. It was shown that solutions for (<ref>) unlike solutions for classical heat equation have a strongly pronounced wavefront. For the rectangular case it was shown that the decay of the solution near the wavefront is proportional to 1/√(t). Near zero the decay is proportional to 1/t. Thus the solution decays slower near the wavefront, leaving clearly observable peaks. The shape of the wavefront is described by a function inversely proportional to the square root of time and has the form T = Aπ √(2lct) F(ξ). The solution for a triangular initial temperature perturbations has a smooth beginning at the wavefront and a smooth behavior at the peak. In case of a sawtooth initial perturbation we have a non-symmetrical solution. The left wavefront has a smooth beginning and an infinite derivative and vertical tangent at the peak; the right wavefront has an infinite derivative and vertical tangent at the beginning, smooth behavior at the peak, zero derivative and a horizontal tangent at the peak.The obtained solutions demonstrate the wave behavior accompanied with power decay. This differs from the results obtained from the solutions of the classic heat equation (<ref>) (diffusive behavior, exponential decay) and the hyperbolic heat equation (<ref>) (wave behavior, exponential decay). Such properties of the obtained solutions can be applied for analysis of the experimental data and choosing the right model for the description of the heat processes. § ACKNOWLEDGEMENTThe work was supported by Russian Science Foundation [Grant No. 14-11-00599].
^1 Institute for Quantum Science and Engineering, Texas A&M University, College Station, Texas 77843, USA
^2 Max Planck Institute for the Physics of Complex Systems, D-01187 Dresden, Germany

Photosynthesis is the basic process used by plants to convert light energy in reaction centers into chemical energy. The high efficiency of this process is not yet understood. Using the formalism for the description of open quantum systems by means of a non-Hermitian Hamilton operator, we consider initially the interplay of gain (acceptor) and loss (donor). Near singular points it causes fluctuations of the cross section which appear without any excitation of internal degrees of freedom of the system. This process therefore occurs very quickly and with high efficiency. We then consider the excitation of resonance states of the system by means of these fluctuations. This second step of the whole process takes place much more slowly than the first one, because it involves the excitation of internal degrees of freedom of the system. The two-step process as a whole is highly efficient and the decay is bi-exponential. We provide, where possible, the results of analytical studies, and otherwise characteristic numerical results. The similarities of the obtained results to light harvesting in photosynthetic organisms are discussed.

Gain and loss in open quantum systems

Hichem Eleuch^1[email: hichemeleuch@tamu.edu] and Ingrid Rotter^2[email: rotter@pks.mpg.de, corresponding author]

December 30, 2023

§ INTRODUCTION

Photosynthetic organisms capture visible light in their light-harvesting complex and transfer the excitation energy to the reaction center, which stores the energy from the photon in chemical bonds. This process occurs with nearly perfect efficiency. The primary process occurring in the light-harvesting complex is the exciton transfer between acceptor and donor, while the transfer of the energy to the reaction center appears as a secondary process. Both processes are nothing but two parts of the total light harvesting. A few years ago, evidence of coherent quantum energy transfer was found experimentally <cit.>. Recent experimental results <cit.> demonstrated that photosynthetic bio-complexes exhibit collective quantum coherence during primary exciton transfer processes that occur on the time scale of some hundreds of femtoseconds. Furthermore, the coherence in such a system exhibits a bi-exponential decay consisting of a slow component with a lifetime of hundreds of femtoseconds and a rapid component with a lifetime of tens of femtoseconds <cit.>. The long-lived components are correlated with intramolecular modes within the reaction center, as shown experimentally <cit.>.

These results induced various theoretical considerations related to the role of quantum coherence in photosynthesis. For example, the equivalence of quantum and classical coherence in electronic energy transfer is considered in <cit.>. In <cit.>, the fundamental role of noise-assisted transport is investigated. In <cit.>, it is shown that the efficiency is increased by reducing radiative recombination due to quantum coherence. The Hamiltonian of the system in these (and many other) papers is assumed to be Hermitian, although photosynthesis is a process that occurs in an open quantum system. We mention here also the paper <cit.> on the dynamical theory of primary processes of charge separation in the photosynthetic reaction center.
The emphasis in this paper is on the important role of the primary processes, in which light energy is converted into the energy necessary for the living organisms to work. The lifetime of the primarily excited state must be very short. Otherwise there is no chance for the reaction center to catch the energy received from the photosynthetic excitation, which will change, instead, into heat and fluorescence (in the framework of Hermitian quantum physics).

In the description of an open quantum system by means of a non-Hermitian Hamilton operator, the localized part of the system is embedded into an environment. Mostly, the environment is the extended continuum of scattering wavefunctions, see e.g. the review <cit.>. Coherence is an important ingredient of this formalism. Meanwhile the non-Hermitian formalism has been applied successfully to the description of different realistic open quantum systems, see the recent review <cit.>.

The paper <cit.> is one of the oldest references in which the resonance structure of the cross section in the regime of overlapping resonances is considered in the non-Hermitian formalism. In this paper, the resonance structure of the nuclear reaction ^15N+p with two open decay channels is traced as a function of the degree of overlapping of the individual resonances, keeping constant the coupling strength between the localized part of the system and the environment of scattering wavefunctions. The distance between the energies of the individual resonance states is varied by hand. As a result, two short-lived states are formed at a critical value of the degree of overlapping. The widths of all the other states are reduced, because ∑_n=1^N Γ_n has to be constant according to the constant coupling strength between system and environment. These states are called trapped states. In some later papers, this phenomenon is studied as a function of the coupling strength between system and environment and is called segregation of decay widths, see the recent review <cit.>. In these papers, the short-lived states are called superradiant states, which exist together with long-lived subradiant states. This formalism is applied also to the problem of energy transfer in photosynthetic complexes <cit.>, see also <cit.>. In this formalism, the enhancement of the energy transfer is related to the existence of the superradiant state.

In other papers, the resonance trapping phenomenon is related to singular points which exist in the formalism of non-Hermitian quantum physics, see the review <cit.> and the recent paper <cit.>. These singular points have been known in mathematics for many years <cit.>, and are usually called exceptional points (EPs). The most interesting new features caused by the EPs in the non-Hermitian physics of open quantum systems are, firstly, the non-rigid phases of the eigenfunctions and, secondly, the possibility of an external mixing (EM) of the states of the localized part of the system via the environment. Non-rigidity of the phases of the eigenfunctions of the Hamiltonian and an EM of the states are possible only in an open quantum system. They are not involved explicitly in any type of Hermitian quantum physics. Furthermore, superradiant and subradiant states do not appear in this formalism.
Quite the contrary: phenomena that are related in, e.g., <cit.> to their existence are an expression of nothing but the nontrivial properties of the eigenfunctions of a non-Hermitian Hamilton operator, such as non-rigid phases and EM of the wavefunctions. In <cit.>, the dynamics of the system and the efficiency of energy transfer are studied in a non-Hermitian formalism by taking into account noise acting between donor and acceptor, while in <cit.>, the role of protein fluctuation correlations in the energy transfer is investigated and the spin-echo approach is extended to include bio-complexes for which the interaction with dynamical noise is strong.

It is the aim of the present paper to extend the general formalism of non-Hermitian physics of open quantum systems <cit.> by the inclusion of gain, which simulates the acceptor, as well as of loss, which stands for the donor <cit.>. When additionally the coupling of the system to a sink is taken into account, this formalism can be applied to the description of light harvesting in photosynthetic complexes. We underline that this formalism describes the process of photosynthesis as a whole, i.e. as a uniform process. While the first part occurs instantly, the second part of the process may last longer. The formalism is generic. In the future, it has to be applied to concrete systems with realistic parameters.

In Sect. <ref>, we sketch the formalism for the study of an open quantum system with gain and loss, which is basic for the description of photosynthesis. In Sect. <ref>, we additionally include a sink into the formalism, simulated by coupling to a second environment. In both sections we provide analytical as well as numerical results. We discuss and summarize the results in Sect. <ref> and draw some conclusions in Sect. <ref>.

Before providing the formalism for the description of open quantum systems, it is necessary to clarify the meaning of some terms. We use definitions similar to those used in nuclear physics.

* In nuclear physics, channel denotes the coupling of a certain state of the nucleus A to its decay products after emission of a particle a, leaving the residual nucleus (A-a) in a special state. The term channel is equivalent to the embedding of a localized state of the system into an environment. The localized state in nuclear physics is the state of the nucleus A, while the environment is the continuum of scattering wavefunctions of the particle a.

* In contrast to the definition of energy and width of a nuclear state in nuclear physics, we use the definition ε_k = e_k + i/2 γ_k for the complex eigenvalues of the non-Hermitian Hamilton operator H. The widths of decaying states thus have a negative sign <cit.>.

* The term internal mixing of the wavefunctions denotes the direct interaction between two orthogonal eigenfunctions of a Hermitian Hamilton operator, ⟨Φ_i|V|Φ_j≠i⟩. In our calculations, it is supposed to be included in the energies e_k and widths γ_k of the states that define the non-Hermitian Hamilton matrix, see e.g. Eq. (<ref>). An external mixing of two eigenstates of a non-Hermitian Hamilton operator occurs via the environment and is thus a second-order process. It is defined only in an open system.

* The singularity related to the coalescence <cit.> of two eigenvalues of a non-Hermitian Hamilton operator H is mostly called, in recent literature, an exceptional point. In older papers, the equivalent expressions branch point in the complex plane or double pole of the S-matrix are mostly used.
§ OPEN QUANTUM SYSTEMS WITH GAIN AND LOSS

§.§ Hamiltonian

We sketch the features characteristic of an open quantum system with gain and loss <cit.> by considering a localized 2-level system that is embedded in a common continuum of scattering wavefunctions. One of these two states gains particles from the environment by interacting with it, while the other one loses particles to the continuum by decay. For the description of the open quantum system, we use the non-Hermitian Hamilton operator <cit.>

H̃^(2,1) = ( [ ε_1^(1) ≡ e_1^(1) + i/2 γ_1^(1)   ω^(1); ω^(1)   ε_2^(1) ≡ e_2^(1) + i/2 γ_2^(1) ] ).

Here, the ε_i^(1) are the two complex eigenvalues of the basic non-Hermitian operator coupled to the environment 1 (also called channel 1) <cit.>. The e_i^(1) are the energies of the states i and the γ_i^(1) are their widths. One of these eigenvalues describes loss, characteristic of decaying states (γ_2^(1) < 0), while the other one describes gain from the environment (γ_1^(1) > 0) <cit.>. The ω^(1) stand for the coupling matrix elements of the two states via the common environment 1. They are complex <cit.>. The complex eigenvalues ℰ_i^(1) ≡ E_i^(1) + i/2 Γ_i^(1) of H̃^(2,1) give the energies E_i^(1) and widths Γ_i^(1) of the states of the localized part of the system <cit.>.

We will also consider the non-Hermitian Hamilton operator

H̃_0^(2,1) = ( [ ε_1^(1) ≡ e_1^(1) + i/2 γ_1^(1)   0; 0   ε_2^(1) ≡ e_2^(1) + i/2 γ_2^(1) ] ),

which describes the localized part of the open system without coupling of its states via the continuum (ω^(1) = 0). The phases of the eigenfunctions Φ^0_i of H̃_0^(2,1) are rigid (as in Hermitian quantum physics) when γ_1^(1) = -γ_2^(1).

§.§ Eigenvalues

The eigenvalues of H̃^(2,1) are, in general, complex and may be expressed as

ℰ_1,2^(1) ≡ E_1,2^(1) + i/2 Γ_1,2^(1) = (ε_1^(1) + ε_2^(1))/2 ± Z;  Z ≡ 1/2 √((ε_1^(1) - ε_2^(1))^2 + 4 (ω^(1))^2),

where E_i^(1) and Γ_i^(1) stand for the energy and width, respectively, of the eigenstate i. Also here, Γ_i^(1) ≤ 0 for decaying states and Γ_i^(1) ≥ 0 for gaining states <cit.>. The two states may repel each other in accordance with Re(Z), or they may undergo width bifurcation in accordance with Im(Z). When Z = 0 the two states cross each other at a point that is usually called an exceptional point (EP) <cit.>. The EP is a singular point (branch point) in the complex plane where the S-matrix has a double pole <cit.>. According to its definition <cit.>, the EP is meaningful in an open quantum system which is embedded in one common environment c = 1. Correspondingly, we denote e.g. the eigenvalues by ℰ_i^(1).

We now consider the behavior of the eigenvalues when the parametrical detuning of the two eigenstates of H̃^(2,1) is varied, bringing them towards coalescence <cit.>. According to (<ref>), the condition for coalescence reads

Z = 1/2 √((e_1^(1) - e_2^(1))^2 - 1/4 (γ_1^(1) - γ_2^(1))^2 + i (e_1^(1) - e_2^(1))(γ_1^(1) - γ_2^(1)) + 4 (ω^(1))^2) = 0.

We consider two cases that can be solved analytically.

(i) When e_1^(1) = e_2^(1) and ω^(1) is real, the condition

1/4 (γ_1^(1) - γ_2^(1))^2 = 4 (ω^(1))^2  →  γ_1^(1) - γ_2^(1) = ± 4 ω^(1)

for the coalescence of the two eigenvalues, i.e. for an EP, follows from (<ref>). It follows furthermore that

(γ_1^(1) - γ_2^(1))^2 < 16 (ω^(1))^2  →  Z ∈ ℝ,
(γ_1^(1) - γ_2^(1))^2 > 16 (ω^(1))^2  →  Z ∈ iℝ.

Eq. (<ref>) describes the behavior of the eigenvalues away from the EP, where the eigenvalues E_k^(1) differ from the original ones through only a contribution to the energy. The widths, in contrast, remain unchanged, and this situation therefore corresponds to that of level repulsion.
Eq. (<ref>), in contrast, is relevant on the other side of the EP. Here, the resonance states undergo width bifurcation according to Im(Z) ≠ 0. The bifurcation starts in the neighborhood of the EP. Physically, the bifurcation implies that different time scales may appear in the system, while the states remain nearby in energy.

(ii) When e_1^(1) = -e_2^(1) ≠ 0 and ω^(1) is imaginary, the condition

(2e^(1))^2 = 4 (ω^(1))^2  →  2e^(1) = ± 2 ω^(1),

together with γ_1^(1) = γ_2^(1), follows from (<ref>) for the coalescence of the two eigenvalues. Here 2e^(1) ≡ e_1^(1) - e_2^(1). Instead of (<ref>) and (<ref>) we have

(2e)^2 > 4 (ω^(1))^2  →  Z ∈ ℝ,
(2e)^2 < 4 (ω^(1))^2  →  Z ∈ iℝ.

Thus, the EP causes width bifurcation also in this case. However, this case is realized only when γ_1^(1) = γ_2^(1) = 0 at the EP, i.e. when gain and loss vanish at the EP.

§.§ Eigenfunctions

The eigenfunctions of a non-Hermitian Hamilton operator are biorthogonal (for details see <cit.>),

H |Φ_i⟩ = ℰ_i |Φ_i⟩,  ⟨Ψ_i| H = ℰ_i ⟨Ψ_i|.

In the case of the symmetric 2×2 Hamiltonian (<ref>), Ψ_i = Φ_i^* and the eigenfunctions should be normalized according to ⟨Φ_i^*|Φ_j⟩ = δ_ij in order to smoothly describe the transition from a closed system with discrete states to a weakly open one with narrow resonance states. As a consequence of (<ref>), the values of the standard expressions are changed,

⟨Φ_i|Φ_i⟩ = Re(⟨Φ_i|Φ_i⟩);  A_i ≡ ⟨Φ_i|Φ_i⟩ ≥ 1,
⟨Φ_i|Φ_j≠i⟩ = i Im(⟨Φ_i|Φ_j≠i⟩) = -⟨Φ_j≠i|Φ_i⟩;  |B_i^j| ≡ |⟨Φ_i|Φ_j≠i⟩| ≥ 0.

Furthermore, the phase rigidity, which is a quantitative measure of the biorthogonality of the eigenfunctions,

r_k ≡ ⟨Φ_k^*|Φ_k⟩ / ⟨Φ_k|Φ_k⟩ = A_k^-1,

is smaller than 1. Far from an EP, r_k ≈ 1, while it approaches the value r_k = 0 when an EP is approached.

The Hamiltonian (<ref>) describes the system around the EP without any mixing of its states via the environment, since ω^(1) = 0 corresponds to vanishing EM of the eigenstates. In order to determine quantitatively the strength of the EM, we represent the eigenfunctions Φ_i of H̃^(2,1) in the set of eigenfunctions {Φ_i^0} of H̃_0^(2,1),

Φ_i = ∑_j b_ij Φ_j^0;  b_ij = ⟨Φ_j^0*|Φ_i⟩,

under the condition that the b_ij are normalized by ∑_j (b_ij)^2 = 1. The coefficients |b_ij|^2 differ from the (b_ij)^2. They contain the information on the strength of the EM via the environment, which is determined by the value of ω^(1).

For illustration, we consider the EM of the wavefunctions Φ_1 and Φ_2 around an EP in the two cases discussed in Sect. <ref>. (i) e_1^(1) = e_2^(1) and ω^(1) ∈ ℝ: according to (<ref>), the strength of the EM via the environment is determined by the difference |γ_1^(1)| - |γ_2^(1)| of the widths (which have different signs). It thus depends on the fluctuations of the γ_i^(1). (ii) e_1^(1) = -e_2^(1) and ω^(1) ∈ iℝ: according to (<ref>), the strength of the EM is related to the difference |e_1^(1)| - |e_2^(1)| of the energies, i.e. to the fluctuations of the e_i^(1). This case is, however, realized only when gain and loss vanish at the EP (i.e. γ_1^(1) = γ_2^(1) = 0 at the EP).

At the EPs, the two corresponding eigenfunctions are not orthogonal. Instead,

Φ_1^cr → ± i Φ_2^cr;  Φ_2^cr → ∓ i Φ_1^cr,

according to analytical and numerical results <cit.>. We underline once more that an EP is, according to its definition, related to the common environment in which the system is embedded. In other words, it is well defined under the condition that the system is embedded in only one continuum.
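The eigenvalue formula (<ref>) and the phase rigidity (<ref>) are simple enough to evaluate directly. The following Python sketch, with illustrative parameters of our own choosing, diagonalizes the 2×2 Hamiltonian (<ref>) for a gain/loss pair with e_1^(1) = e_2^(1) = 0, γ_1^(1) = -γ_2^(1) = g and real ω^(1), and illustrates level repulsion below, and width bifurcation above, the EP condition γ_1^(1) - γ_2^(1) = ± 4 ω^(1):

import numpy as np

def eigensystem(e1, g1, e2, g2, w):
    """Eigenvalues and c-normalized (<Phi*|Phi> = 1) eigenvectors of the
    symmetric non-Hermitian 2x2 Hamiltonian; an illustrative sketch."""
    H = np.array([[e1 + 0.5j * g1, w],
                  [w,              e2 + 0.5j * g2]])
    vals, vecs = np.linalg.eig(H)
    out = []
    for k in range(2):
        v = vecs[:, k]
        v = v / np.sqrt(np.sum(v * v))   # biorthogonal normalization
        out.append((vals[k], v))
    return out

def phase_rigidity(v):
    """r_k = |<Phi_k^*|Phi_k>| / <Phi_k|Phi_k>; ~1 far from an EP, ->0 at it."""
    return abs(np.sum(v * v)) / np.sum(np.abs(v) ** 2)

# gamma_1 - gamma_2 = 2g, so the EP sits at g = 2*omega (here g = 1.0).
w = 0.5
for g in (0.2, 0.9, 1.1, 3.0):
    (E1, v1), (E2, v2) = eigensystem(0.0, g, 0.0, -g, w)
    print(f"g={g:3.1f}  E1={E1:+.3f}  E2={E2:+.3f}  r1={phase_rigidity(v1):.3f}")

Below g = 1.0 the two eigenvalues are real and split in energy (level repulsion, vanishing widths); above it they share E = 0 and split in width (bifurcation), while r_1 drops sharply as the EP is approached.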
§.§ Schrödinger equation with source term

The Schrödinger equation (H̃^(2,1) - ℰ_i^(1)) |Φ_i^(1)⟩ = 0 may be rewritten as a Schrödinger equation with source term <cit.>,

(H̃^(2,1)_0 - ℰ_i^(1)) |Φ_i^(1)⟩ = - ( [ 0  ω^(1); ω^(1)  0 ] ) |Φ_i^(1)⟩.

In this representation, the coupling ω^(1) of the states i and j ≠ i of the localized system via the common environment of scattering wavefunctions (EM) is contained solely in the source term. The source term vanishes when e_1^(1) = e_2^(1) around the EP under the condition γ_1^(1) = γ_2^(1), according to (<ref>), which is fulfilled when γ_i=1,2^(1) = 0.

Far from EPs, the coupling of the localized system to the environment influences the spectroscopic properties of the system, in general, only marginally <cit.>. The influence is, however, non-vanishing also in this case, see e.g. the experimental results <cit.>. In the neighborhood of EPs, by contrast, the coupling between system and environment, and therewith the source term, plays an important role for the dynamics of the open quantum system. The reason is, according to mathematical studies, that the source term causes nonlinear effects in the Schrödinger equation (<ref>) around an EP. For details see <cit.>.

§.§ Resonance structure of the S-matrix

Let us consider the resonance part of the S-matrix, from which the resonance structure of the cross section can be calculated,

σ(E) ∝ |1 - S(E)|^2.

A unitary representation of the resonance part of the S-matrix in the case of two resonance states coupled to a common continuum of scattering wavefunctions reads <cit.>

S = (E - E_1 - i/2 Γ_1)(E - E_2 - i/2 Γ_2) / [(E - E_1 + i/2 Γ_1)(E - E_2 + i/2 Γ_2)].

Here, the influence of the EPs onto the cross section is contained in the eigenvalues ℰ_i = E_i + i/2 Γ_i. The expression (<ref>) therefore yields reliable results also when the phase rigidity is reduced, r_k < 1.

Let us assume real ω^(1) and γ_1^(1) = -γ_2^(1). First we consider the case corresponding to the condition (<ref>), i.e. large coupling strength ω^(1) of the system to the environment of continuous scattering wavefunctions. In this case Γ_1 = -Γ_2 = 0 and, according to (<ref>),

S = (E - E_1)(E - E_2) / [(E - E_1)(E - E_2)] = 1.

In the other case, (<ref>), we have E_1 = E_2, Γ_1 = -Γ_2 ≠ 0, and

S = (E - E_1 - i/2 Γ_1)(E - E_1 + i/2 Γ_1) / [(E - E_1 + i/2 Γ_1)(E - E_1 - i/2 Γ_1)] = 1.

In both cases, S = 1, i.e. σ(E) = 0 according to (<ref>). This result corresponds to the well-known fact that EPs cannot be identified in the resonance structure of the S-matrix, and therefore also not in the resonance structure of the cross section. Most important, however, is the result that no resonances will be excited, due to γ_1^(1) = -γ_2^(1).

The result S = 1 is violated when the conditions ω^(1) ∈ ℝ and γ_1^(1) = -γ_2^(1) are not exactly fulfilled. This may happen, e.g., under the influence of external random (stochastic) processes that cause fluctuations of the γ_i^(1). In such a case, S < 1, and the energy (or information) will be transferred with an efficiency of nearly 100 % (because no resonances can be excited under this condition in the localized part of the system). Results for this case can be obtained only numerically.
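A minimal numerical sketch of Eqs. (<ref>) and (<ref>) illustrates the point; the parameter values below are ours and purely illustrative. For exactly balanced gain and loss with E_1 = E_2 the cross section vanishes identically, while a small perturbation of the balance makes it finite:

import numpy as np

def sigma(E, E1, G1, E2, G2):
    """Cross section |1 - S(E)|^2 for the unitary two-pole S-matrix
    (overall normalization omitted in this sketch)."""
    S = ((E - E1 - 0.5j * G1) * (E - E2 - 0.5j * G2)) / \
        ((E - E1 + 0.5j * G1) * (E - E2 + 0.5j * G2))
    return np.abs(1.0 - S) ** 2

E = np.linspace(-2.0, 2.0, 5)
# Exact balance, Gamma_1 = -Gamma_2 and E_1 = E_2: S = 1, sigma vanishes.
print(sigma(E, 0.0, 0.4, 0.0, -0.4))
# A small fluctuation of one width breaks the balance: sigma is non-zero.
print(sigma(E, 0.0, 0.4, 0.0, -0.38))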
§.§ Numerical results: one-channel case

§.§.§ Merge of states with gain and loss

Let us first consider the results that are obtained by using a parametric dependence of the energies e_i^(1) and widths γ_i^(1) of the states of the localized part of the system analogous to that used in <cit.> for a decaying system. The results are shown in Fig. <ref>. In this figure, the existence of an EP at the parameter value a = a^cr = 2/3 can clearly be seen. Here, the two states are exchanged (Figs. <ref>.a,b), the phase rigidity r_i approaches the value 0 (Figs. <ref>.c,d), and the EM of the states via the continuum increases without limit (Fig. <ref>.e). The contour plot of the cross section (Fig. <ref>.f) shows the wavefunctions of the two states: while the eigenvalues of H̃^(2,1) cannot be seen, according to (<ref>) and (<ref>), the eigenfunctions show some fluctuating behavior around the positions of the eigenstates, according to the finite (non-vanishing) range of their influence (see Figs. <ref>.c,d,e). These fluctuations of the eigenfunctions can be seen in the contour plot. Although they follow the positions of the eigenvalues, their nature is completely different from that of the eigenvalue trajectories. The eigenfunction trajectories in Fig. <ref>.f show the exchange of the two states at the EP. That means: the state with positive width turns into a state with negative width, and vice versa. This underlines once more that the two trajectories shown in Fig. <ref>.f really have nothing in common with the eigenvalue trajectories of resonance states, the widths of which are always negative (or zero at most).

We underline once more that the results shown in Fig. <ref> are formally similar to those obtained and discussed in <cit.> for a decaying system. In the latter case, both states which are exchanged at the EP are of the same type: they are resonance states with negative widths. It is interesting to see from the numerical results (Fig. <ref>) that the non-Hermitian Hamilton operator H can, indeed, be used for the description of these two different types of open quantum systems, as suggested in Sect. <ref> <cit.>.

Additionally, we show some numerical results for the cases that, respectively, the distance in energy of the two states is smaller (Fig. <ref>) and the EM as well as the widths |γ_i^(1)| differ more from one another (Fig. <ref>) than in Fig. <ref>. The eigenvalue and eigenfunction figures (Figs. <ref>.a-e, <ref>.a-e, <ref>.a-e) are similar to one another and clearly show the signatures of an EP at a certain critical value of the parameter a = a_cr. The contour plots (Figs. <ref>.f, <ref>.f, <ref>.f) differ, however, from one another. When the states are nearer to one another in energy, the two states with negative and positive width merge (Fig. <ref>.f). Under the influence of stronger EM (stronger coupling strength ω^(1) between system and environment), as well as of a larger difference between the two values |γ_i^(1)|, the extension of the region with non-vanishing cross section is enlarged in relation to the energy (Fig. <ref>.f). In any case, the cross section vanishes around a = a_cr. Resonance states are not excited.

§.§.§ Level repulsion of states with gain and loss

More characteristic of an open quantum system with gain and loss than the results in Sect. <ref> are the analytical results given in Sect. <ref>. According to these results, the cross section is zero when γ_1^(1) = -γ_2^(1), e_1^(1) = e_2^(1), and ω^(1) ∈ ℝ. Under the influence of an EP, which causes differences between the original spectroscopic values ε_i^(1) ≡ e_i^(1) + i/2 γ_i^(1) and the eigenvalues ℰ_i^(1) ≡ E_i^(1) + i/2 Γ_i^(1) of H̃^(2,1), a non-vanishing cross section is expected when at least one of the conditions Γ_1^(1) = -Γ_2^(1) and E_1^(1) = E_2^(1), together with ω^(1) ∈ ℝ, is not fulfilled. In Fig. <ref> we show the corresponding numerical results obtained for two neighboring states, e_1^(1) ≈ e_2^(1), and ω^(1) almost real.
We fix the energies e_i^(1) and vary the widths γ_i^(1) parametrically, see the dashed lines in Figs. <ref>.a,b. The results show an EP at a = -1.8 and the hint of another EP at a = 1.8. At the EP, the phase rigidity approaches the value zero, r_i → 0 (Fig. <ref>.c), and the EM of the states is extremely large, |b_ij| → ∞ (Fig. <ref>.e).

Of special interest is the parameter range between the two EPs. Here, r_i → 1 at a ≈ 0 (Figs. <ref>.c,d). At this parameter value, the level repulsion is maximal, and the two eigenfunctions of the non-Hermitian Hamiltonian H̃^(2,1) are (almost) orthogonal. An analogous result is known from calculations for decaying systems, i.e. for systems with excitation of resonance states <cit.>. In these calculations, the energies are varied parametrically and ω is almost imaginary (Fig. 1 in <cit.>). There, the two eigenfunctions of the non-Hermitian Hamilton operator are (almost) orthogonal (r_i → 1) at maximum width bifurcation (instead of at maximum level repulsion as in Fig. <ref>). In any case, the two eigenstates of the non-Hermitian operator turn irreversibly into two states with rigid phases, in spite of the non-Hermiticity of the Hamiltonian. This unexpected result occurs due to the evolution of the system to the point of, respectively, maximum level repulsion and maximum width bifurcation, which is driven exclusively by the nonlinear source term of the Schrödinger equation, see Sect. <ref>. The eigenfunctions of these two states are mixed.

In order to obtain a better understanding of this result, we mention here another unexpected result of non-Hermitian quantum physics, namely the fact that a non-Hermitian Hamilton operator may have real eigenvalues <cit.>. This fact has been very well known in the literature for a long time; for references see the review <cit.>. The corresponding states are usually called bound states in the continuum.

Most interesting for a physical system is the contour plot of the cross section (Fig. <ref>.f). According to the analytical results discussed in Sect. <ref>, the cross section vanishes far from the parameter range that is influenced by an EP. It does not, however, vanish completely in the parameter range between the two EPs. Around the EP at a = -1.8, the conditions for vanishing cross section are quite well fulfilled, while this is not the case around the other EP at a = 1.8. In approaching the two EPs by increasing and decreasing, respectively, the value of a, the cross section vanishes around a = -1.8 and is non-vanishing around a = 1.8. The cross section vanishes also around the point of maximum level repulsion, at which the two eigenfunctions of H̃^(2,1) are orthogonal (and not biorthogonal).

Additionally, we performed calculations (not shown in the paper) with the reduced value ω^(1)_red = 0.5(0.9 + 0.1i) in order to determine the role of EM in the cross section picture. The obtained results are similar to those shown in Figs. <ref>.a-f. The parameter range influenced by the two EPs is, however, smaller when ω^(1) is reduced: it ranges from a ≈ -1 to a ≈ 1 when ω^(1) = ω^(1)_red. Accordingly, the region of the non-vanishing cross section in the contour plot shrinks in relation to a, and also in relation to the energy. In calculations with vanishing external mixing (ω^(1) = 0), the cross section vanishes everywhere.
§ OPEN QUANTUM SYSTEM WITH GAIN AND LOSS COUPLED TO TWO ENVIRONMENTS

§.§ Hamiltonian for coupling to two environments

Let us consider the 4×4 non-Hermitian matrix

H̃^(2,2) = ( [ ε_1^(1)  ω^(1)  0  0; ω^(1)  ε_2^(1)  0  0; 0  0  ε_1^(2)  ω^(2); 0  0  ω^(2)  ε_2^(2) ] ).

Here, ε_i^(1) ≡ e_i^(1) + i/2 γ_i^(1) and ε_i^(2) ≡ e_i^(2) + i/2 γ_i^(2) are the complex eigenvalues of the basic non-Hermitian operator H̃^(2,2) relative to channel c = 1 and c = 2, respectively <cit.>. The two channels (environments) are independent of and orthogonal to one another, which is expressed by the zeros in the matrix (<ref>). One of the channels may be related to gain and loss <cit.> (acceptor and donor), considered in the previous section <ref>, while the other channel may simulate a sink. In this case, the two widths γ_1^(1) and γ_2^(1) have different signs relative to the first channel. Relative to the second channel, however, both γ_i^(2) are negative, according to a usual decay process of a resonance state.

The ω^(1) and ω^(2) stand for the coupling matrix elements between the two states i = 1, 2 of the localized part of the open quantum system and the environments c = 1 and 2, respectively. In the case considered above, these two environments are completely different from one another and should never be related to one another. In more detail: an EM of the considered states may be caused only by ω^(1) or by ω^(2), and never by both values at the same time <cit.>. This is guaranteed when ω^(2) = 0, which is fulfilled when there is only one state in the second channel. When there are more states, then |ω^(2)| should be much smaller than |ω^(1)| (here we point to the general result that the values ω are related to the widths γ_i of the states <cit.>).

The values |γ_i^(1)| and |γ_i^(2)| are independent of one another and express the different time scales characteristic of the two channels. While the |γ_i^(1)| will usually be very large, the |γ_i^(2)| are generally much smaller. Accordingly, the two-step process as a whole will show a bi-exponential decay: at first the decay follows the fast exponential process; somewhere at its tail it will, however, switch over into the exponential decay of the slow process.

The Hamiltonian which describes vanishing coupling of the states of the localized part of the open quantum system to both environments is

H̃_0^(2,2) = ( [ ε_1^(1)  0  0  0; 0  ε_2^(1)  0  0; 0  0  ε_1^(2)  0; 0  0  0  ε_2^(2) ] ),

by analogy to (<ref>). It does not contain any EM via an environment.
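For orientation, the block structure of (<ref>) can be written down directly; since the two channel blocks never mix, the eigenvalues of the full 4×4 matrix are simply the union of the eigenvalues of the two 2×2 blocks. A short Python sketch with illustrative numbers of our own choosing (a gain/loss pair in channel 1, two weakly coupled decaying states in channel 2):

import numpy as np

def H22(eps1, w1, eps2, w2):
    """Block-diagonal 4x4 Hamiltonian: one 2x2 block per channel;
    eps1 and eps2 are pairs of complex energies e + i*gamma/2."""
    Z = np.zeros((2, 2), dtype=complex)
    B1 = np.array([[eps1[0], w1], [w1, eps1[1]]])
    B2 = np.array([[eps2[0], w2], [w2, eps2[1]]])
    return np.block([[B1, Z], [Z, B2]])

# Channel 1: gain/loss pair with real coupling; channel 2: two decaying
# states with a much weaker, almost imaginary coupling (|w2| << |w1|).
H = H22((0.0 + 0.25j, 0.0 - 0.25j), 0.5,
        (0.5 - 0.05j, -0.5 - 0.05j), 0.05j)
print(np.round(np.linalg.eigvals(H), 3))   # two eigenvalues per channel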
§.§ Eigenvalues and eigenfunctions of H̃^(2,2)

The eigenvalues ℰ_i^(c) ≡ E_i^(c) + i/2 Γ_i^(c) and eigenfunctions Φ_i^(c) of (<ref>) are characterized by two numbers: the number i of the state (i = 1, 2) of the localized part of the system, and the number c of the channel (c = 1, 2), called environment, in which the system is embedded. Generally, E_i^(1) ≠ E_i^(2) and Γ_i^(1) ≠ Γ_i^(2). Also the wavefunctions Φ_i^(1) and Φ_i^(2) differ from one another due to the EM of the eigenstates via the environments c = 1 and c = 2, respectively. From a mathematical point of view, the system therefore has four states.

An EP influences the dynamics of the open quantum system also in the two-channel case. Without an EP in the considered parameter range in relation to both channels, we have E_i^(1) ≈ E_i^(2), Γ_i^(1) ≈ Γ_i^(2) and Φ_i^(1) ≈ Φ_i^(2). Accordingly, one has to consider effectively only two states. Under the influence of an EP relative to c = 1 (or/and relative to c = 2), the eigenvalues and eigenfunctions will, however, be different from one another, E_i^(1) ≠ E_i^(2), Γ_i^(1) ≠ Γ_i^(2) and Φ_i^(1) ≠ Φ_i^(2), in the corresponding parameter range. We therefore have to consider effectively four states in this case.

According to <cit.>, an EP is defined when the system is embedded in one common environment. Under this condition, it causes nonlinear processes in a physical system, which is the crucial factor for the dynamical properties of an open quantum system <cit.>. This is valid not only for systems all states of which decay (corresponding to some loss), but also for systems with loss and gain, as shown in <cit.>.

Due to the nonlinear processes occurring near an EP, it is difficult to obtain analytical solutions for the eigenvalues and eigenfunctions of (<ref>). We will provide the results of some numerical simulations, above all with e_1^(1) ≈ e_2^(1), almost real ω^(1) and almost imaginary ω^(2), which is the most interesting and general case for a system with gain and loss that is coupled to a sink (see the analytical results obtained with e_1^(1) = e_2^(1) and ω^(1) ∈ ℝ, Eq. (<ref>), and the corresponding results for decaying systems in <cit.>).

§.§ Schrödinger equation with source term and coupling to two environments

Using (<ref>), we can write down the Schrödinger equation with source term for the two-channel case in analogy to (<ref>) for the one-channel case. The corresponding equation reads

(H̃^(2,2)_0 - ℰ_i^(c)) |Φ_i^(c)⟩ = - ( [ 0  ω^(1)  0  0; ω^(1)  0  0  0; 0  0  0  ω^(2); 0  0  ω^(2)  0 ] ) |Φ_i^(c)⟩.

The source term depends on the coupling of the system to both channels, i.e. on ω^(1) and on ω^(2). We will consider the general case with two channels (two environments) in which |ω^(2)| ≪ |ω^(1)|. We repeat here that, according to their definition <cit.>, EPs occur only in the one-channel case, i.e. only in the submatrices related either to channel 1 or to channel 2. They are not defined in the 4×4 matrix (<ref>). However, each EP in one of the two submatrices in (<ref>) influences the dynamics of the open two-channel system. This will be shown in the following section by means of numerical results for the case that there is an EP in the first channel, which simulates acceptor and donor (gain and loss), while the second channel, being of standard type with resonance states, may or may not have an EP.

§.§ Numerical results: two-channel case

We performed some calculations for the two-channel case by starting from the calculations for the one-channel case in, respectively, Figs. <ref> and <ref>, and by adding a second channel that describes decaying states (corresponding to loss). There are, of course, very many possibilities for choosing the number of states as well as the parameters for the second channel. One possibility is to keep the parameters constant while varying the parameter a of the first channel. Another possibility is to relate them directly to the parameter a, or to introduce another independent parameter b. The choice should correspond to the physical situation considered. The aim of our calculations is to illustrate the influence of a second channel onto the eigenvalues and eigenfunctions of H̃^(2,2) and onto the contour plot of the cross section. We exemplify this by choosing parameter-dependent values ε_i^(1) in the first channel and parameter-independent values ε_i^(2) in the second channel. In the following, we show a few characteristic results.
§.§.§ Merge of states with gain and loss; second channel with two states

We start these calculations with two channels by choosing two merged states with gain and loss according to Fig. <ref>. The second channel contains two resonance states i = 1, 2 with negative widths and |γ_i^(2)| ≪ |γ_i^(1)|, see Fig. <ref>. The eigenvalue and eigenfunction pictures, Figs. <ref>.a,b and c,d,e, respectively, show the eigenvalues and eigenfunctions of the first channel (Fig. <ref>.a-e) as well as the eigenvalues and eigenfunctions of the second channel. The latter are constant as a function of the parameter a, which follows from the assumption of their parameter independence. The contour plots of the cross section (Figs. <ref>.f and <ref>.f) are different from one another. Common to both of them is that the cross section does not vanish in a finite range of the energy around E ≈ 0 for all a. Under the influence of the two states in the second channel, which lie exactly in this energy and parameter range, the cross section is somewhat reduced. It does, however, not vanish.

These (and similar) simulations clearly show the following result. The fluctuations of the cross section, which are caused by the merging of two states with gain and loss in the first channel, excite resonance states in the second channel. This happens although the nature of the two channels is completely different. In the first channel, internal degrees of freedom of the system are not excited, while the appearance of the resonance states in the second channel occurs via excitation of internal degrees of freedom of the system.

§.§.§ Level repulsion of states with gain and loss; second channel with one state

In these calculations we start from the results shown in Fig. <ref> for the first channel. The second channel contains only one state. The eigenvalue and eigenfunction pictures, Figs. <ref>.a-e, contain the eigenvalue and eigenfunction trajectories of Fig. <ref>.a-e as well as those of the second channel, e.g. the energy trajectories at E_i = 0.5 and width trajectories at Γ_i/2 = -0.05. The phase rigidity approaches the value 1 at a = 0 in both cases. The corresponding contour plot of the cross section is shown in Fig. <ref>.f. It is related to the contour plot of the first channel (Fig. <ref>.f) and additionally shows the parameter-independent state of the second channel in the whole parameter range.

We performed further calculations with parameters similar to those used in Fig. <ref>.a-e and show two of the corresponding contour plots in Figs. <ref>.g,h. Both contour plots are obtained with the comparably large width γ_1^(2)/2 = -0.5; the two energies e_1^(2) are, however, different from one another. The difference between e_1^(2) = 0.5 and e_1^(2) = -0.5 can clearly be seen in the corresponding contour plots, Figs. <ref>.g and h.

The results of these simulations with level repulsion in the first channel and one state in the second channel show the same characteristic features as those discussed above for merging states. The fluctuations observed in the first channel are able to excite resonance states in the second channel.

§.§.§ Level repulsion of states with gain and loss; second channel with two states

The situation with two states in the second channel is richer than that with only one state, because the two states can mix via the common continuum. We show results for one special case in Fig. <ref>. As in Figs. <ref> and <ref>, the eigenvalue figures <ref>.a,b contain the eigenvalue trajectories of both channels.
The eigenfunction trajectories point to the existence of an EP in the second channel: the phase rigidity of the states related to the second channel is independent of the parameter a, as expected. It is, however, smaller than 1. Also the mixing |b_ij| of the two wavefunctions via the common continuum in the second channel shows small deviations from the expectations. The relation of these results for the eigenfunctions of H̃^(2,2) to an EP in the second channel is discussed in detail in appendix <ref>. The contour plot, Fig. <ref>.f, shows the same characteristic features as Fig. <ref>.f. Both its relation to the contour plot of the first channel (Fig. <ref>.f) and the parameter-independent state of the second channel can clearly be seen. Thus, the fluctuations observed in the first channel excite resonance states in the second channel also in this case.

Summarizing the numerical results shown in the three figures <ref> to <ref>, we state the following. Gain and loss in the first channel and excitation of resonance states in the second channel constitute a uniform process. This process can be described as a whole in the formalism for the description of open quantum systems which is used in the present paper.

§ DISCUSSION AND SUMMARY OF THE RESULTS

In our paper, we considered gain and loss in an open quantum system <cit.>. Of special interest is the interplay of these two opposed processes in the neighborhood of singular points, where it causes fluctuations of the cross section. These fluctuations are observable and can excite resonance states in the system. The time scale of the fluctuations and that of the excitation of resonance states are very different. The fluctuations of the cross section occur quickly, without any excitation of internal degrees of freedom of the system. The excitation of resonance states is, however, much slower, and internal degrees of freedom of the system are involved. The results of our calculations therefore meet the condition that the lifetime of the primary process in the photosynthetic reaction center has to be very short. Otherwise the energy received from the photosynthetic excitation will change into heat and fluorescence <cit.>. This is a statement of Hermitian quantum physics, since the change of energy into heat and fluorescence is impossible in non-Hermitian quantum physics according to the results of our calculations. Here the photosynthesis does not excite any eigenstate of the non-Hermitian Hamiltonian H. Instead, the primary process occurs due to fluctuations of the eigenfunctions of H around EPs. The mechanism of photosynthetic excitation in non-Hermitian quantum physics is therefore, as a matter of principle, different from that in Hermitian quantum physics.

We first sketched the formalism by means of which both the interplay between gain and loss in an open quantum system <cit.> and the excitation of resonance states can be described as a uniform process. In any case, the widths γ_i of the states are (generally) different from zero. In the first case we have two states with different signs of the widths (corresponding to gain and loss), while the states in the second case are standard decaying (resonance) states with negative sign <cit.>. Generally, the cross section related to the two states with gain and loss vanishes. Deviations from this rule appear around the position of an eigenstate and in the neighborhood of singular (exceptional) points, where they may cause non-vanishing fluctuations of the cross section. These fluctuations have nothing in common with resonances.
Rather, they are merely deviations from the vanishing value of the cross section and are not related to the excitation of any internal degrees of freedom of the system. They therefore occur with an efficiency of nearly 100 % on a very short time scale.

The fluctuations caused by the interplay of the states with gain and loss are observable and may excite resonance states of the system after a comparably long time (which corresponds to the widths Γ_i of these states). The whole process of gain and loss together with the excitation of resonance states is therefore characterized by two very different time scales: the quick process which creates the fluctuations, and the slow process which is related to the excitation of resonance states. The decay is thus bi-exponential. Initially, it is determined by the quick process of the interplay between gain and loss. Somewhere at its tail, however, it will switch over to the slow process with excitation of internal degrees of freedom of the system.

The states with gain and loss, as well as the resonance states excited by the fluctuations, can each interact via a common environment into which the corresponding states are embedded. The two environments (channels) will never mix. This requirement is guaranteed in our formalism due to the very different time scales of the two processes. In any case, this so-called external mixing of the states occurs in addition to the direct, so-called internal mixing of the states, which in our calculations is supposed to be contained in the complex energies ε_i ≡ e_i + i γ_i/2 of the states.

According to the numerical results of our paper, the fluctuations are very robust. They appear in a relatively large finite parameter range around the positions of the eigenstates and around EPs.

Finally, we mention a few interesting results which are characteristic of open quantum systems, including those considered in the present paper.

– The states of an open quantum system may interact via a common environment into which the system is embedded. This mixing is usually called external interaction.

– The states of an open quantum system may have positive or negative widths. The states with positive width <cit.> gain excitons (or information) from the environment, while those with negative width lose excitons (or information) due to their coupling to the environment. As a function of a parameter, gain may pass into loss and vice versa <cit.>.

– The phases of the eigenfunctions of a non-Hermitian operator are, generally, not rigid, see Figs. <ref> to <ref> and <cit.>.

– Irreversible processes determine the evolution of an open quantum system up to the occurrence of orthogonal eigenstates at maximum width bifurcation or level repulsion, see Figs. <ref>, <ref>, <ref> and <cit.>.

We mention further that results similar to those discussed above for systems with two states appear also in calculations for systems with more than two states, see <cit.>. This holds true also for systems with gain and loss.

§ CONCLUSIONS

In our paper, we provided some results for a two-step process which we obtained in the framework of the non-Hermitian formalism <cit.> for the description of open quantum systems. The first step is the interplay between gain and loss of information (excitons) from an environment, while the second step is the excitation of a resonance state. The two steps are treated as two parts of the whole process.
The total process might simulate photosynthesis: the first step is the capture of light in the light-harvesting complex, while the second step is the transfer of the excitation energy to the reaction center, which stores the energy from the photon in chemical bonds. That means: gain simulates the acceptor for light, and loss stands for the donor, which excites a resonance state and simulates the coupling to the sink. Altogether, the energy of the light is transferred to the reaction center of the light-harvesting complex. The obtained results are very robust, and fluctuations play an important role. The results show some characteristic features which correspond, indeed, to those discussed in the literature for photosynthesis. Most interesting are the following results of our calculations:

1. the efficiency of energy transfer is nearly 100 %;
2. the energy transfer takes place on a very short time scale;
3. the storage of the energy in the reaction center occurs on a much longer time scale;
4. according to points 2 and 3, the decay is bi-exponential.

In future studies, the theoretical results have to be confirmed by application of the formalism to the description of concrete systems, in close cooperation between theory and experiment.

Acknowledgment: We are indebted to J.P. Bird for valuable discussions.

§ EXCEPTIONAL POINT IN THE SECOND CHANNEL

At an EP the two eigenfunctions of a non-Hermitian Hamilton operator H are exchanged according to (<ref>). In more detail: tracing the eigenfunctions of H as a function of a certain parameter b, the two eigenfunctions jump according to (<ref>) at the critical parameter value b = b^cr, which defines the position of the EP. Some years ago, it was shown <cit.> that the influence of the EP is not restricted to the jump occurring at b^cr. It appears rather in a finite parameter range of b in which the wavefunctions of the two states are mixed according to

Φ_i^ch = β_k Φ_k ± i β_l Φ_l, with k ≠ l.

The two wavefunctions Φ_i^ch vary smoothly (i.e. without any jump of the sign of their components) everywhere but at b = b^cr. Using the representation Φ_i^ch = |Φ_i^ch| e^(iθ_i), the angle θ_i depends on the parameter b. After removing a common phase factor, it follows that θ_i^(1) → π/4 and ± 3π/4, respectively, in approaching b = b^cr, and θ_i^(2) → 0 or π when b is far from the critical region around b^cr. In between the values θ_i^(1) and θ_i^(2), the angle θ_i varies smoothly.

Corresponding to the dependence of θ_i on the parameter b, also the phase rigidity (<ref>) depends on this parameter in a certain finite parameter range. The phase rigidity is determined by the ratio

r_i ≡ Re[(β_k + β_l)^2] / |β_k + β_l|^2,

which approaches 0 at the EP (due to |β_k + β_l|^2 → ∞) and 1 far from the EP (because there the wavefunctions are almost real). The intermediate values of the phase rigidity r_i are determined by the expression (<ref>) calculated with the actual values β_k and β_l. The results of these calculations give values for r_i that lie between the two limiting values 0 and 1.

Fig. <ref> shows numerical results for the eigenfunctions of H which are obtained from calculations with two channels and two states in the second channel. We see the values r_i for the states of the first channel (see Figs. <ref> and <ref>) as well as those for the two states of the second channel. They are independent of one another. The r_i related to the second channel are constant in the whole parameter range a shown in the figures (which is defined for the first channel). They may be smaller than 1, see Fig. <ref>.
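A one-line numerical sketch of the ratio (<ref>) makes the two limits explicit. Here β_k + β_l is replaced by a hypothetical unit-modulus complex number whose phase θ is swept by hand, which is only an illustration of the formula, not of the full two-channel calculation:

import numpy as np

def r_phase(beta_k, beta_l):
    """Phase-rigidity ratio Re[(b_k + b_l)^2] / |b_k + b_l|^2 (sketch)."""
    z = beta_k + beta_l
    return (z * z).real / abs(z) ** 2

# theta = 0 corresponds to almost real wavefunctions far from the EP,
# theta = pi/4 to the critical mixing at the EP:
for theta in np.linspace(0.0, np.pi / 4, 5):
    z = np.exp(1j * theta)          # hypothetical beta_k + beta_l
    print(f"theta={theta:.3f}  r={r_phase(z, 0.0):.3f}")

The ratio equals cos(2θ) for this unit-modulus choice, so it interpolates smoothly between 1 far from the EP and 0 at θ = π/4, as stated above.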
The results obtained for the phase rigidity r_i of the two states in the second channel may be considered as a proof of the finite parameter range in which an EP influences the properties of the system. The value r_i is nothing but an expression for the distance of the system from an EP in the second channel: the larger r_i, the more distant is the EP, while r_i → 0 indicates that the EP is approached. Thus, the value r_i in the eigenfunction pictures of the two-channel system with two states in the second channel, additionally to those of the one-channel system (Figs. <ref> and <ref>), allows us to determine the position of the EP in the second channel.

§ REFERENCES

[engel] G. S. Engel, T. R. Calhoun, E. L. Read, T.-K. Ahn, T. Mancal, Y.-C. Cheng, R. E. Blankenship, and G. R. Fleming, Nature 446, 782 (2007)
[engel2] H. Lee, Y.-C. Cheng, and G. R. Fleming, Science 316, 1462 (2007); R. J. Sension, Nature 446, 740 (2007)
[engel-ed] M. Mohseni, Y. Omar, G. S. Engel, and M. B. Plenio (eds.), Quantum Effects in Biology, Cambridge University Press, Cambridge, UK, 2014
[fleming] H. Dong and G. R. Fleming, J. Phys. Chem. B 118, 8956 (2014)
[romero] E. Romero, R. Augulis, V. I. Novoderezhkin, M. Ferretti, J. Thieme, D. Zigmantas, and R. van Grondelle, Nature Physics 10, 676 (2014); S. F. Huelga and M. B. Plenio, Nature Physics 10, 621 (2014)
[briggs] J. S. Briggs and A. Eisfeld, Phys. Rev. E 83, 051911 (2011)
[huelga] F. Caruso, A. W. Chin, A. Datta, S. F. Huelga, and M. B. Plenio, J. Phys. Chem. C 131, 105106 (2009)
[scully1] M. O. Scully, Phys. Rev. Lett. 104, 207701 (2010); E. A. Sete, A. Svidzinsky, H. Eleuch, R. D. Nevels, and M. O. Scully, Journ. Mod. Opt. 57, 1311 (2010); A. A. Svidzinsky, K. E. Dorfman, and M. O. Scully, Phys. Rev. A 84, 053818 (2011)
[lakhno] V. D. Lakhno, Journ. Biological Physics 31, 145 (2005)
[top] I. Rotter, J. Phys. A 42, 153001 (2009)
[ropp] I. Rotter and J. P. Bird, Rep. Prog. Phys. 78, 114001 (2015)
[klro] P. Kleinwächter and I. Rotter, Phys. Rev. C 32, 1742 (1985)
[au-zel] N. Auerbach and V. Zelevinsky, Rep. Prog. Phys. 74, 106301 (2011)
[celardo1] G. L. Celardo, F. Borgonovi, M. Merkli, V. I. Tsifrinovich, and G. P. Berman, J. Phys. Chem. C 116, 22105 (2012)
[celardo2] D. Ferrari, G. L. Celardo, G. P. Berman, R. T. Sayre, and F. Borgonovi, J. Phys. Chem. C 118, 20 (2014); G. L. Celardo, G. G. Giusteri, and F. Borgonovi, Phys. Rev. B 90, 075113 (2014); G. G. Giusteri, G. L. Celardo, and F. Borgonovi, Phys. Rev. E 93, 032136 (2016)
[berman1] G. P. Berman, A. I. Nesterov, G. V. Lopez, and R. T. Sayre, J. Phys. Chem. C 119, 22289 (2015)
[proj10] H. Eleuch and I. Rotter, Phys. Rev. A 95, 022117 (2017)
[kato] T. Kato, Perturbation Theory for Linear Operators, Springer, Berlin, 1966
[nest1-2] A. I. Nesterov, G. P. Berman, and A. R. Bishop, Fortschr. Phys. 61, 95 (2013); A. I. Nesterov, G. P. Berman, J. M. S. Martinez, and R. T. Sayre, J. Math. Chem. 51, 2514 (2013)
[nest3-4] A. I. Nesterov and G. P. Berman, Phys. Rev. E 91, 042702 (2015); A. I. Nesterov and G. P. Berman, Phys. Rev. E 91, 052702 (2015)
[comment3] We underline that we consider open quantum systems with gain and loss. This should not be confused with the consideration of exactly balanced gain and loss in PT-symmetric systems, which are neither open nor closed, but nonisolated according to the definition in, e.g., C. M. Bender, Journal of Physics: Conference Series 631, 012002 (2015).
[comment1] In contrast to the definition that is used in, for example, nuclear physics, we define the complex energies before and after diagonalization of H by ε_k = e_k + i/2 γ_k and ℰ_k = E_k + i/2 Γ_k, respectively, with γ_k ≤ 0 and Γ_k ≤ 0 for decaying states. This definition is useful when discussing systems with gain (positive widths) and loss (negative widths).
[comment2] The coalescence of two eigenvalues of a non-Hermitian operator should not be confused with the degeneration of two eigenstates of a Hermitian operator. The eigenfunctions of two degenerate states are different and orthogonal, while those of two coalescing states are biorthogonal and differ only by a phase, see Eq. (<ref>).
[ro01] I. Rotter, Phys. Rev. E 64, 036213 (2001)
[magunov] A. I. Magunov, I. Rotter, and S. I. Strakhova, J. Phys. B 34, 29 (2001)
[gurosa] U. Günther, I. Rotter, and B. F. Samsonov, J. Phys. A 40, 8815 (2007)
[berggren] B. Wahlstrand, I. I. Yakimenko, and K. F. Berggren, Phys. Rev. E 89, 062910 (2014)
[berggren2] F. Tellander and K. F. Berggren, Phys. Rev. A 95, 042115 (2017)
[savin2] J. B. Gros, U. Kuhl, O. Legrand, F. Mortessagne, E. Richalot, and D. Savin, Phys. Rev. Lett. 113, 224101 (2014)
[ro03] I. Rotter, Phys. Rev. E 68, 016211 (2003)
[epj1] H. Eleuch and I. Rotter, Eur. Phys. J. D 69, 229 (2015)
[comment5] Numerical calculations with γ_i^(2) ≈ γ_i^(1) have shown that, in such a case, the states of the second channel mix also via the first channel. The obtained results are abstruse from the point of view of physics, although they are mathematically correct.
[pra93] H. Eleuch and I. Rotter, Phys. Rev. A 93, 042116 (2016)
[epj2] H. Eleuch and I. Rotter, Eur. Phys. J. D 69, 230 (2015)
Barry McKernan
Dept. of Science, CUNY-BMCC, 199 Chambers St., New York NY 10007
Dept. of Astrophysics, American Museum of Natural History, Central Park West, New York, NY 10028
Dept. of Science, CUNY-BMCC, 199 Chambers St., New York NY 10007
Dept. of Astrophysics, American Museum of Natural History, Central Park West, New York, NY 10028
Dept. of Physics, CUNY-QCC, Bayside, New York NY 11364
Dept. of Astrophysics, American Museum of Natural History, Central Park West, New York, NY 10028
Dept. of Astrophysics, American Museum of Natural History, Central Park West, New York, NY 10028
Columbia Astrophysics Laboratory, Columbia University, New York, NY 10027
Institute of Physics, Eötvös University, Budapest 1117, Hungary
Dept. of Astrophysics, American Museum of Natural History, Central Park West, New York, NY 10028
Dept. of Astrophysics, American Museum of Natural History, Central Park West, New York, NY 10028
Columbia Astrophysics Laboratory, Columbia University, New York, NY 10027
Dept. of Physics, CUNY-Lehman, New York NY 10468
Dept. of Astrophysics, American Museum of Natural History, Central Park West, New York, NY 10028
Stanford Institute for Theoretical Physics, Stanford University, CA 94306
Dept. of Astrophysics, American Museum of Natural History, Central Park West, New York, NY 10028

Black hole mergers detectable with LIGO can occur in active galactic nucleus (AGN) disks. Here we parameterize the merger rates, the mass spectrum and the spin spectrum of black holes (BH) in AGN disks. The predicted merger rate spans ∼ 10^-4–10^4 Gpc^-3 yr^-1, so upper limits from LIGO (< 212 Gpc^-3 yr^-1) already constrain it. The predicted mass spectrum has the form of a broken power law, consisting of a pre-existing BH power-law mass spectrum and a harder power-law mass spectrum resulting from mergers. The predicted spin spectrum is multi-peaked, with the evolution of retrograde-spin BHs in the gas disk playing a key role. We outline the large uncertainties in each of these LIGO observables for this channel and we discuss ways in which they can be constrained in the future.

§ INTRODUCTION

The gravitational wave (GW) events detected by the Advanced Laser Interferometer Gravitational-Wave Observatory (LIGO) correspond to the merger of stellar-mass black holes (BH) considerably more massive than those observed in our own Galaxy. The upper end of the range of BH merger rates derived from LIGO observations, 212 Gpc^-3 yr^-1 <cit.>, requires consideration of locations where BH mergers can occur faster than expected from GW emission alone. Among the first few LIGO detections are BHs with possibly low or misaligned spins, which may be problematic for models of binary evolution <cit.>. While BHs with larger than expected masses can occur naturally in the field <cit.>, they are more likely to form in regions with concentrations of BHs, such as galactic nuclear star clusters <cit.>. Massive gas disks in active galactic nuclei (AGN) provide natural locations for gas accretion and repeated mergers, because the gas disk can drive migration of BHs towards migration traps, reduce the inclination of intersecting orbits, enable binary formation, and harden existing binaries. Together, these effects can result in a rapid increase in the mass of embedded BHs, potentially to the observed values <cit.>. In this paper we parameterize the expected merger rate, and the mass and spin distributions, from this channel for comparison with the LIGO observations, and we discuss how observations and simulations can constrain these predictions.
§ MODEL OUTLINE
Galactic nuclei likely contain some of the densest concentrations of BH in the Universe <cit.>, so it is natural to look for BH mergers in galactic nuclei <cit.>. While BH binary mergers can occur at modestly enhanced rates (compared to the field) in nuclear star clusters just from dynamical binary hardening <cit.>, or capture from single-single <cit.> and binary-single encounters <cit.>, a dense nuclear disk of gas can greatly accelerate the rate of BH binary formation and merger <cit.>. The simplest picture of this LIGO channel begins with a spherical distribution of BH, stars and other stellar remnants orbiting in the central pc^3 of a galactic nucleus around a supermassive black hole (SMBH). Next, around the SMBH, we add a massive gas disk, which can be geometrically thin or thick. A fraction f_co of the initial number of BH in the nucleus, N_BH, will have orbits coincident with the disk, and approximately half of these orbits should be retrograde compared to the disk gas. Yet another fraction f_g of the population N_BH intersects the disk on their orbits and is ground down into the plane of the disk within the AGN disk lifetime (τ_AGN). Thus an overall fraction f_d = f_co + f_g of nuclear BH end up embedded in the disk, and quickly have their orbits damped and circularized by gas drag <cit.>. The net torques from disk gas cause BH to migrate within the disk and encounter each other at low relative velocities <cit.>. BH binaries that form in the disk are expected to merge efficiently due to gas torques <cit.>. BH mergers may preferentially occur in convergence zones containing migration traps <cit.>, which occur in semi-realistic models of AGN disks <cit.>. Multiple objects trapped in such orbits collide efficiently rather than being ejected (<cit.>; Secunda, Bellovary et al. (2018), in prep.). In this paper, we examine what constraints can be put on the merger rate and the BH spin and mass distributions for this AGN channel.
§ RATE OF BLACK HOLE BINARY MERGERS IN AGN DISKS
We parameterize the rate of BH-BH mergers in AGN disks simply as:
R = N_GN N_BH f_AGN f_d f_b ϵ / τ_AGN
where N_GN (Mpc^-3) is the average number density of galactic nuclei in the Universe, f_AGN is the fraction of galactic nuclei that host active AGN lasting for time τ_AGN, f_d = f_co + f_g is the fraction of nuclear BH that end up in the disk, f_b is the fraction of BH in BH-BH binaries in the disk, and ϵ represents the fractional change in the number N_BH of BH in the central region (∼ pc^3) over a full AGN duty cycle [If ϵ ∼ 1 then N_BH is approximately conserved between AGN episodes. If ϵ > 1 (< 1), N_BH grows (shrinks) between AGN phases due to the net effect of mergers, infall of new BH, stellar evolution etc.]. R can be parameterized as:
R = 12 Gpc^-3 yr^-1 (N_GN / 0.006 Mpc^-3) (N_BH / 2×10^4) (f_AGN / 0.1) (f_d / 0.1) (f_b / 0.1) (ϵ / 1) (τ_AGN / 10 Myr)^-1.
However, if we want to constrain the contributions of this channel to LIGO observations, it is much more useful to show the allowed range of R and the range of each of the contributing factors from eqn. (<ref>), which we list in Table <ref>. The N_GN lower limit corresponds to galaxies with stellar mass greater than or equal to that of the Milky Way <cit.> as measured from Schechter function fits to galaxy luminosity functions <cit.>. The N_GN upper limit corresponds to dwarf galaxies with stellar mass > 10^9 M_⊙ <cit.>, which includes all locally observed SMBH (≥ 10^5 M_⊙) inferred from M-σ studies of galaxies and dwarf galaxies <cit.>.
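To make the scaling above concrete, the following minimal Python sketch (our own illustration, not from the original text; the function name merger_rate is ours) evaluates eqn. (<ref>) and reproduces the fiducial R = 12 Gpc^-3 yr^-1:

def merger_rate(n_gn=0.006, n_bh=2e4, f_agn=0.1, f_d=0.1, f_b=0.1,
                eps=1.0, tau_agn_myr=10.0):
    """Merger rate R in Gpc^-3 yr^-1.
    n_gn        : number density of galactic nuclei (Mpc^-3)
    n_bh        : number of BH in the central ~pc^3 of each nucleus
    f_agn       : fraction of nuclei that are active
    f_d, f_b    : fraction of nuclear BH in the disk, and in binaries
    eps         : fractional change in N_BH over a full AGN duty cycle
    tau_agn_myr : AGN disk lifetime in Myr
    """
    rate_mpc3 = n_gn * n_bh * f_agn * f_d * f_b * eps / (tau_agn_myr * 1e6)
    return rate_mpc3 * 1e9  # convert Mpc^-3 yr^-1 -> Gpc^-3 yr^-1

print(merger_rate())  # ~12 Gpc^-3 yr^-1 for the fiducial values of eqn. (2)

Note that the extreme values of Table 1 below are not independent of each other, so naively multiplying all lower (or upper) bounds does not reproduce the quoted range of R.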
Also in Table 1, N_BH ∼ 10^3 corresponds to the number of BH allowed within ≤ 0.1 pc of Sgr A* according to the distribution of the S-star orbits <cit.>, whereas N_BH ∼ 10^6 pc^-3 seems to be the maximal density allowed by simulations <cit.>. The lower limit to f_AGN assumes only quasar disks are efficient BH merger sites, and f_AGN ∼ 0.3 assumes all LINER galactic nuclei <cit.> consist of advection dominated accretion flows (ADAFs) with high accretion rate <cit.>, capable of driving BH mergers. The binary fraction of BH, f_b, has been estimated to be as high as f_b ∼ 0.2 <cit.>, but dynamically hot environments such as star clusters could actually yield very low binary fractions, f_b ≤ 0.01, over time in the absence of gas <cit.> due to the large number of 'ionizing' interactions, so we choose f_b = [0.01, 0.2] in Table 1. Reasonable estimates of τ_AGN span 0.1-100 Myr <cit.>. R will be highest if AGN episodes are short-lived but frequently repeated and efficient at BH mergers. These circumstances ensure that there are multiple opportunities for BH in a galactic nucleus to encounter each other at low relative velocity and merge in a disk.

Table 1: Range of parameters in Eqn. (<ref>) and the resulting range of the merger rate (see text).
Parameter | Lower | Upper
N_GN^a (Mpc^-3) | 4×10^-3 | 10^-2
N_BH^b (pc^-3) | 10^3 | 10^6
f_AGN^c | 0.01 | 0.3
f_b | 0.01 | 0.2
f_d^d | 0.01 | 0.7
τ_AGN (Myr) | 1 | 100
ϵ | 0.5 | 2
R (Gpc^-3 yr^-1) | 10^-4 | 10^3
^a from <cit.>. ^b from <cit.>. ^c f_AGN ∼ 0.1 for Seyfert AGN <cit.>; f_AGN ∼ 0.3 including all LINERs and other low luminosity AGN. ^d f_d = f_co + f_g. f_co comes from h/R, the disk aspect ratio: h/R ∼ 0.01–0.1 <cit.>, h/R ∼ 10^-3–0.1 <cit.>, h/R ∼ 0.1–0.7 in super-Eddington ADAFs <cit.>. f_g depends on h/R, ρ_disk and τ_AGN.

From Table 1, the allowed range from Eqn. (<ref>) is R ∼ 10^-4–10^4 Gpc^-3 yr^-1. The upper bound to the LIGO BH binary merger rate of ∼ 240 Gpc^-3 yr^-1 already rules out the upper limits of most parameters in Table 1 [The LIGO rate upper bound places a lower limit on ϵ, since a small value of ϵ suggests most BH in AGN are consumed in mergers and would imply a much greater R than observed] and allows actual astrophysical limits to be placed on models of AGN disks by LIGO BH merger detections. Future observational constraints and simulation results will, however, be required to figure out which upper limits are ruled out by LIGO. For example, the upper limit to N_GN could be reduced by contrasting activity rates as a function of galactic mass in a complete sample. The inferred N_BH can be constrained via population studies of the X-ray emission from binaries around Sgr A* and in M31, as well as via dynamical studies of the number density of BH allowed from the orbital parameters of stars in galactic nuclei. The upper limit on f_AGN can be reduced if we can observationally distinguish between high- and low-accretion rate LINERs. Simulations that include a spherical component of individual stars and BH as well as migrating objects in the disk are required to properly constrain f_b. Encounters between objects from the spherical dynamical component and the disk dynamical component will occur at relatively high velocity and can therefore ionize sufficiently soft, large radius binaries. Thus, in order for f_b to be moderately large in this channel, we require f_g to be large, since otherwise the rate of ionizing encounters can ionize binaries <cit.>.
So limits on f_g from semi-analytic approaches or simulations <cit.> can also help constrain f_b. Uncertainties in R are dominated mainly by lack of knowledge of the distribution and number of BH in galactic nuclei, how efficiently gas disks can grind down orbits, and whether geometrically thick disks can efficiently merge BH. Understanding multiple-object migration and the role of retrograde orbiters is another key area for future work.
§ CONSTRAINING BH MASSES
By merging BH in AGN disks, we expect 'overweight' BH to result <cit.>. To investigate the range of BH masses involved in mergers in this channel, we use a toy model calculation of the evolution of a population of BH embedded and migrating in an AGN disk. We make many simplifying assumptions: there are no BH binaries to begin with (f_b = 0), BH remain in the disk after merger, tertiary encounters are neglected, no BH merge with the SMBH, no new BH are added to the population (f_g = 0), and we ignore mass growth due to gas accretion. We begin with a uniform distribution of BH drawn from a <cit.> initial mass function N_BH(M) ∝ M^-γ_0, with γ_0 = 2.3, distributed over three mass bins (5, 10, 15 M_⊙), and choose normalization N_BH(5 M_⊙) = 10^3. A BH of mass M_1 on a prograde orbit in an AGN disk will migrate on a (Type I) timescale <cit.>
t_mig ≈ 38 Myr (N/3)^-1 (R_b / 10^4 r_g)^-1/2 (M_1 / 5 M_⊙)^-1 (h/R_b / 0.02)^2 (Σ / 10^5 kg m^-2)^-1 (M_SMBH / 10^8 M_⊙)^3/2
where N is a numerical factor of order 3. So the toy model population outlined above will evolve over time. If 10^3 BH are uniformly distributed across a disk of radius R_d ∼ 10^5 r_g (r_g = GM_SMBH/c^2), BH orbits are separated by ∼ 10^2 r_g on average. This separation could be closed in ∼ 0.4 Myr from eqn. (<ref>). Our initial distribution of singleton BH separated by ∼ 10^2 r_g on average will therefore evolve from f_b = 0 towards f_b ∼ 0.5 within ∼ 0.4 Myr due to migration. The probability of encounter between BH of masses M_1, M_2 in time Δt is
P(M_1|M_2) ∝ N(M_1) N(M_2) / (t_mig(M_1) t_mig(M_2)).
When a pair of BH approaches within their binary Hill radius R_H = (q/3)^1/3 R_b, where q is the mass ratio of the binary to the SMBH and R_b is the radius of the binary center of mass, gas drag can cause them to merge rapidly. <cit.> showed that the binary semi-major axis a_b halves due to gas drag in only 200 (1000) orbits about the binary center of mass for a binary retrograde (prograde) with respect to the gas velocity. Using this result, a BH binary with a_b = R_H at R_b ∼ 10^3 r_g has a characteristic timescale for binary hardening of 0.4 kyr (8 kyr) in the retrograde (prograde) case. Only 20–25 such halvings (corresponding to ∼ 0.1–0.2 Myr, naively assuming a constant gas hardening rate) would shrink a_b sufficiently that GW emission takes over and the merger happens promptly. The gas hardening rate may be even faster than this estimate since more gas enters the binary's Hill sphere as it shrinks <cit.>, which may pump binary eccentricity. However, gas torques may decrease in efficiency once the binary has hardened sufficiently that the binary velocity is substantially supersonic compared to most gas within the Hill radius <cit.>. For our toy model, we therefore assume ∼ 0.1 Myr is the minimum gas hardening timescale to merger, but we note that the actual gas hardening timescale could take up to an order of magnitude longer. In our toy model, if the typical time for a BH to encounter another BH in the disk is ∼ 0.4 Myr, then adding an additional ∼ 0.1-1 Myr for a gas-hardening timescale yields a characteristic time to merger of ∼ 0.5-1.5 Myr.
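As a rough check of the migration argument above, the following Python sketch (ours; the defaults are the fiducial parameter values of eqn. (<ref>)) evaluates t_mig and its naive linear rescaling to a ∼10^2 r_g separation:

def t_mig_myr(m1=5.0, r_b=1e4, h_over_r=0.02, sigma=1e5,
              m_smbh=1e8, n_factor=3.0):
    """Type I migration timescale in Myr for a BH of mass m1 (Msun) at
    radius r_b (in r_g), disk aspect ratio h_over_r, surface density
    sigma (kg m^-2), around an SMBH of mass m_smbh (Msun)."""
    return (38.0 * (n_factor / 3.0) ** -1
            * (r_b / 1e4) ** -0.5
            * (m1 / 5.0) ** -1
            * (h_over_r / 0.02) ** 2
            * (sigma / 1e5) ** -1
            * (m_smbh / 1e8) ** 1.5)

print(t_mig_myr())                 # 38 Myr with fiducial parameters
print(t_mig_myr() * 1e2 / 1e4)     # ~0.4 Myr to close a ~10^2 r_g gap (crude linear scaling)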
So, we expect that around half the initial population of our toy model will have encountered each other and merged in this time. In calculating the evolution of our toy model, we chose Δt ∼ 0.1-0.3 Myr to correspond to a time when ∼ 10% of the initial population of lowest mass BH (5 M_⊙) have encountered each other and merged. All other encounters are normalized to this encounter rate. For simplicity, we assume all binaries formed in Δt merge within that time, and we neglect the mass-energy loss from the mergers. After Δt, all BH that merged are removed from their original mass bins, and the newly merged objects are added to the appropriate mass bins. Figure <ref> demonstrates the simplistic evolution expected as the initial BH distribution (black line) evolves to the red curve in time step Δt ∼ 0.1-0.3 Myr, where ∼ 10% of the lowest mass BH in the initial (black) distribution have merged. The red curve evolves to the blue curve after an additional Δt' ∼ 0.2-0.6 Myr, when ∼ 10% of the lowest mass BH on the red curve are expected to merge. The BH mass distribution in our toy model flattens from γ_0 = 2.3 to γ ∼ 2 as low-mass BH are consumed. Now assume that BH from the non-disk spherical population interact with the disk and their orbits are ground down into the disk, i.e. f_g > 0. The addition of some of the (initially) spherical BH population into the disk will support the BH mass distribution in the disk at the low mass end. So an initial power law distribution ∝ M^-γ_0 of BH mass will evolve towards a broken power-law distribution of the form
N_BH ∝ N_1 M^-γ_1 for M < M_break, and N_BH ∝ N_2 M^-γ_2 for M > M_break,
where γ_2 < γ_1, N_1/N_2 ∼ (f_g/f_co), f_co is the fraction of BH initially in the disk, f_g is on average the fraction of BH ground down into the disk over τ_AGN/2, and M_break lies near the upper end of the initial mass range (∼ 15 M_⊙ in our toy model). In order to include gas accretion in this toy model, we assume a gas accretion rate for BH on [retrograde, prograde] orbits of Ṁ_1 ∼ [10^-2, 1] Ṁ_Edd, where
Ṁ_Edd = 4π G M_1 m_p / (η c σ_T) ≈ 2.2 × 10^-7 M_⊙/yr (η/0.1)^-1 (M_1 / 10 M_⊙)
is the Eddington mass accretion rate, with m_p the proton mass, σ_T the Thomson cross-section, and η the accretion luminosity efficiency. Over an AGN disk lifetime of τ_AGN ∼ 10 Myr, we can neglect gas accretion onto BH on retrograde orbits.

Table 2: Parameter ranges predicted for BH binaries in this channel, assuming an initial BH mass range 5–15 M_⊙ and a uniform distribution of BH (see text).
Parameter | Lower | Upper
M_b (M_⊙) (γ=2) | 10 | 100
M_b (M_⊙) (γ=1) | 10 | 500
M_b (M_⊙) (γ=broken) | 10 | 500
q (γ=2) | 0.1 | 1
q (γ=1) | 0.01 | 1
q (γ=broken) | 0.01 | 1

In Table <ref> we list parameter ranges for BH masses on the basis of the probabilistic toy model outlined above for three different assumptions: 1) N_BH ∝ M^-2 (roughly the blue curve in Fig. <ref>), corresponding to a short lived disk with f_co ≫ f_g; 2) N_BH ∝ M^-1, corresponding either to a long lived disk (τ_AGN > 10 Myr) or efficient gas hardening with a low rate of orbit grind down (f_co ≫ f_g); 3) N_BH ∝ M^-2 (M^-1.5) for M < 15 M_⊙ (> 15 M_⊙), corresponding either to efficient orbit grind down (f_g ∼ f_co), or efficient stellar formation and evolution in the disk with a new top-heavy IMF. In Table <ref> we list the binary mass M_b range for each set of assumptions. The lower limit to M_b is trivially the lowest possible mass binary drawn from the initial mass distribution, with no growth from gas accretion, and the upper limit to M_b is simply the highest mass binary in the distribution.
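The following Python sketch illustrates one Δt step of such a toy model under our own simplified normalization: encounters between bins are weighted by N(M_1)N(M_2)/(t_mig(M_1)t_mig(M_2)) with t_mig ∝ 1/M at fixed radius, scaled so that ∼10% of the lowest mass bin merges per step. All names and normalization choices here are ours, not from the original text.

# One Delta-t step of the toy mass-spectrum evolution (illustrative sketch).
counts = {5.0: 1000.0,                      # Kroupa-like IMF, gamma_0 = 2.3
          10.0: 1000.0 * 2.0 ** -2.3,
          15.0: 1000.0 * 3.0 ** -2.3}

def evolve_step(counts, f_low=0.10):
    bins = sorted(counts)
    # encounter weight ~ N(M1) N(M2) / (t_mig(M1) t_mig(M2)) with t_mig ~ 1/M
    w = {(a, b): counts[a] * counts[b] * a * b
         for i, a in enumerate(bins) for b in bins[i:]}
    low = bins[0]
    # normalize so that a fraction f_low of the lowest-mass bin is consumed
    consumed_low = sum(wt * ((a == low) + (b == low)) for (a, b), wt in w.items())
    scale = f_low * counts[low] / consumed_low
    new = dict(counts)
    for (a, b), wt in w.items():
        n_merge = scale * wt
        new[a] -= n_merge                    # remove the merging pair...
        new[b] -= n_merge
        new[a + b] = new.get(a + b, 0.0) + n_merge   # ...add the merger product
    return new

print(evolve_step(counts))   # spectrum flattens; 20-30 Msun bins appear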
Also listed in Table <ref> are the ranges of mass ratios (q) of the binaries in the three different scenarios, with the lower limit given by the range of BH masses allowed in the three different distributions; q = 1 is the trivial upper limit. If the fraction of BH ground down into the disk, f_g(t), is ≥ f_co(t), the fraction of BH coincident with the disk (which will be true for relatively long-lived, thin (h/R ≪ 1) disks), the BH mass spectrum evolves from an initial power-law distribution to a broken power-law as in Eqn. (<ref>) with γ_1 ∼ γ_0 > γ_2. The uncertainty in mass estimates for this channel is driven mainly by the initial mass distribution of BH in the central region, as well as the ratio f_g(t)/f_co(t), which in turn depends on disk density and h/R.
§ RANGE OF BH SPINS
As black holes in the AGN disk accrete gas and merge with each other, their initial spin distribution will change with time. Assuming a uniform distribution of spins (a) and angular momenta (L) for BH in galactic nuclei, there will be four distinct populations of BH in AGN disks, as follows:
* Prograde spin, on prograde orbits, denoted by (a^+, L^+).
* Prograde spin, on retrograde orbits (a^+, L^-).
* Retrograde spin, on prograde orbits (a^-, L^+).
* Retrograde spin, on retrograde orbits (a^-, L^-).
We expect the fraction f_co of BH co-orbital with the AGN disk to have an initial uniform distribution across all four BH populations. The four BH populations will evolve differently due to gas accretion. The (a^+, L^+) population rapidly accretes gas, spins up, and aligns its spins with the disk gas once the BH has accreted a few % of its own mass <cit.>, i.e. in < τ_AGN. An initially uniform spin distribution a^+ = [0, +0.98] evolves towards a^+ ∼ 0.98 at an average rate ∼ (τ_AGN / 40 Myr)(ṁ / Ṁ_Edd), where ṁ / Ṁ_Edd is the average gas accretion rate as a fraction of the Eddington rate (accretion at the Eddington rate takes ≈ 40 Myr to double the BH mass). By contrast, the (a^+, L^-) population faces a strong headwind, so it accretes very weakly from the gas. An initially uniform distribution of spins in this population will remain uniform over τ_AGN. The (a^-, L^+) population spins down towards a ∼ 0 after an increase of mass by a factor √(3/2) <cit.> and will then join the (a^+, L^+) population. The (a^-, L^-) population spins down more slowly due to the headwind, and so an initial uniform distribution of spins remains uniform over τ_AGN. BH mergers will further complicate the spin evolution of the four BH populations. The four populations interact due to migration and form binaries if captured within the binary Hill sphere. Binary orbital angular momentum (L_b) is the dominant contributor to the spin of the merged BH binary, so equal mass BH mergers yield merger products with |a| ∼ 0.7 <cit.>. Binaries can form with prograde or retrograde orbital angular momentum compared to the disk gas (denoted by L_b^±). If a binary forms with retrograde orbital angular momentum (L_b^-), the merger is faster than in the prograde case <cit.>, and the merger product will have a^- = -0.7 (i.e. retrograde spin compared to disk gas). Thus the fastest growing of the four populations of BH in the disk due to mergers will actually be (a^-, L^±). This population evolves towards low spin (a ∼ 0) due to gas accretion, at an average rate ∼ (τ_AGN / 40 Myr)(ṁ / Ṁ_Edd). Among the initial fraction f_co of co-orbital BH, we expect equal numbers of prograde and retrograde orbits.
However, since prograde orbits are ground down faster (smaller headwind, greater Bondi radius), we expect (a^±, L^+)/(a^±, L^-) ≈ 1 + (f_g/f_co). Applying all of this to our toy model above allows us to construct the spin distribution in Fig. <ref>. An initial uniform spin distribution (black line) evolves towards the solid red curve after Δt ≈ 0.1-0.3 Myr. The corresponding mass distribution is the red curve in Fig. <ref>. The red solid curve in Fig. <ref> shows a prominent peak at a = -0.7, due to a ×5 faster merger rate for retrograde binaries, and a smaller peak at a = +0.7 due to mergers of prograde binaries. Both peaks are smeared out towards the right by gas accretion during Δt and will consist of BH masses ≥ 10 M_⊙ from the initial mass distribution. Some pile-up occurs at a > 0.95 due to gas accretion onto the already near-maximal spinners of the (a^+, L^+) population. The red dashed curve shows what happens if we assume gas accretion can occur at super-Eddington rates onto BH in the disk (×5 the Eddington rate). In particular, the more massive merged population at a ∼ -0.7 gets quickly smeared out and driven towards low spin. Thus, from Fig. <ref>, if LIGO constrains the spins of most merger precursor BH to be small, the AGN channel requires super-Eddington accretion onto initially retrograde-spin BH to grow this population. Only the (a^+, L^+) population will align or anti-align relatively quickly with the AGN disk gas. Assuming the (a^+, L^+) population are all aligned or anti-aligned with the disk gas, by drawing randomly from a uniform distribution across (a^±, L^±), there is a ≈ 1/16 chance that both BH have (anti-)aligned spins; this represents our lower limit for the fraction of BH (anti-)aligned with disk gas. If f_g(t) ≫ f_co(t), then effectively the two populations (a^±, L^+) will dominate, so f_±align ≈ 1/4, which is our approximate upper limit for the fraction of BH (anti-)aligned with disk gas. Our estimates of f_±align suggest that a larger population of mergers will be required to test this channel in population spin studies than estimated by <cit.>. Anti-aligned binaries in the AGN disk allow LIGO a unique chance to test the spin precession instability <cit.>. Once a BH binary merges, the resulting merger product can experience a gravitational radiation recoil kick of v_kick ∼ 20–400 km s^-1, depending on relative spins and mass ratios <cit.>. The result of kicks from mergers between aligned and anti-aligned objects is to incline the merger product's orbit relative to the AGN disk by θ = tan^-1(v_kick / v_orb), where v_orb is the orbital velocity of the binary center of mass. Since v_orb ≫ 400 km/s in most of the disk, the orbital inclination perturbation is at most a few degrees, and the merger product can be ground back down into the disk in a time < τ_AGN. Mergers of BH with spins out of alignment with the plane of the disk and each other can produce the largest magnitude kicks (up to several thousand kilometers per second) <cit.>. Such mergers will be rare, but will produce large kicks (∝ q^2/(1+q)^4 in the mass ratio q <cit.>), escape the disk at angle θ, and may not be ground back down within τ_AGN. Table <ref> summarizes the ranges allowed for spins in this LIGO channel.
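As an illustration of the inclination estimate θ = tan^-1(v_kick/v_orb), the short Python sketch below (ours) takes v_orb = c (r_g/R_b)^1/2 for a circular orbit at R_b = 10^3 r_g:

import math

c_kms = 2.998e5  # speed of light in km/s

def kick_inclination_deg(v_kick_kms=400.0, r_b_rg=1e3):
    # circular orbital speed at radius R_b (in gravitational radii)
    v_orb = c_kms * (1.0 / r_b_rg) ** 0.5
    return math.degrees(math.atan2(v_kick_kms, v_orb))

print(kick_inclination_deg())   # ~2.4 deg: merger products stay near the disk plane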
The typical spin distribution depends on the relative fractions of the four populations of BH in the disk (a^±, L^±) and their evolution as f_g/f_co changes, driven in turn by the disk aspect ratio (h/R), the disk gas density, and τ_AGN. We expect an initial population uniform across (a^±, L^±), but (a^±, L^+) will grow with the fraction f_g(t) of BH ground down into the disk. Peaks will arise in the spin distribution at a ∼ -0.7, +0.7 due to mergers, and gas accretion will drive a^- → 0 and a^+ → 0.98 independent of mergers. Gas accretion at super-Eddington rates plus faster mergers of retrograde binaries may be required to generate a population of overweight, low spin BH in the AGN disk.

Table 3: Parameter ranges allowed for BH spins in this channel (see text).
Parameter | Lower | Upper
a^+ (L^+) | 0 | 0.98
a^- (L^+) | -0.98 | 0
a^+ (L^-) | 0.0 | 0.98
a^- (L^-) | -0.98 | 0
a_merge | -0.7 | +0.7
f_±align | 0.06 | 0.25

§ OBSERVATIONAL CONSTRAINTS: GW
Binary black hole mergers in an AGN disk imply unique, testable predictions that would not be expected from other BH merger channels, including: 1. a spin distribution (see <ref>) that includes aligned/anti-aligned spin binaries, and 2. a population of overweight BH or IMBH orbiting SMBH, generating GWs detectable with the Laser Interferometer Space Antenna (LISA) <cit.>. A circularized IMBH-SMBH binary at a migration trap (a_b ∼ 10^2 r_g) around a SMBH with M_SMBH < 10^7 M_⊙ will be detectable with LISA at modest signal-to-noise ratio in a year's observation <cit.>. If AGN disks are efficient at gas-driven mergers of BH, we expect that every AGN must contain one or more IMBH-SMBH binaries, implying an approximate rate comparable to that in <cit.>.
§ OBSERVATIONAL CONSTRAINTS: EM
The brightest AGN will outshine any short-term EM signal that might result from a BH merger in a gas disk. Low luminosity AGN might permit short timescale EM events from BH mergers to be visible. As IMBH grow in migration traps, gaps and cavities in the accretion flow can form, and oscillations on the dynamical timescale of the accreting IMBH can be detected in optical, UV, and X-ray spectral signatures <cit.>. Temporal and energetic asymmetries in the X-ray signatures are best detected using micro-calorimeters, such as the one that will fly on the X-ray Astronomy Recovery Mission succeeding Hitomi. Perturbations of the innermost disk will occur as migrators in the disk plunge into the SMBH and temporarily dominate the local co-rotating mass, detectable in large UV-optical quasar surveys <cit.> as well as in the X-ray band. Large optical surveys of quasar disks can also limit total supernova rates due to migrating/accreting/colliding stars <cit.>, in turn placing limits on the disk populations of stars and stellar remnants. Estimates of the rates of transits by bloated stars, best detected in the X-ray band <cit.>, can put limits on the population on spherical orbits around and passing through AGN disks. As the AGN phase ends, remaining BH will interact dynamically, so the distribution of orbital parameters of the BH and stars entrained in the disk will relax. <cit.> show that if very massive stars (> 10^2 M_⊙) exist in our own Galactic nucleus, they can pump the eccentricity distribution of massive stars to even e ∼ 0.4 within 5 Myr. However, such stars are short-lived and observed stellar eccentricities reach e ∼ 0.7 <cit.>.
On the other hand, a population of overweight BH produced by mergers in an AGN disk can rapidly pump stellar orbital eccentricities post-AGN and inflate the thickness (h/R) of stellar disks in galactic nuclei. Thus, if this BH merger channel is efficient, thin disks of stars will not be observed in post-AGN galactic nuclei. Neutron stars (NS) should also exist in AGN disks, and can migrate. So there should be a correlation between NS-NS and NS-BH mergers in AGN disks and the rate of BH-BH mergers expected from this channel. No correlation has been observed so far between short gamma-ray bursts in the local universe and AGN <cit.>, but so far only a handful of short gamma-ray bursts have sufficiently accurate positions on the sky to rule out an association with AGN in these cases. The efficiency of this LIGO channel could be further constrained by ongoing studies of the correlation of short gamma-ray bursts with AGN. Future simulations could usefully focus on the expected distribution of NS in mass-segregating clusters in galactic nuclei, and ultimately on determining the expected NS merger rate in AGN disks.
§ CONCLUSIONS
We parameterize the rate of black hole mergers within AGN disks and the mass and spin distributions that result. The strongest observational constraints can be placed on this channel by: 1. ruling out a population of maximal spin BH via LIGO, 2. ruling out a correlation between short gamma-ray bursts and AGN, 3. constraining the rate of obscured supernovae in AGN disks via studies of large samples of AGN, 4. ruling out a population of high accretion rate ADAFs in galactic nuclei, and 5. observing very thin disks of stars in nearby galactic nuclei. Future simulations should focus on 1. the ratio of NS/BH in nuclear star clusters undergoing mass segregation, 2. encounters between prograde and retrograde orbiters in AGN disks, and 3. interactions and binary formation between BH with pro- and retrograde spins and orbits at migration traps in a range of AGN disk models. If AGN are efficient at merging BH, LISA will detect a large population of IMBH in disks around SMBH in the nearby Universe.
§ ACKNOWLEDGEMENTS
Thanks to Maya Fishbach, Davide Gerosa, Matthew Graham, Daniel Holz, Dan Stern and Nick Stone for useful conversations. BM & KESF are supported by NSF PAARE AST-1153335 and NSF PHY11-25915. BM & KESF thank CalTech/JPL and NASA GSFC for support during sabbatical. M-MML is partly supported by NSF AST11-09395.
[Abbott et al. (2016a)]Abbott16a Abbott B.P. et al., 2016, PhRvL, 116, 1102
[Abbott et al. (2016b)]Abbott16b Abbott B.P. et al., 2016, ApJL, 833, L1
[Antonini et al. (2014)]Antonini14 Antonini F., 2014, ApJ, 794, 106
[Antonini & Rasio (2016)]AntRas16 Antonini F. & Rasio F., 2016, ApJ (submitted), arXiv:1606.04889
[Alexander et al. (2007)]Alex07 Alexander R.D., Begelman M.C. & Armitage P.J., 2007, ApJ, 654, 907
[Baldry et al. (2012)]Baldry12 Baldry I.K. et al., 2012, MNRAS, 421, 621
[Bardeen (1970)]Bardeen70 Bardeen J.M., 1970, Nature, 226, 64
[Bartos et al. (2017)]Bartos17 Bartos I., Kocsis B., Haiman Z. & Márka S., 2017, ApJ, 835, 165
[Baruteau et al. (2011)]Baruteau11 Baruteau C., Cuadra J. & Lin D.N.C., 2011, ApJ, 726, 28
[Bellovary et al. (2016)]Bello16 Bellovary J., Mac Low M.-M., McKernan B. & Ford K.E.S., 2016, ApJ, 819, L17
[Belczynski et al. (2010)]Belczynski10 Belczynski K., Bulik T., Fryer C.L., Ruiter A., Valsecchi F., Vink J.S. & Hurley J.R., 2010, ApJ, 714, 1217
[Berger (2014)]Berger14 Berger E., 2014, ARA&A, 52, 43
[Bogdanovic et al. (2007)]Bogdanovic07 Bogdanovic T., Reynolds C.S. & Miller M.C., 2007, ApJ, 661, L147
[Campanelli et al. (2007)]Campanelli07 Campanelli M., Lousto C.O., Zlochower Y. & Merritt D., 2007, PhRvL, 98, 231102
[Cole et al. (2001)]Cole01 Cole S. et al., 2001, MNRAS, 326, 255
[Drake et al. (2009)]Drake09 Drake A.J. et al., 2009, ApJ, 696, 870
[deMink & Mandel (2016)]deMink16 deMink S. & Mandel I., 2016, MNRAS, 460, 3545
[Fishbach et al. (2017)]Fishbach17 Fishbach M., Holz D.E. & Farr B., 2017, ApJ, 840, L24
[Gerosa et al. (2015)]Gerosa15 Gerosa D. et al., 2015, PhRvL, 115, 141102
[Gerosa & Berti (2017)]Gerosa17 Gerosa D. & Berti E., 2017, PhRvD (submitted), arXiv:1703.06223
[Graham et al. (2017)]Graham17 Graham M. et al., 2017, MNRAS, submitted
[Haehnelt & Rees (1993)]Haeh93 Haehnelt M.G. & Rees M., 1993, MNRAS, 263, 168
[Haiman et al. (2009)]Haiman09 Haiman Z., Kocsis B. & Menou K., 2009, ApJ, 700, 1952
[Ho (2008)]Ho08 Ho L.C., 2008, ARA&A, 46, 475
[Hofmann et al. (2016)]Hofmann16 Hofmann F., Barausse E. & Rezzolla L., 2016, arXiv:1605.01938
[Hopman & Alexander (2006)]HopTal06 Hopman C. & Alexander T., 2006, ApJ, 645, L133
[Horn et al. (2012)]Horn12 Horn B., Lyra W., Mac Low M.-M. & Sándor Z., 2012, ApJ, 750, 34
[Kennedy et al. (2016)]Kennedy16 Kennedy G. et al., 2016, MNRAS, 460, 240
[King & Nixon (2015)]King15 King A. & Nixon C.J., 2015, MNRAS, 453, L46
[Kocsis et al. (2011)]Kocsis11 Kocsis B., Yunes N. & Loeb A., 2011, PRD, 84, 024032
[Kroupa (2002)]kroupa Kroupa P., 2002, Science, 295, 82
[Lasota et al. (2016)]Lasota16 Lasota J.-P. et al., 2016, A&A, 587, 13
[Leigh et al. (2016)]leigh16 Leigh N.W.C., Antonini F., Stone N.C., Shara M.M. & Merritt D., 2016, MNRAS, 463, 1605
[Leigh et al. (2017)]Leigh17 Leigh N.W.C., Geller A.M., McKernan B., Ford K.E.S., Mac Low M.-M., Bellovary J., Haiman Z., Lyra W., Samsing J., O'Dowd M., Kocsis B. & Endlich S., 2017, MNRAS, submitted (arXiv:TBD)
[Lousto et al. (2012)]Lousto12 Lousto C.O., Zlochower Y., Dotti M. & Volonteri M., 2012, PRD, 85, 084015
[McKernan & Yaqoob (1998)]McK98 McKernan B. & Yaqoob T., 1998, ApJ, 501, L29
[McKernan et al. (2011)]McK11 McKernan B. et al., 2011, MNRAS, 417, L103
[McKernan et al. (2012)]McK12 McKernan B., Ford K.E.S., Lyra W. & Perets H.B., 2012, MNRAS, 425, 460
[McKernan et al. (2013)]McK13 McKernan B., Ford K.E.S., Kocsis B. & Haiman Z., 2013, MNRAS, 432, 1468
[McKernan et al. (2014)]McK14 McKernan B., Ford K.E.S., Kocsis B., Lyra W. & Winter L.M., 2014, MNRAS, 441, 900
[McKernan & Ford (2015)]McK15 McKernan B. & Ford K.E.S., 2015, MNRAS, 452, L1
[Miller & Davies (2012)]miller12 Miller M.C. & Davies M.B., 2012, ApJ, 755, 81
[Merritt et al. (2004)]Merritt04 Merritt D., Milosavljević M., Favata M., Hughes S.A. & Holz D.E., 2004, ApJ, 607, L9
[Miralda-Escudé & Gould (2000)]Miralda00 Miralda-Escudé J. & Gould A., 2000, ApJ, 545, 847
[Morris (1993)]Morris93 Morris M., 1993, ApJ, 408, 496
[Narayan & Yi (1995)]Narayan95 Narayan R. & Yi I., 1995, ApJ, 444, 231
[O'Leary et al. (2009)]O'Leary09 O'Leary R.M., Kocsis B. & Loeb A., 2009, MNRAS, 395, 2127
[O'Shaughnessey, Gerosa & Wysocki (2017)]Oshaugh17 O'Shaughnessey R., Gerosa D. & Wysocki D., 2017, PhRvL (accepted), arXiv:1704.03879
[Paardekooper et al. (2010)]Paarde10 Paardekooper S.-J., Baruteau C., Crida A. & Kley W., 2010, MNRAS, 401, 1950
[Paczynski & Witta (1980)]PW80 Paczynski B. & Witta P.J., 1980, A&A, 88, 23
[Paumard et al. (2006)]Paumard06 Paumard T. et al., 2006, ApJ, 643, 1011
[Portegies Zwart et al. (2006)]PortegiesZ06 Portegies Zwart S.F. et al., 2006, ApJ, 641, 319
[Reines & Volonteri (2015)]Reines15 Reines A.E. & Volonteri M., 2015, ApJ, 813, 82
[Rodriguez et al. (2016)]Rodriquez16 Rodriguez C. et al., 2016, arXiv e-prints
[Samsing et al. (2014)]Samsing14 Samsing J., MacLeod M. & Ramirez-Ruiz E., 2014, ApJ, 784, 71
[Sánchez-Salcedo & Chametla (2014)]Sanchez14 Sánchez-Salcedo F.J. & Chametla R.O., 2014, ApJ, 794, 167
[Schawinski et al. (2015)]Schawinski15 Schawinski K., Koss M., Berney S. & Sartori L.F., 2015, ApJ, 451, 2517
[Schnittman & Buonnano (2007)]SchnittBuon07 Schnittman J.D. & Buonanno A., 2007, ApJ, 662, L63
[Sirko & Goodman (2003)]Sirko03 Sirko E. & Goodman J., 2003, MNRAS, 341, 501
[Stahler (2010)]Stahler10 Stahler S.W., 2010, MNRAS, 402, 1758
[Stone et al. (2017)]Stone17 Stone N.C., Metzger B.D. & Haiman Z., 2017, MNRAS, 464, 946
[Thompson et al. (2005)]Thompson05 Thompson T.A., Quataert E. & Murray N., 2005, ApJ, 630, 167
Rohitash Chandra^1,2, Yew-Soon Ong^3, Chi-Keong Goh^3
^1 Centre for Translational Data Science, The University of Sydney, Sydney, NSW 2006, Australia
^2 School of Geosciences, The University of Sydney, Sydney, NSW 2006, Australia
^3 Rolls Royce @NTU Corp Lab, Nanyang Technological University, 42 Nanyang View, Singapore
Time series prediction typically consists of a data reconstruction phase where the time series is broken into overlapping windows known as the timespan. The size of the timespan can be seen as a way of determining the extent of past information required for an effective prediction. In certain applications, such as the prediction of the wind intensity of storms and cyclones, prediction models need to be dynamic in accommodating different values of the timespan. These applications require robust prediction as soon as the event takes place. We identify a new category of problem called dynamic time series prediction that requires a model to give a prediction when presented with varying lengths of the timespan. In this paper, we propose a co-evolutionary multi-task learning method that provides a synergy between multi-task learning and co-evolutionary algorithms to address dynamic time series prediction. The method features effective use of building blocks of knowledge inspired by dynamic programming and multi-task learning. It enables neural networks to retain modularity during training for making a decision even in situations when certain inputs are missing. The effectiveness of the method is demonstrated using one-step-ahead chaotic time series and tropical cyclone wind-intensity prediction.
Keywords: coevolution; multi-task learning; modular neural networks; chaotic time series; dynamic programming.
§ INTRODUCTION
Time series prediction typically involves a pre-processing stage where the original time series is reconstructed into a state-space representation that is used as the dataset for training models such as neural networks <cit.>. The reconstruction involves breaking the time series using overlapping windows known as the timespan, taken at regular intervals which define the time lag <cit.>. Optimal values for the timespan and time lag are needed for effective prediction. These values vary with the type of problem and require costly computational evaluation for model selection; hence, some effort has been made to address this issue. Multi-objective and competitive coevolution methods have been used to take advantage of different features from the timespan during training <cit.>. Moreover, neural networks have been used for determining the optimal timespan of selected time series problems <cit.>. In time series for natural disasters such as cyclones <cit.>, it is important to develop models that can make predictions dynamically, i.e. the model has the ability to make a prediction as soon as any observation or data is available. The minimal value of the timespan can have a huge impact in the case of cyclones, where data is only available every 6 hours <cit.>. A way to address such categories of problems is to devise robust training algorithms and models that are capable of performing given different types of input or subtasks. We define dynamic time series prediction as a problem that requires dynamic prediction given a set of input features that vary in size.
It has been highlighted in recent work <cit.> that recurrent neural networks trained with a predefined timespan can only generalise well for the same timespan, which makes dynamic time series prediction a challenging problem. Time series prediction problems can be generally characterised into three major types of problems that include one-step <cit.>, multi-step-ahead <cit.>, and multi-variate time series prediction <cit.>. These problems at times may overlap with each other; for instance, a multi-step-ahead prediction can have a multi-variate component. Similarly, a one-step prediction can also have a multi-variate component, or a one-step-ahead prediction can be used for multi-step prediction and vice-versa. In this paper, we identify a special class of problems that require dynamic prediction, with the hope that the trained model can be useful for different instances of the problem. Multi-task learning employs shared representation knowledge for learning multiple instances from the same problem with the goal of developing models with improved performance in decision making <cit.>. We note that different values in the timespan can be used to generate several distinct datasets that have overlapping features, which can be used to train modules for shared knowledge representation as needed for multi-task learning. Hence, it is important to ensure that modularity is retained in such a way that decision making can take place even when certain inputs are missing. Modular neural networks have been motivated by repeating structures in nature and applied to visual recognition tasks <cit.>. Neuroevolution has been used to optimise performance and connection costs in modular neural networks <cit.>, which also has the potential of learning new tasks without forgetting old ones <cit.>. The features of modular learning provide motivation for it to be incorporated with multi-task learning for dynamic time series prediction. In dynamic programming, a large problem is broken down into sub-problems, from which at least one sub-problem is used as a building block for the optimisation problem. Although dynamic programming has been primarily used for optimisation problems, it has been briefly explored for data driven learning <cit.> <cit.>. The notion of using sub-problems as building blocks in dynamic programming can be used in developing algorithms for multi-task learning. Cooperative coevolution (CC) is a divide and conquer approach that divides a problem into subcomponents that are implemented as sub-populations <cit.>. CC has been effective for learning difficult problems using neural networks <cit.>. Potter and De Jong demonstrated that CC provides more diverse solutions through the sub-populations when compared to conventional evolutionary algorithms <cit.>. CC has been very effective for training recurrent neural networks for time series prediction problems <cit.>. Although multi-task learning has mainly been used for machine learning problems, the concept of shared knowledge representation has motivated other domains. In the optimisation literature, multi-task evolutionary algorithms have been proposed for exploring and exploiting common knowledge between tasks and enabling transfer of knowledge between them for optimisation <cit.>. It was demonstrated that knowledge from related tasks can help in speeding up the optimisation process and obtaining better quality solutions when compared to conventional (single-task optimisation) approaches.
Evolutionary multi-task learning has been used for efficiently training feedforward neural networks for the n-bit parity problem <cit.>, where different subtasks were implemented as different topologies that obtained improved training performance. In the literature, the synergy of dynamic programming, multi-task learning and neuroevolution has not been explored. Ensemble learning methods would be able to address dynamic time series prediction to an extent, where each ensemble member is defined by a timespan of the time series. However, they would not have the feature of shared knowledge representation that is provided through multi-task learning. Moreover, there is a need for a unified model for dynamic time series problems that require dynamic prediction. In this paper, we propose a co-evolutionary multi-tasking method that provides a synergy between multi-task learning, dynamic programming and coevolutionary algorithms. The method enables neural networks to be trained featuring shared and modular knowledge representation in order to make predictions given limited input features. This enables the learning process to employ modules of knowledge from the related subtasks as building blocks of knowledge for a unified model. The proposed method is used for one-step-ahead prediction of benchmark chaotic time series problems using feedforward neural networks. The method is also used for tropical cyclone wind-intensity prediction and addresses the problem of minimal timespan where dynamic prediction is required. The paper extends results presented in <cit.>. The rest of the paper is organised as follows. Section 2 gives a background on multi-task learning, cooperative neuro-evolution, and time series prediction. Section 3 gives details of the co-evolutionary multi-task learning method for dynamic time series prediction. Section 4 presents the results with discussion, and Section 5 presents the conclusions and directions for future research.
§ BACKGROUND AND RELATED WORK
§.§ Multi-task learning and applications
A number of approaches have been presented that consider multi-task learning <cit.> for different types of problems that include supervised and unsupervised learning <cit.>. The major approach to address negative transfer in multi-task learning has been through task grouping, where knowledge transfer is performed only within each group <cit.>. Bakker et al., for instance, presented a Bayesian approach in which some of the model parameters were shared and others loosely connected through a joint prior distribution learnt from the data <cit.>. Zhang and Yeung presented a convex formulation for multi-task metric learning by modeling the task relationships in the form of a task covariance matrix <cit.>. Moreover, Zhong et al. presented a flexible multi-task learning framework to identify latent grouping structures in order to restrict negative knowledge transfer <cit.>. Multi-task learning has recently contributed to a number of successful real-world applications that gained better performance by exploiting shared knowledge in a multi-task formulation. Some of these applications include 1) a multi-task approach for predicting the 'retweet' behaviour of individual users <cit.>, 2) recognition of facial action units <cit.>, 3) automated Human Epithelial Type 2 (HEp-2) cell classification <cit.>, 4) kin-relationship verification using visual features <cit.>, and 5) object tracking <cit.>.
§.§ Cooperative Neuro-evolution
Neuro-evolution employs evolutionary algorithms for training neural networks <cit.>, and can be classified into direct <cit.> and indirect encoding strategies <cit.>. In direct encoding, every connection and neuron is specified directly and explicitly in the genotype <cit.>. In indirect encoding, the genotype specifies rules or some other structure for generating the network <cit.>. The performance of direct and indirect encodings varies for specific problems. Indirect encodings seem very intuitive and have biological motivations; however, in several cases they have been shown not to outperform direct encoding strategies <cit.>. Cooperative coevolution for training neural networks is known as cooperative neuroevolution <cit.>. Although cooperative coevolution faced challenges in problem decomposition, it showed promising features that included modularity and diversity <cit.>. Further challenges have been in the area of credit assignment for subcomponents <cit.>, problem decomposition, and adaptation due to the problem of separability, which refers to grouping interacting or highly correlated variables <cit.>. In cooperative neuroevolution, problem decomposition has a major effect on the training and generalisation performance. Although several decomposition strategies have been implemented that vary for different network architectures, the two established decomposition methods are those at the synapse <cit.> and neuron level <cit.>. At the synapse level, the network is decomposed to its lowest level, where each weight connection (synapse) forms a subcomponent <cit.>. At the neuron level, the neurons in the network act as the reference point for each subcomponent <cit.>. Neuron level decomposition has shown good performance in pattern classification problems <cit.>. Synapse level decomposition has shown good performance in control and time series prediction problems <cit.>; however, it gave poor performance for pattern classification problems <cit.>. Chandra et al. applied neuron and synapse level decomposition for chaotic time series problems using recurrent neural networks <cit.>. Hence, it was established that synapse level encoding was more effective for time series and control problems <cit.>. Chandra later presented competition and collaboration between neuron and synapse decomposition strategies during evolution, which improved the performance further <cit.>. In Algorithm <ref>, the network is decomposed according to the selected decomposition method. Neuron level decomposition is shown in Figure <ref>. Once the decomposition is done, the subcomponents that are implemented as sub-populations are initialized and evolved in a round-robin fashion, typically for a fixed depth of search given by a number of generations. The evaluation of the fitness of each individual of a particular sub-population is done cooperatively by concatenating the current individual with the fittest individuals from the rest of the sub-populations <cit.>, as sketched in the example below. The concatenated individual is then encoded into the neural network where its fitness is evaluated and returned. Although it is a representative fitness, the fitness of the entire network is assigned to the particular individual of the sub-population. This is further illustrated in Figure <ref>.
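The cooperative fitness evaluation can be sketched in Python as follows (our own illustration; decode_and_evaluate is a placeholder for encoding the joint vector into the network and returning its training error):

import numpy as np

def cooperative_fitness(subpops, best, k, j, decode_and_evaluate):
    # Fitness of individual j of sub-population k: concatenate it with the
    # fittest individuals (best[m]) of every other sub-population, encode the
    # joint vector into the network, and return the network's error.
    parts = [subpops[m][j] if m == k else best[m] for m in range(len(subpops))]
    return decode_and_evaluate(np.concatenate(parts))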
§.§ Dynamic programming and reinforcement learning
Dynamic programming (also known as dynamic optimisation) is an optimisation strategy that considers breaking a large problem into sub-problems and using their solutions as building blocks to solve the bigger problem <cit.>. By reusing previously computed solutions taken from sub-problems, the paradigm improves computation time and also becomes efficient in memory or storage. Although dynamic programming has typically been an approach for optimisation and sequential problems <cit.>, it has been well used in the areas of machine learning (such as spoken word recognition <cit.>) and computer vision (such as variational problems <cit.>). Reinforcement learning, on the other hand, considers agents that take actions in an environment to maximise the notion of cumulative reward <cit.>. Reinforcement learning has a wide range of multi-disciplinary applications such as game theory, control theory, and operations research <cit.>. In the operations research and control literature, reinforcement learning is called approximate dynamic programming <cit.> or neuro-dynamic programming <cit.>. These approaches combine ideas from the fields of neural networks, cognitive science and approximation theory. In machine learning, the environment is typically formulated as a Markov decision process (MDP), where the outcomes are partly random and partly under the control of a decision maker. As opposed to classical dynamic programming, reinforcement learning does not assume knowledge of an exact mathematical model of the MDP. A recent application via deep learning considers learning of policies directly from high-dimensional sensory inputs in the challenging domain of classic Atari 2600 games <cit.>. Evolutionary algorithms have been proposed as a method for reinforcement learning <cit.>. Reinforcement learning via evolutionary algorithms has been implemented as neuroevolution <cit.> for neural networks, with application to playing the game of Go. Reinforcement learning has more recently been implemented with co-evolutionary algorithms in a classical control problem that considers balancing double inverted poles <cit.>. This further motivates the methodology presented in this paper, which provides a synergy between dynamic programming, reinforcement learning and neuroevolution for co-evolutionary multi-task learning.
§.§ Machine learning and optimisation
Essentially, machine learning algorithms have three components that include representation, evaluation and optimisation. Representation is done in the initial stage when the problem is defined and formulated. Representation considers the type of problem (classification, regression or prediction) and the evaluation metrics, such as squared error loss and classification performance. Representation also considers initialisation of the parameters, such as the weights of the neural network, and hyper-parameters, such as the learning rate. In the case of cooperative neuroevolution, the representation component would consider the encoding of the network weights into the sub-populations and initialising them for evolution. Evaluation and optimisation are components that iterate over time until a certain condition is met. Machine learning can also be seen as a data driven optimisation process. The learning procedure can be seen as solving a core optimisation problem that optimises the variables or parameters of the model with respect to the given loss function.
Evolutionary algorithms are typically considered as optimisation methods, and their synergy with neural networks in neuroevolution can be viewed as a learning procedure. In this paper, learning in neural networks is implemented using co-evolutionary algorithms that feature elements from multi-task learning and dynamic programming. Moreover, learning is also referred to as evolution in the context of neuroevolution. Bennett and Parrado-Hernández, in an introductory note to a special issue of a journal, mentioned that optimisation problems lie at the heart of most machine learning approaches <cit.>. They highlighted the need for dealing with uncertainty, convex models, hyper-parameters, and hybrid approaches of optimisation methods for learning. Furthermore, Guillory et al. showed that online active learning algorithms can be viewed as stochastic gradient descent on non-convex objective functions <cit.>.
§.§ Problems in time series prediction
Although a number of methods have been used for one-step-ahead prediction, neural networks have given promising results with different architectures <cit.> and algorithms that include gradient-based learning <cit.>, evolutionary algorithms <cit.>, and hybrid learning methods <cit.>. These methods can also be used for multi-step-ahead and multivariate time series prediction. Multi-step-ahead (MSA) prediction refers to the forecasting or prediction of a sequence of future values from observed trends in a time series <cit.>. It is challenging to develop models that produce low prediction error as the prediction horizon increases <cit.>. MSA prediction has been approached mostly with the recursive and direct strategies. In the recursive strategy, the prediction from a one-step-ahead prediction model is used as input for the future prediction horizon <cit.>. Although relatively new, a third strategy is a combination of these approaches <cit.>. Multi-variate time series prediction typically involves the prediction of single or multiple values from multi-variate input that is typically interconnected through some event <cit.>. Examples of single value prediction are the prediction of flour prices from time series obtained from different cities <cit.> and traffic time series <cit.>. The goal in this case is to enhance the prediction performance from the additional features in the input, although the problem can be solved in a univariate approach <cit.>. In the case of prediction of multiple values, the model needs to predict future values of the different features, for example, prediction of the latitude and longitude that define the movement of cyclones <cit.>. A recent study has shown that multivariate prediction performs better than univariate prediction for MSA as the prediction horizon becomes larger, since multi-variate information becomes more important <cit.>. Another area of problems in time series prediction consists of applications that have missing data. Wu et al. approached the missing data problem in time series with non-linear filters and neural networks <cit.>. In their method, a sequence of independent Bernoulli random variables was used to model random interruptions, which was later used to construct the state-space vector in the pre-processing stage. Furthermore, novel approaches that feature a synergy of different methodologies have recently been presented to address time series prediction. Extreme value analysis considers the extreme deviations from the median of probability distributions, which has been beneficial for time series prediction in the past <cit.>.
D'Urso et al. explored the grouping of time series with similar seasonal patterns using extreme value analysis with fuzzy clustering, with an application to daily sea-level time series in Australia <cit.>. Chouikhi et al. presented echo state networks for time series prediction where particle swarm optimisation was used to optimise the untrained weights, which gave an enhancement to learning <cit.>. Such approaches give motivation for developing a synergy of different methods in order to utilise their strengths and eliminate their weaknesses.
§ CO-EVOLUTIONARY MULTI-TASK LEARNING
§.§ Preliminaries: time series reconstruction
State-space reconstruction considers the use of Taken's theorem, which expresses that the state-space vector reproduces important characteristics of the original time series <cit.>. Given an observed time series x(t), an embedded state space Y(t) = [x(t), x(t-T), ..., x(t-(D-1)T)] can be generated, where T is the time delay, D is the timespan (also known as the embedding dimension), t = (D-1)T, DT, ..., N-1, and N is the length of the original time series. The optimal values for D and T must be chosen in order to efficiently apply Taken's theorem <cit.>. Taken proved that if the original attractor is of dimension d, then D = 2d+1 will be sufficient to reconstruct the attractor <cit.>. In the case of using feedforward neural networks, D is the number of input neurons.
§.§ Dynamic time series prediction
Natural disasters such as torrential rainfall, cyclones, tornadoes, wave surges and droughts <cit.> require dynamic and robust prediction models that can make a decision as soon as the event takes place. Therefore, if the model is trained over specific months for rainy seasons, the system should be able to make a robust prediction from the beginning of the rainy season. We define the event length as the duration of an event, which can be the number of hours of a cyclone or the number of days of a drought or torrential rain. As noted earlier, in a typical time series prediction problem, the original time series is reconstructed using Taken's theorem <cit.>. In the case of cyclones, it is important to measure the performance of the model when dynamic prediction is needed regarding the track, wind or other characteristics of the cyclone <cit.>. Dynamic prediction can provide early warnings to the community at risk. For instance, data about tropical cyclones in the South Pacific is recorded at six-hour intervals <cit.>. If the timespan D = 6, the first prediction by the model at hand would come after 36 hours, which could have devastating effects. The problem arises when the gap between each data point in the time series is a day or a number of hours. The problem with existing models such as neural networks used for cyclones is the minimal timespan D needed to make a prediction. It has been reported that recurrent neural networks trained with a given timespan (e.g. D = 5) cannot make robust predictions for other timespans (e.g. D = 7 or D = 3) <cit.>. Therefore, we introduce and define the problem of dynamic time series prediction, which refers to the ability of a model to give a prediction given a set of timespan values rather than a single one. This enables the model to make a decision with the minimum value of the timespan in cases when the rest of the features or data points are not available. Conventional one-step-ahead time series prediction can be given by
x = x[t], x[t-1], ..., x[t-D]
x̂[t+1] = f(x)
where f(.) is a model such as a feedforward neural network, D is a fixed value for the timespan, and x refers to the input features.
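A minimal Python sketch of the state-space reconstruction described above (ours; the function name reconstruct is an assumption, not from the paper) producing one-step-ahead targets is:

import numpy as np

def reconstruct(x, D, T=1):
    # Each row holds [x(t), x(t-T), ..., x(t-(D-1)T)]; y holds the
    # one-step-ahead target x(t+1), following Taken's embedding.
    start = (D - 1) * T
    rows = [[x[t - k * T] for k in range(D)] for t in range(start, len(x) - 1)]
    return np.array(rows), np.array(x[start + 1:])

x = np.sin(0.1 * np.arange(200))   # toy series for illustration
X, y = reconstruct(x, D=4, T=2)
print(X.shape, y.shape)            # (193, 4) (193,)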
In the case of dynamic time series, rather than a single value, we consider a set of values for the timespan Ω_m = [D_1, D_2, ..., D_M], where M is the number of subtasks, given M ≤ D. Hence, the input features for each subtask in dynamic time series prediction can be given by Ψ_m, where m = 1, 2, ..., M:
Ψ_m = x[t], x[t-1], ..., x[t-Ω_m]
§.§ Method
In the proposed method, a co-evolutionary algorithm based on a dynamic programming strategy is proposed for multi-task learning. It features problem decomposition in a similar way as cooperative coevolution; however, the major difference lies in the way the solutions of the subcomponents are combined to build the final solution. Hence, the proposed co-evolutionary multi-task learning algorithm is inspired by the strategies used in dynamic programming, where a subset of the solution is used as the main building block for the optimisation problem. In this case, the problem is learning the weights of a cascaded neural network architecture, where the base problem is the network module defined by the lowest number of input features and hidden neurons. The weights in the base network are part of larger cascaded network modules that consist of additional hidden neurons and input features. This can be viewed as modules of knowledge that are combined for larger subtasks that use knowledge from smaller subtasks as building blocks. The cascaded network architecture can also be viewed as an ensemble of neural networks that feature distinct topologies in terms of the number of input and hidden neurons, as shown in Figure <ref>. Referring to the modules in the cascaded ensemble, there are M modules with input i and hidden h layers as shown:
I = [i_1, i_2, ..., i_M]
H = [h_1, h_2, ..., h_M]
O = [1, 1, ..., 1]
where I, H, and O contain the set of input, hidden and output layers. Note that the approach considers a fixed number of output neurons. Since we consider one-step-ahead time series problems, one neuron in the output layer is used for all the respective modules. The input for each of the modules is given by the dynamic nature of the problem that considers different lengths of the timespan, which constructs an input vector Ψ_m for the given module as follows:
Ψ_m = x[t], x[t-1], ..., x[t-I_m]
Note that the input-hidden layer weights ω_m and the hidden-output layer weights υ_m are combined for the respective module m. The base knowledge module is given as Φ_1 = [ω_1, υ_1]. The subtask θ_m is defined as the problem of training the respective knowledge module Φ_m with given input Ψ_m. Note that Figure <ref> explicitly shows the knowledge modules of the network for ω_2 and ω_1, respectively. The knowledge module for each subtask is constructed in a cascaded network architecture as follows:
Φ_1 = [ω_1, υ_1]; θ_1 = (Φ_1)
Φ_2 = [ω_2, υ_2]; θ_2 = [θ_1, Φ_2]
⋮
Φ_M = [ω_M, υ_M]; θ_M = [θ_M-1, Φ_M]
The vector of knowledge modules considered for training or optimisation is therefore Φ = (Φ_1, ..., Φ_M), with module outputs
y_1 = f(θ_1, Ψ_1)
y_2 = f(θ_2, Ψ_2)
⋮
y_M = f(θ_M, Ψ_M)
Given T samples of data, the loss L for sample t can be calculated by the root mean squared error
L_t = √(1/M ∑_m=1^M (ŷ - y_m)^2)
where ŷ is the observed time series and y_m is the prediction given by subtask m. The loss E for the entire dataset (all subtasks) is given by
E = 1/T ∑_t=1^T L_t
The training of the cascaded network architecture involves decomposition into subtasks through the co-evolutionary multi-task learning (CMTL) algorithm. The knowledge modules in the subtasks, denoted by Φ_m, are implemented as subcomponents S_1, S_2, ..., S_M, where M is the number of subtasks.
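The construction of the subtask inputs Ψ_m and the combined loss of Equations (<ref>) and (<ref>) can be sketched in Python as follows (our illustration; predict stands in for the cascaded network f(θ_m, Ψ_m), and the example timespans mirror the topology example of Section 3.4):

import numpy as np

omega = [2, 3, 4]   # timespans D_1..D_M defining the M subtasks (example values)

def subtask_inputs(window, m):
    # window holds the D most recent observations, oldest first;
    # subtask m sees only its last omega[m] entries (Psi_m)
    return window[-omega[m]:]

def combined_loss(windows, targets, predict):
    # L_t = sqrt((1/M) sum_m (yhat_t - y_{m,t})^2), E = (1/T) sum_t L_t
    M = len(omega)
    L = [np.sqrt(np.mean([(targets[t] - predict(m, subtask_inputs(windows[t], m))) ** 2
                          for m in range(M)]))
         for t in range(len(targets))]
    return float(np.mean(L))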
The subcomponents are implemented as sub-populations consisting of matrices of variables that feature the weights and biases, S_m = Π_{i,j}, where i refers to the weights and biases and j refers to the individuals. The individuals of the sub-populations are referred to as the genotype, while the corresponding network modules are referred to as the phenotype. Unlike conventional transfer learning methods, the transfer of knowledge here is done implicitly through the sub-populations in CMTL. The additional subtasks are implemented through the cascades that utilise knowledge from the base subtask. The fitness of the cascade is evaluated by utilising the knowledge from the base subtask. This is done through CMTL, where the best solution from the sub-population of the base subtask is concatenated with the current individual from the sub-population whose fitness needs to be evaluated. This is how transfer of knowledge is done implicitly through co-evolutionary multi-task learning.

Algorithm <ref> gives details of CMTL, which begins by initialising the sub-populations defined by the subtasks, which feature the knowledge modules Φ_m and the respective subtask input features Ψ_m. The sub-populations are initialised with real values in [-α, α] drawn from a uniform distribution, where α defines the range. Once this has been done, the algorithm moves into the evolution phase, where each subtask is evolved for a fixed number of generations defined by the depth of search, β. The major concern here is the way the phenotype is mapped into the genotype, where a group of weight matrices given by Φ_m = [ω_m, υ_m] that makes up subtask θ_m is converted into a vector X_m. Stage 1 in Algorithm <ref> implements the use of knowledge from previous subtasks through multi-task learning. If the subtask is a base problem (m == 1), then the subtask solution X_m is utilised in a conventional manner, where knowledge from other subtasks or modules is not required to reach a decision. However, given that the subtask is not a base problem, the current subtask individual X_m is appended with the best individuals from the previous subtasks; therefore, X_m = [B_1, ..., B_{m-1}, V_m], where B is the best individual from a previous subtask and V is the current individual that needs to be evaluated. This encodes X_m into the knowledge modules Φ = (Φ_1, …, Φ_M) for the respective subtasks. The algorithm then calculates the subtask network output or prediction y_m = f(θ_m, Ψ_m) and evaluates the individual through the loss function E = (1/T) ∑_{t=1}^{T} L_t, where L_t is given in Equation <ref>. The subtask solution is passed to Algorithm <ref> along with the network topology in order to decode the subtask solution into the respective weights of the network. This can be seen as the process of genotype-to-phenotype mapping. This procedure is executed for every individual in the sub-population, and is repeated for every sub-population over different phases until the termination condition is satisfied. The termination condition can be either the maximum number of function evaluations or a minimum fitness value on the training or validation dataset. Figure <ref> shows an exploded view of the neural network topologies associated with the respective subtasks; however, they are part of the cascaded network architecture shown later in Figure <ref>.
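Stage 1 of the cooperative fitness evaluation can be sketched as follows (the container names subpops and loss_fn are hypothetical; only the concatenation rule X_m = [B_1, ..., B_{m-1}, V_m] comes from the text):

```python
import numpy as np

def evaluate_individual(m, candidate, subpops, loss_fn):
    """CMTL cooperative fitness: for subtask m (0-indexed), prepend the best
    individuals of all earlier sub-populations to the current candidate,
    then decode the joint vector into the cascaded network and score it."""
    if m == 0:                                    # base subtask
        X_m = np.asarray(candidate)
    else:                                         # X_m = [B_1,...,B_{m-1}, V_m]
        best_prev = [sp["best"] for sp in subpops[:m]]
        X_m = np.concatenate(best_prev + [np.asarray(candidate)])
    return loss_fn(m, X_m)                        # loss E of Equation <ref>
```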
The way the subtask solution is decomposed and mapped into the network is given in Figure <ref> and discussed in detail in the next section. The major difference in the implementation of CMTL when compared to conventional cooperative neuroevolution (Algorithm <ref>) is the way the problem is decomposed and the way the fitness of each individual is calculated. CMTL is motivated by a dynamic programming approach, where the best solutions from previous sub-populations are used for the cooperative fitness evaluation of individuals in the current sub-population. However, the current sub-population does not use the best solutions from future sub-populations. In this way, the concept of utilising knowledge from previous subtasks as building blocks is implemented. On the other hand, cooperative neuroevolution follows a divide-and-conquer approach where, for any given sub-population, the best individuals from the rest of the sub-populations are taken into account in order to evaluate the individuals. Finally, when the termination criterion has been met, the algorithm moves into the testing phase, where the best solutions from all the different subtasks are saved and encoded into their respective network topologies. Once this is done, the respective subtask test data is loaded and the network makes predictions that are evaluated with the loss E given in Equation <ref>. Other measures of error can also be implemented. Hence, we have highlighted the association of every individual in the respective sub-populations with the different subtasks in the multi-task learning environment. There is transfer of knowledge, in terms of weights, from smaller to bigger networks as defined by the subtask and its data, which is covered in detail in the next section. A Matlab implementation of this algorithm with the respective datasets used for the experiments is given online [https://github.com/rohitash-chandra/CMTL_dynamictimeseries].

§.§ Transfer of knowledge

One challenging aspect of Algorithm <ref> is the transfer of knowledge, represented by the weights of the respective neural networks, that is learnt by the different subtasks in CMTL. The cascaded network architecture grows in terms of input and hidden neurons with the subtasks. Algorithm <ref> implements the transfer of knowledge given the changes in the architecture across the different subtasks. The goal is to transfer weights that are mapped from the different sub-populations defined by the subtasks. The algorithm is given the following input parameters:

* The reference to subtask m;
* The current subtask solution: X_m = [V_m] if m is the base task, and X_m = [B_1, ..., B_{m-1}, V_m] otherwise;
* The topology of the respective cascaded neural module for the different subtasks, in terms of the number of input, hidden, and output neurons.

We describe the algorithm with reference to Figure <ref>, which shows a case where subtask m = 3 goes through the transfer and m = 1 and m = 2 are used as building blocks of knowledge given in the weights. Therefore, we use the following examples for the network topology (see the sketch after this list):

* I_m is the vector of the number of input neurons for the respective subtasks, e.g. I = [2, 3, 4];
* H_m is the vector of the number of hidden neurons for the respective subtasks, e.g. H = [2, 3, 4];
* O_m is the vector of output neurons for the respective subtasks, e.g. O = [1, 1, 1].

The algorithm begins by assigning the base case, b = 1, which is applied irrespective of the number of subtasks.
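One consistent reading of the block structure that the transfer steps assemble is sketched below for the input-hidden weights (a hedged illustration under our own array-layout assumptions; the exact weight numbering follows Figure <ref>, which we cannot reproduce here):

```python
import numpy as np

def assemble_input_hidden(blocks, I, H):
    """Embed per-subtask weight blocks into the largest input-hidden matrix.
    I, H are cumulative neuron counts, e.g. I=[2,3,4], H=[2,3,4]; subtask m
    adds H[m]-H[m-1] hidden columns (connected to all I[m] inputs) and
    I[m]-I[m-1] input rows (connected to the earlier H[m-1] hidden units)."""
    W = np.zeros((I[-1], H[-1]))
    for m in range(len(I)):
        i0 = 0 if m == 0 else I[m - 1]
        h0 = 0 if m == 0 else H[m - 1]
        W[:I[m], h0:H[m]] = blocks[m]["new_hidden"]    # cf. Steps 1 and 3
        if m > 0:
            W[i0:I[m], :h0] = blocks[m]["new_input"]   # cf. Step 4
    return W
```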
In Step 1, the transfer of input-hidden layer weights is shown by weights (1-4) in Figure <ref>. Step 2 executes the transfer for the hidden-output layer weights, as shown by weights (5-6) in Figure <ref>. Note that Steps 1 and 2 are applied in all the cases given by the number of subtasks. Once this is done, the algorithm terminates if m = 1 or proceeds if m >= 2. Moving on, in Step 3, the case is more complex, as we consider m >= 2. Steps 1 and 2 are executed before moving to Step 3, where X contains the appended solution sets from the previous subtasks. In Step 3, t in principle points to the beginning of the solution given by the sub-population for m = 2. Here, the transfer of input-hidden layer weights (7-9) is executed for m = 2. Note that in this case, we begin with the weights with reference to the number of hidden neurons from the previous subtask, j = H_(m-1) + 1, and move to the number of hidden neurons of the current subtask, j = H_(m), in order to transfer the weights to all the input neurons. This refers to weights (7-9) in Figure <ref>. Before reaching the transfer for m = 3, the transfers for m = 1 and m = 2 would already have taken place, and hence the weights (13-16) would be transferred, as shown in the same figure. Moving on to Step 4, we first consider the transfer of input-hidden layer weights for m = 2 through the transfer of weights from the beginning of the previous subtask input, i = I_(m-1) + 1, to the current subtask input connected with all hidden neurons. This is given by weights (10-11) in Figure <ref>. For the case of m = 3, this refers to weights (17-19) in the same figure. Finally, in Step 5, the algorithm executes the transfer of hidden-output layer weights based on the hidden neurons from the previous subtask. In the case of m = 2, this results in transferring weight (12), and for m = 3, the transfer is weight (20) in Figure <ref>, respectively. Note that the algorithm can transfer any number of input and hidden neurons as the number of subtasks increases.

The time complexity of CMTL considers the time taken for the transfer of solutions for different numbers of subtasks. We note that the best case is when the subtask is the base subtask (m = 1). Therefore, the worst-case time complexity can be given by

T(m <= 1) = O(1)
T(m) = T(m-1) + T(m-2) + ... + O(1)
T(m) = O(2^m)

where m refers to the subtasks.

§ SIMULATION AND ANALYSIS

This section presents an experimental study that compares the performance of CMTL with conventional neuroevolution methods. The results are compared with neuroevolution via an evolutionary algorithm (EA) and cooperative neuroevolution (CNE) for benchmark time series prediction problems. Furthermore, tropical cyclones from the South Pacific and South Indian Oceans are considered to address the minimal timespan issue <cit.> using dynamic time series prediction.

§.§ Benchmark Chaotic Time Series Problems

In the benchmark chaotic time series problems, the Mackey-Glass, Lorenz, Henon and Rossler series are the four synthetic time series problems. The experiments use chaotic time series of length 1000 generated by the respective chaotic attractors. The first 500 samples are used for training and the remaining for testing. In all cases, the phase space of the original time series is reconstructed with the timespan for 3 datasets for the respective subtasks, with the set of timespans Ω = [3, 5, 7] and time lag T = 2. All the synthetic and real-world time series were scaled in the range [0, 1]. Further details of each of the time series problems are given as follows. The Mackey-Glass time series has been used in the literature as a benchmark problem due to its chaotic nature <cit.>.
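As an illustration of how such a benchmark series can be produced, a sketch of a Mackey-Glass generator follows (the parameter values β=0.2, γ=0.1, n=10, τ=17 are the commonly used chaotic setting, not taken from the paper):

```python
import numpy as np

def mackey_glass(length, tau=17, beta=0.2, gamma=0.1, n=10, dt=1.0):
    """Euler integration of dx/dt = beta*x(t-tau)/(1+x(t-tau)^n) - gamma*x(t),
    scaled to [0, 1] as in the experiments."""
    lag = int(tau / dt)
    x = np.full(lag + length, 1.2)                # constant initial history
    for t in range(lag, lag + length - 1):
        x_tau = x[t - lag]
        x[t + 1] = x[t] + dt * (beta * x_tau / (1 + x_tau ** n) - gamma * x[t])
    s = x[lag:]
    return (s - s.min()) / (s.max() - s.min())

series = mackey_glass(1000)   # first 500 points for training, rest for testing
```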
The Lorenz time series was introduced by Edward Lorenz, who contributed extensively to the establishment of chaos theory <cit.>. The Henon time series is generated with a Henon map, which is a discrete-time dynamical system that exhibits chaotic behaviour <cit.>, and the Rossler time series is generated using the attractor of the Rossler system, a system of three non-linear ordinary differential equations, as given in <cit.>. The real-world problems are the Sunspot, ACI finance and Laser time series. The Sunspot time series is a good indication of the solar activity over solar cycles, which impacts Earth's climate, weather patterns, and satellite and space missions <cit.>. The Sunspot time series from November 1834 to June 2001 is selected, which consists of 2000 points. The ACI financial time series is obtained from ACI Worldwide Inc., one of the companies listed on the NASDAQ stock exchange. The dataset contains closing stock prices from December 2006 to February 2010, which is equivalent to approximately 800 data points. The closing stock prices were normalised between 0 and 1. The dataset features the recession that hit the U.S. market in 2008 <cit.>. The Laser time series was measured in a physics laboratory experiment and was used in the Santa Fe Competition <cit.>. All the real-world time series used the first 50 percent of the samples for training and the remaining for testing.

§.§ Cyclone time series

The Southern Hemisphere tropical cyclone best-track data from the Joint Typhoon Warning Centre, recorded every 6 hours, is used as the main source of data <cit.>. We consider only the austral summer tropical cyclone seasons (November to April) from 1985 to 2013 in the current study, as data prior to the satellite era is not reliable due to inconsistencies and missing values. The original data of tropical cyclone wind intensity in the South Pacific was divided into training and testing sets as follows:

* Training Set: Cyclones from 1985 - 2005 (219 cyclones with 6837 data points)
* Testing Set: Cyclones from 2006 - 2013 (71 cyclones with 2600 data points)

In the case of the South Indian Ocean, the details are as follows:

* Training Set: Cyclones from 1985 - 2001 (285 cyclones with 9365 data points)
* Testing Set: Cyclones from 2002 - 2013 (190 cyclones with 8295 data points)

Although the cyclones are separate events, we choose to combine all the cyclone data in consecutive order, as given by their dates of occurrence. The time series is reconstructed with the set of timespans Ω = [3, 5, 7] and time lag T = 2.

§.§ Experimental Design

We note that the multi-task learning approach used for dynamic time series can be formulated as a series of independent single-task learning approaches. Hence, for comparison with CMTL, we provide experimentation and results with conventional neuroevolution methods that can be considered as single-task learning (CNE and EA). In the case of CNE, neuron-level problem decomposition is applied for training feedforward networks <cit.>. We employ covariance matrix adaptation evolution strategies (CMAES) <cit.> as the evolutionary algorithm in the sub-populations of CMTL and CNE and in the population of EA. The training and generalisation performances are reported for each case given by the different subtasks in the respective time series problems. The respective neural networks used sigmoid units in both the hidden and output layers for all the different problems.
The loss function given in Equation <ref> is used as the main performance measure. Each neural network architecture was tested with different numbers of hidden neurons. We employ a fixed depth of search of 5 generations in the sub-populations of CMTL, as it gave optimal performance in trial runs. CNE also employs the same value. Note that all the sub-populations evolve for the same depth of search. The population size of CMAES in the respective methods is given by P = 4 + floor(3 log(W)), where W is the total number of weights and biases for the entire neural network that includes all the subtasks (CMTL). In the case of EA and CNE, W is the total number of weights and biases for the given network architecture. The termination condition is fixed at 30 000 function evaluations for each subtask; hence, CMTL employs 120 000 function evaluations, while the conventional methods use 30 000 for each of the respective subtasks for all the problems. Note that, since there is a fixed training time, no validation set was used to stop training. The choice of parameters such as the appropriate population size and termination condition was determined in trial experiments. The experiments are well aligned with the experimental settings of previous work <cit.>.

§.§ Results for Benchmark Problems

The results for the 7 benchmark time series problems are given in Figures <ref> to <ref>, which highlight the training and generalisation performance. We limit our discussion to the generalisation performance, although the training performance is also shown. Figure <ref> shows that the CMTL generalisation performance is better than EA and CNE, while CMTL and EA outperform CNE in all the subtasks denoted by the timespan. The same trend is shown in general for the Lorenz and Henon time series, as shown in Figure <ref> and Figure <ref>, respectively. There is one exception (D=5) for the Henon problem, where CNE gives better performance than EA, but worse than CMTL. Figure <ref> shows the results for the Rossler time series, which follows a similar trend when compared to the previous problems. Hence, in general, the CMTL generalisation performance is the best when compared to the conventional methods (CNE and EA) for the 4 synthetic time series problems, which have little or no noise present.

In the case of the real-world problems, Figure <ref> for the Sunspot problem shows that CMTL provides the best generalisation performance when compared to EA and CNE in all the cases. The same holds for the first two timespan cases of the ACI finance problem, as shown in Figure <ref>, except for one case (D=7), where EA and CMTL give the same performance. In the case of the Laser time series in Figure <ref>, which is known as one of the most chaotic time series problems, CMTL outperforms CNE and EA, except for one case, D=7. Therefore, at this stage, we can conclude that CMTL gives the best performance for most of the cases in the real-world time series problems. Table <ref> shows the mean RMSE and confidence interval across the 3 timespans. We find that CMTL performs better than EA and CNE for almost all the problems. The Laser problem is the only exception, where EA is slightly better than CMTL.

§.§ Results for Tropical Cyclones

We present the results for the performance of the given methods on the two selected cyclone problems, which feature the South Pacific and South Indian Oceans, as shown in Figure <ref> and Figure <ref>, respectively.
Figures <ref> and <ref> show the prediction performance of a typical experimental run. In the case of the South Pacific Ocean, the results show that CMTL provides the best generalisation performance when compared to CNE and EA. This is also observed for the South Indian Ocean.

§.§ Discussion

The goal of the experiments was to evaluate whether CMTL can maintain quality in prediction performance when compared to conventional methods, while at the same time addressing dynamic time series problems. The results have shown that CMTL not only addresses dynamic time series but is also a way to improve the performance over the case where each of the subtasks in multi-task learning is decomposed and approached as single-task learning. The incremental learning in CMTL not only improves the prediction performance but also ensures modularity in the organisation of knowledge. Modularity is an important attribute for addressing dynamic prediction problems, since groups of knowledge can be combined to make a decision when the nature or complexity of the problem increases. Modularity is also important for the design of neural networks in hardware <cit.>, as disruptions in certain synapse(s) can result in problems with the whole network, which can be eliminated by preserving knowledge as modules <cit.>. It is noteworthy that CMTL gives consistent performance even for the cases when the problem is harder, as with smaller or foundational subtasks that have minimal timespan. Learning smaller networks could be harder, since they have limited information about the past behaviour of the time series. The way the algorithm handles this issue is through the refinement of the solutions (in a round-robin manner through coevolution) after they have been transferred to a larger network. When more information is presented as the subtask grows, CMTL tends to refine the knowledge in the smaller subtasks. In this way, the results show that the performance is consistent for the small subtasks and as they increase depending on the timespan.

CMTL can be seen as a flexible method for datasets that have different features, some of which have properties that allow them to be grouped together as subtasks. Through multi-task learning, the overlapping features can be used as building blocks to learn the nature of the problem through the model at hand. Although feedforward neural networks have been used in CMTL, other neural network architectures and learning models can be used depending on the nature of the subtasks. In the case of computer vision applications such as face recognition, the different subtasks can be different numbers of features; i.e., the algorithm can execute face recognition based on minimal features from a set of features. CMTL could also be viewed as training cascaded networks using a dynamic programming approach where each cascade defines a subtask. Although the depth of the cascading does not have a limit, adding cascades could result in an exponential increase in training time. The depth of the cascaded architecture would be dependent on the application problem. It depends on the time series under consideration and the level of inter-dependencies between the current and the previous time steps.
In principle, one should stop adding cascades when the prediction performance begins to deteriorate beyond a given threshold. Therefore, for a given problem, there needs to be a systematic approach that selects the number of subtasks for the cascaded architecture.

We have experimentally tested robustness and scalability using synthetic and real-world datasets that include benchmark problems and an application that considers the prediction of wind intensity in tropical cyclones. We provided comprehensive experimentation on the algorithm's convergence given a range of initial conditions. These include different sets of initialisations of the sub-populations in CMTL, with multiple experimental runs, along with reporting of the mean and confidence interval. We evaluated the prediction capability given different instances of the timespan defined as subtasks in CMTL and compared the performance with standalone methods. The experimental design considered multiple experimental runs, different and distinct datasets, differences in the size of the datasets, and different sets of initialisations of the sub-populations. In this way, we have addressed the robustness and convergence of the proposed method experimentally.

In the comparison of CMTL with CNE, we observed that CMTL incurs a higher time complexity, since it has an additional step of transferring solutions from the different subtasks encoded in the sub-populations. The time taken would increase exponentially as the number of subtasks increases. This would add to the cost of utilising solutions from other subtasks given a fixed convergence criterion defined by the number of function evaluations. In the case where the convergence criterion is defined by a minimum error or loss, it is likely that the solutions from the previous subtasks will help in faster convergence. In terms of scalability, we note that neuroevolution methods have limitations due to the slow convergence of evolutionary algorithms. With the help of gradient-based local search methods, the convergence of CMTL can be improved via memetic algorithms, where local refinement occurs during the evolution <cit.>. There is scope in future work for the convergence proofs used in standard evolutionary algorithms <cit.> to be further extended to the multiple sub-populations in CMTL.

§ CONCLUSIONS AND FUTURE WORK

We presented a novel algorithm that provides a synergy between coevolutionary algorithms and multi-tasking for dynamic time series problems. CMTL can be used to train a model that can handle multiple timespan values, which defines dynamic input features and provides dynamic prediction. CMTL has been very beneficial for tropical cyclones, where a timely prediction needs to be made as soon as the event takes place. The results show that CMTL addresses the problem of dynamic time series and provides a robust way to improve the performance when compared to related methods.

It is important to understand how CMTL achieved better results when compared to related methods given the same neural network topology and data for the respective subtasks. CMTL can be seen as an incremental evolutionary learning method that features subtasks as building blocks of knowledge. The larger subtasks take advantage of knowledge gained from learning the smaller subtasks. Hence, there is diversity in the incremental knowledge development from the base subtask, which seems to be beneficial for future subtasks. However, the reason why the base subtask produces better results when compared to conventional learning can be explored with further analysis during learning.
The larger subtasks with overlapping features cover the base subtask in a cascaded manner. Therefore, larger subtasks can be seen as those that have additional features that guide the larger network with more hidden neurons during training. Finally, CMTL is a novel approach that provides a synergy of a wide range of fundamental methods, including dynamic programming, reinforcement learning, multi-task learning, co-evolutionary algorithms and neuro-evolution. This makes CMTL useful for some of the applications where the mentioned fundamental methods have been successful. Since reinforcement learning has been utilised in deep learning, the notion of reuse of knowledge as building blocks in CMTL could be applicable in areas of deep learning. In future work, apart from feedforward networks, the idea of dynamic time series prediction that employs transfer and multi-task learning could be extended to other areas that have simpler model representations, such as autoregressive models. Furthermore, CMTL can be used for other problems that can be broken into multiple subtasks, such as multiple-step-ahead and multivariate time series prediction. Although the paper explored the timespan for univariate time series, the approach could be extended to pattern classification problems that involve large-scale features. It can be extended to heterogeneous pattern classification problems where the dataset contains samples that have missing values or features. CMTL can also be extended to transfer learning problems that include both heterogeneous and homogeneous domain adaptation cases. In the case of tropical cyclones, a multivariate approach can be taken where the different subtasks can be seen as features that include cyclone tracks, sea surface temperature, and humidity.
Beijing National Laboratory for Condensed Matter Physics and Institute of Physics, Chinese Academy of Sciences, Beijing, 100190, P. R. China
smeng@iphy.ac.cn
Beijing National Laboratory for Condensed Matter Physics and Institute of Physics, Chinese Academy of Sciences, Beijing, 100190, P. R. China
Collaborative Innovation Center of Quantum Matter, Beijing, 100190, P. R. China

Ultrafast electronic dynamics in solids lies at the core of modern condensed matter and materials physics. To build up a practical ab initio method for studying solids under photoexcitation, we develop a momentum-resolved real-time time-dependent density functional theory (rt-TDDFT) algorithm using a numerical atomic basis, together with the implementation of both the length and vector gauges of the electromagnetic field. When applied to simulate elementary excitations in two-dimensional materials such as graphene, different excitation modes, distinguishable only in momentum space, are observed. Momentum-resolved rt-TDDFT is important and computationally efficient for the study of ultrafast dynamics in extended systems.

Momentum-resolved TDDFT algorithm in atomic basis for real time tracking of electronic excitation
Sheng Meng
December 30, 2023
===================================================================================================

§ INTRODUCTION

Real-time (rt) time-dependent density functional theory (TDDFT) is an efficient ab initio method to study electron dynamics in complex electron-nuclear systems in both the ground state and excited states. Compared with other widely used approaches, such as frequency-domain TDDFT, quasi-particle GW, and Bethe-Salpeter equations, rt-TDDFT has two major advantages: (i) the time-dependent Kohn-Sham (TDKS) equations in rt-TDDFT include all nonlinear effects and are intrinsically non-perturbative, making rt-TDDFT a better tool to describe materials in a strong field, and (ii) rt-TDDFT directly provides complete information on the real-time evolution of electronic wavefunctions together with ionic movements, presenting a unique way of tracking in real time ultrafast dynamics and complex phenomena far from equilibrium. Thus, rt-TDDFT is a natural choice for the exploration of strong-field physics and ultrafast phenomena. Motivated by the rapid developments in ultrafast experimental techniques, e.g., attosecond-based spectroscopy <cit.>, ultrastrong laser sources <cit.> and free-electron X-ray lasers <cit.>, rt-TDDFT is drawing more and more attention as a method to simulate ultrafast phenomena at the current research frontiers. Nevertheless, rt-TDDFT is not widely used as the method of choice in the literature, being much less popular than other density functional theory (DFT) based approaches such as ΔSCF, DFT+U, frequency-domain TDDFT, etc., largely because of its heavy computational cost.

Numerical atomic orbitals (NAO) have thus been a common choice to dramatically reduce the computational cost of simulating complex materials, and have been widely used in DFT codes such as SIESTA <cit.> and OpenMX <cit.>, and in the rt-TDDFT implementations by A. Tsolakidis <cit.> and X. Li <cit.>. The biggest advantage of using NAOs is the extremely small computational cost. To describe a system with N_a atoms, only about 10 × N_a NAOs are required, while 10^3 - 10^4 × N_a real-space grid points or plane waves have to be invoked. In addition, with a relatively small real-space cutoff for NAOs, order-N linear scaling with respect to system size can be achieved.
Since a major difficulty in developing rt-TDDFT is its extreme time consumption due to the use of an ultrasmall time step (of the order of ∼1 attosecond), NAO-based 𝐤-resolved rt-TDDFT is very promising for simulating realistic condensed matter systems, complex materials, and interfaces over long simulation times.

Most previous rt-TDDFT investigations focus on the photoabsorption and related properties of finite-size zero-dimensional (0D) systems (atoms/molecules/nanoparticles), including optical spectra <cit.>, excited-state dynamics <cit.>, solvation effects <cit.>, relativistic effects treated variationally <cit.>, photochemical stability <cit.>, and recently plasmonic excitations <cit.>. In 0D systems, only the single Γ point is needed in the reciprocal-space sampling. Thus, the Γ-only algorithm is overwhelmingly prevalent, as commonly implemented and used in the majority of rt-TDDFT simulations. However, to study photoexcitation and electronic dynamics in extended systems, Γ-only k-point sampling is insufficient, and momentum-resolved (𝐤-resolved) sampling of the reciprocal space is required.

An important advantage of using 𝐤-resolved rt-TDDFT is computational efficiency. With Γ-only TDDFT, to obtain accurate charge densities and ionic forces, an extraordinarily large supercell has to be invoked. Many previous studies of extended systems belong to this scenario <cit.>, including our recent studies on ultrafast electron-hole dynamics in dye-sensitized solar cells <cit.>, charge separation in van der Waals heterojunctions <cit.>, and nonthermal melting of silicon <cit.>. Using 𝐤-resolved algorithms, at the same level of accuracy, the supercell size, as well as the computational cost, can be largely reduced, as will be demonstrated later. Besides these technical advantages, the 𝐤-resolved algorithm introduces the important 𝐤-space resolution and a new degree of freedom, which is essential to describe key quantities and important physics in condensed matter materials, such as time-dependent band structures, quasiparticles, and valley dynamics. Only rt-TDDFT with 𝐤-resolved sampling can provide essential information concerning the real-time evolution of material properties.

Although 𝐤-resolved rt-TDDFT algorithms have been implemented by several groups <cit.> and applied to both semiconductors <cit.> and metals <cit.>, these implementations employ either real-space grids or plane waves as basis sets. With a much smaller basis set, the implementation of 𝐤-resolved rt-TDDFT algorithms in an NAO basis has advantages in efficiency. To exploit the advantages of NAOs, a new framework and a more complicated implementation of rt-TDDFT are required.

In this work, we strive to tackle the major challenges mentioned above in NAO-based rt-TDDFT. We have successfully developed a 𝐤-resolved rt-TDDFT algorithm based on local atomic basis sets using numerical atomic orbitals. Both the length and vector gauges of the electromagnetic field have been implemented. This approach enables rt-TDDFT calculations of solids and surfaces using rather simple unit cells, reducing the computational cost by several orders of magnitude. Moreover, momentum-resolved electron dynamics in the excited states can be tackled by this approach. For instance, 𝐤-selective photoexcitations in graphene are demonstrated here, where three distinct photoexcitation modes located at different 𝐤 points in the reciprocal space are induced upon laser illumination. This kind of 𝐤-dependent electronic dynamics is ubiquitous in extended systems such as periodic solids and interfaces.
Therefore, we expect highly efficient 𝐤-resolved rt-TDDFT algorithms employing local bases to be an important development that will be widely used in first-principles simulations of ultrafast phenomena under strong fields and in the optimal control of quantum materials.

§ METHODOLOGY

The main framework of the 𝐤-resolved rt-TDDFT algorithm is inherited from an earlier single-Γ version of the Time Dependent ab initio Package () <cit.>, which is based on the <cit.> package. In this rt-TDDFT algorithm, the flowchart of a given ionic step is shown in Fig. <ref>. Each process is described in detail in Secs. II A-II G, marked with the same labels as in Fig. <ref>. Atomic units ħ = m_e = e = 1 are used throughout this work.

§.§ Hamiltonian and overlap matrix

Adopting periodic boundary conditions, the lattice vectors of an extended system are denoted as 𝐑_s (s = 1, 2, 3, ..., N) and the atoms i in the unit cell are located at positions 𝐛_i, where N is truncated to construct a finite supercell. A set of numerical atom-centered orbitals (NAOs) {ξ_iα} is associated with each atom in the simulated system, where α denotes both the orbital and angular quantum numbers of an atomic orbital, each expressed in multiple radial basis functions ζ <cit.>. Here, since all the operators and functions are time-dependent, we only denote the explicit dependence on t as f(t) and omit t for implicit dependence.

The overlap matrix S_𝐤 and Hamiltonian H_𝐤 at each 𝐤 point in the reciprocal space are expressed in the NAO basis:

S_iα,jβ,𝐤 = ∑_s e^{-i 𝐤·𝐑_s} ⟨ξ_iα(𝐫 + 𝐑_s + 𝐛_i) | ξ_jβ(𝐫 + 𝐛_j)⟩,

H_iα,jβ,𝐤 = ∑_s e^{-i 𝐤·𝐑_s} ⟨ξ_iα(𝐫 + 𝐑_s + 𝐛_i) | Ĥ | ξ_jβ(𝐫 + 𝐛_j)⟩,

where

Ĥ = T̂ + ∑_I V_I^local(𝐫) + ∑_I V_I^KB + V^H(𝐫, ρ(𝐫)) + V^XC(𝐫, ρ(𝐫)) + V^ext(𝐫)

is the Hamiltonian operator. Here T̂ = -(1/2)∇_𝐫^2 is the kinetic energy operator, I is the index for atoms, V_I^local and V_I^KB are the local and Kleinman-Bylander parts of the pseudopotential for the Ith atom, V^H is the Hartree potential, V^XC is the exchange-correlation (XC) potential, and V^ext is the potential of the external field. Details of the calculation of ⟨ξ_iα(𝐫 + 𝐑_s + 𝐛_i)|Ĥ|ξ_jβ(𝐫 + 𝐛_j)⟩ are described in Ref. Ordejon1996. Within the adiabatic local density approximation (LDA) and generalized gradient approximations (GGA) <cit.> for the exchange-correlation functional, V^XC does not depend explicitly on time t, i.e., V^XC[ρ(𝐫,t),t] = V^XC[ρ(𝐫,t)]. Thus, most XC functionals of ground-state DFT, such as Perdew-Wang <cit.>, Perdew-Burke-Ernzerhof <cit.>, Becke-Lee-Yang-Parr <cit.>, and the van der Waals density functional <cit.>, are compatible with this implementation of rt-TDDFT.

§.§ External field

To simulate laser-matter interactions, a time-dependent electric field 𝐄(t) is introduced into the Hamiltonian to represent the external time-dependent laser field in two different scenarios: the length gauge and the vector gauge. Within the length gauge, the effect of the electric field 𝐄(t) is added to V^ext as a scalar potential,

V^ext(𝐫, t) = - 𝐄(t)·𝐫.

The time-dependent 𝐄(t) can adopt any shape in its time evolution. A most popular example is the shape of a Gaussian wave packet,

𝐄(t) = 𝐄_0 cos(2π f t + ϕ) exp[-(t-t_0)^2/(2σ^2)],

where f is the laser frequency, t_0 is the peak time, and ϕ is the phase factor.
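For reference, the pulse shape above is straightforward to tabulate (a sketch with illustrative parameter values of our choosing; the symbols follow the equation above, with the frequency given as a photon energy and converted via ħ):

```python
import numpy as np

def gaussian_pulse(t, E0=0.1, omega_eV=21.93, phi=0.0, t0=7.0, sigma=2.0,
                   hbar=0.6582):
    """E(t) = E0 cos(omega t + phi) exp(-(t-t0)^2 / (2 sigma^2)),
    with t, t0, sigma in fs, omega_eV the photon energy in eV, and
    hbar in eV*fs; E0, t0, sigma here are illustrative values."""
    return (E0 * np.cos(omega_eV * t / hbar + phi)
               * np.exp(-(t - t0) ** 2 / (2 * sigma ** 2)))

t = np.linspace(0.0, 20.0, 1001)   # fs
E = gaussian_pulse(t)
```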
We note that the translational symmetry of the Hamiltonian is broken by the introduction of a finite external field 𝐄 in the length gauge, since

V^ext(𝐫 + 𝐑_s, t) = - 𝐄(t) · (𝐫 + 𝐑_s) ≠ - 𝐄(t) · 𝐫.

Thus, a common solution is to use a sawtooth field along the spatial direction μ ∈ {x,y,z}:

E_μ(𝐫,t) = E_μ(t), for ϵ < x_μ < L_μ - ϵ,
E_μ(𝐫,t) = -E_μ(t) L_μ/(2ϵ), for -ϵ < x_μ < +ϵ,

where L_μ is the length of the unit cell along μ and ϵ → 0. Thus, -E_μ(t)L_μ/(2ϵ) → ∞, which requires that the charge density vanishes, ρ(x_μ) = 0, in the region -ϵ < x_μ < +ϵ; otherwise the energy diverges. Therefore, a vacuum layer is essential along μ. The requirement for a vacuum layer limits the application of theoretical approaches using the length-gauge field to the study of extended systems. Since there is no vacuum layer in extended bulk systems, the translational symmetry of the Hamiltonian is broken, H(𝐫 + 𝐑_s) ≠ H(𝐫), when using the length-gauge field. In addition, the length-gauge field is invalid for large systems and for short-wavelength perturbations <cit.>.

A dynamical electric field in the vector gauge, introduced via the vector potential 𝐀, preserves the translational symmetry of the Hamiltonian and thus removes the requirement of a vacuum layer <cit.>. The relation between 𝐄 and 𝐀 is

𝐄 = -(1/c) ∂𝐀/∂t;  𝐀 = -c ∫𝐄 dt.

The Hamiltonian in the presence of 𝐀 is then

H = (1/2m)(ħ𝐤 - (e/c)𝐀)^2 = (1/2m)(ħ𝐤 + e∫𝐄 dt)^2 = (ħ^2/2m)(𝐤 + 𝐤_𝐀)^2,

where

𝐤_𝐀 = (e/ħ)∫𝐄 dt = √2 ∫𝐄 dt

within Rydberg atomic units, where e = √2, ħ = 1 and t = ħ/Ry. The unit of 𝐤_𝐀 is Bohr^-1, the same as the unit of 𝐤.

§.§ Propagation

With the time-dependent (TD) Hamiltonian and overlap matrix, the TDKS equation is solved to obtain |u_n𝐤(𝐫,t+Δt)⟩ from the state |u_n𝐤(𝐫,t)⟩ at the previous time step:

|u_n𝐤(𝐫, t_2)⟩ = exp[-i S^-1_𝐤(t') H_𝐤(t') Δt] |u_n𝐤(𝐫, t_1)⟩,

where Δt = t_2 - t_1 is the length of the time step, |u_n𝐤(𝐫,t)⟩ is a Bloch function, and t' ≈ (t_1 + t_2)/2. It is rather difficult to evaluate H_𝐤(t') and S_𝐤(t') directly. Because Δt is quite small (< 0.05 fs), the ionic positions barely change from t_1 to t_2. Since S_𝐤(t) is determined only by the ionic positions (Eq. (<ref>)), it is accurate enough to assume S_𝐤(t') ≈ S_𝐤(t_2). However, H_𝐤(t) may change considerably due to the rapid evolution of the electrons. To approximate H_𝐤(t') properly, the mid-point technique has been widely used <cit.>. Note that |u_n𝐤(𝐫)(t_2)⟩ does not depend explicitly on the other TDKS orbitals |u_n'𝐤'(𝐫)(t_1)⟩ (n' ≠ n or 𝐤' ≠ 𝐤), as a result of the v-representability of the TDKS equations <cit.>. This decouples the evolution equations of the different TDKS orbitals and makes TDDFT calculations practical. The scheme nevertheless accounts for both interband and intraband scatterings: because H_𝐤 is determined by the total charge density, which is a weighted summation over all the occupied orbitals, there still exists an implicit coupling between different TDKS orbitals.

Numerically, the time propagator exp(-i S^-1_𝐤 H_𝐤 Δt) in Eq. (<ref>) is expanded using the first-order Crank-Nicolson scheme:

|u_n𝐤(𝐫, t_2)⟩ = [1 - i S^-1_𝐤 H_𝐤 Δt/2] / [1 + i S^-1_𝐤 H_𝐤 Δt/2] |u_n𝐤(𝐫, t_1)⟩.

Technically, since computing S^-1_𝐤 is the most time-consuming part of the calculation of Eq. (<ref>), we minimize the number of times it is computed. S^-1_𝐤 is updated only when the atomic positions, and thus the centers of the NAOs, 𝐛_i, change. Consequently, when the ions are fixed, S^-1_𝐤 is computed only once, at the first ionic step. Even with the ions moving, S^-1_𝐤 only needs to be updated once per ionic step.
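In dense linear algebra, one propagation step of the Crank-Nicolson form above reads as follows (a minimal sketch; the production code caches S^-1 per ionic step and exploits sparsity):

```python
import numpy as np

def crank_nicolson_step(c, S, H, dt):
    """Advance NAO coefficient vectors c (columns = states) by one step of
    (1 + i A dt/2)^-1 (1 - i A dt/2), with A = S^-1 H."""
    A = np.linalg.solve(S, H)                 # S^-1 H
    I = np.eye(A.shape[0], dtype=complex)
    rhs = (I - 0.5j * dt * A) @ c
    return np.linalg.solve(I + 0.5j * dt * A, rhs)
```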
§.§ Updating charge density

With |u_n𝐤(𝐫,t_2)⟩ solved in Eq. (<ref>), the density matrix (DM) ρ_iα,jβ(t_2) is computed accordingly as

ρ_iα,jβ(t_2) = ∑_n ∑_𝐤 q_n,𝐤 |u_n𝐤(𝐫, t_2)⟩⟨u_n𝐤(𝐫, t_2)| = ∑_n ∑_𝐤 q_n,𝐤 c^*_n,iα,𝐤(t_2) c_n,jβ,𝐤(t_2),

where q_n,𝐤 is the electronic population of band n at 𝐤, and c_n,jβ,𝐤(t_2) is the coefficient of |u_n𝐤(𝐫, t_2)⟩ in the NAO basis:

|u_n𝐤(𝐫, t_2)⟩ = ∑_jβ c_n,jβ,𝐤(t_2) ξ_jβ(𝐫).

§.§ Self-consistent evolution

We use the self-consistent process described in Ref. <cit.> during the time evolution of the charge density. This process substantially increases the numerical stability <cit.>. All the convergence criteria developed in SIESTA are compatible with the current approach, such as using the maximum element of the DM difference, the energy difference, or the Harris energy difference as a criterion for achieving self-consistency <cit.>. Here, we use the DM difference as an example. Convergence of the charge density during time evolution is reached when

max{|ρ^new_iα,jβ - ρ_iα,jβ|} < η,

where η is about 10^-4.

§.§ Mixing

If not converged, linear mixing of the DM is needed to generate the input DM for computing the charge density ρ_next in the next cycle, instead of using ρ_new directly:

ρ = (1-w)ρ + wρ_new,

where the ρ on the right side of Eq. (<ref>) is the input DM, ρ_new is the output DM, and w is the mixing weight, usually w = 0.1 - 0.5.

§.§ Postprocessing

If the self-consistent time evolution of the charge density has converged, the postprocessing steps are invoked, including the calculation of the total energy, Hellmann-Feynman forces, ionic movements, etc. These functions are implemented in SIESTA <cit.> and used compatibly in TDAP <cit.>. We note that rt-TDDFT in an atomic orbital basis gives rise to additional Pulay terms that contribute to the force evaluations <cit.>. The total force is the combination of the Hellmann-Feynman force and the Pulay term. With the calculated forces, the coupled electron-ion motion can be simulated based on classical ionic trajectories, in the framework of Ehrenfest dynamics. In Ehrenfest dynamics, the forces on the ions are averaged over the adiabatic electronic states along all possible ionic paths. If one path is dominating, or many similar potential energy surfaces are involved, Ehrenfest dynamics works very well <cit.>; otherwise, the classical trajectory approximations of Ehrenfest dynamics become less accurate <cit.>. Furthermore, detailed balance for quantum electronic states is absent in Ehrenfest dynamics. Thus, the applications of the present method are limited to cases where the averaged potential energy surface yields a reasonable description of the coupled electron-ion dynamics. Since we focus on the dynamics of excited electrons in this work, the ions are fixed in the simulations.

Here we introduce in detail some analysis tools for typical rt-TDDFT simulations. First, we can evaluate the state-to-state transition probabilities between TDKS orbitals during the time evolution <cit.>:

P_nn'𝐤 = |C_nn'𝐤|^2 = |⟨v_n𝐤|S_𝐤|u_n'𝐤⟩|^2,

where |v_n𝐤⟩ is the adiabatic basis satisfying

H_𝐤|v_n𝐤(𝐫)⟩ = E_n𝐤 S_𝐤|v_n𝐤(𝐫)⟩.

The population 𝓆_n𝐤 of the adiabatic state n𝐤 is thus projected from the TDKS orbitals at a given time as

𝓆_n𝐤 = ∑_{n'∈ n_𝐤,occ} q_n'𝐤 P_nn'𝐤,

where n_𝐤,occ labels the occupied states at the 𝐤 point. For finite systems and surface slabs, we can calculate the time-dependent dipole moment along a given direction. For periodic systems, the dipole moment is ill-defined; instead, we calculate the time-dependent current,

𝐣 = -(ieħ/m) ∑_n (⟨u_n𝐤|∇|u_n𝐤⟩ - ⟨u_n𝐤|∇|u_n𝐤⟩^*),

as the response function.
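The density-matrix update and linear mixing above amount to a few array operations (a sketch; the array layouts are our assumption):

```python
import numpy as np

def density_matrix(coeffs, occ):
    """rho_{ia,jb} = sum_{n,k} q_{n,k} c*_{n,ia,k} c_{n,jb,k};
    coeffs[k] has shape (n_bands, n_orbitals), occ[k] holds q_{n,k}."""
    n_orb = coeffs[0].shape[1]
    rho = np.zeros((n_orb, n_orb), dtype=complex)
    for c_k, q_k in zip(coeffs, occ):
        rho += np.einsum('n,ni,nj->ij', q_k, c_k.conj(), c_k)
    return rho

def mix_and_check(rho_in, rho_out, w=0.3, eta=1e-4):
    """Linear mixing rho <- (1-w) rho + w rho_new, plus the max-element
    DM-difference convergence test."""
    converged = np.max(np.abs(rho_out - rho_in)) < eta
    return (1 - w) * rho_in + w * rho_out, converged
```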
§ RESULTS AND DISCUSSION

§.§ Momentum-resolved versus supercell approaches

To demonstrate the 𝐤-resolved algorithm, we choose graphene as the model system (see Fig. <ref>(a)). An exotic property of graphene (and of other Dirac materials) is the linear dispersion near the K point, namely E(𝐤) = v_F 𝐤, where E is the band energy and v_F is the Fermi velocity, which can reach 10^6 m/s. To describe all the Bloch electrons, especially those near the Fermi energy, two kinds of strategies are used: unit cell calculations with 𝐤-resolved reciprocal-space sampling, or a supercell approach with Γ-only 𝐤-sampling. To demonstrate the advantages of the 𝐤-resolved algorithm, we compare three cases: (i) a unit cell with a Monkhorst-Pack <cit.> N_k × N_k × 1 𝐤-point mesh, covering all important special 𝐤-points M, Γ and K and facilitating a line-mode analysis along M → Γ → K → M [Fig. <ref>(b)]; (ii) an N_c × N_c × 1 supercell with the single Γ point; and (iii) an N_c × N_c × 1 supercell with the single K point.

To compare the computational accuracy of these three cases, we define an error function Δ as

Δ = (1/T) ∫_0^T |E_ex(t) - E^ref_ex(t)| dt,

where T is the total simulation time, E^ref_ex is the excitation energy of the reference case, and E_ex(t) is the excitation energy

E_ex(t) = E_KS(t) - E_KS(t=0),

where E_KS is the total energy of the system. Here, we evaluate Δ under the following settings: the Gaussian-shaped laser pulse [Eq. (<ref>)] with ϕ = 0, t_0 = 7.0 fs, σ = 2.0 fs, and f = 21.93 eV is applied; the total simulation time is T = 20 fs; and the reference energy E^ref_ex is calculated with a 60×60×1 𝐤-point mesh. A diagram illustrating the definition of Δ is shown in the inset of Fig. <ref>. The time step is chosen as Δt = 0.02 fs and the total time is 20 fs. Troullier-Martins pseudopotentials <cit.>, the adiabatic local density approximation (ALDA) exchange-correlation functional <cit.>, and an auxiliary real-space grid equivalent to a plane-wave cutoff of 75 Ry are used. In the description of the C atoms, we use a basis set of 8 double-ζ orbitals {2s(2ζ), 2p_x(2ζ), 2p_y(2ζ), 2p_z(2ζ)} and 5 polarization orbitals {P_d_xy, P_d_yz, P_d_z^2, P_d_xz, P_d_x^2-y^2}. We run the test cases on one 8-core Intel(R) Xeon(R) CPU E5-2650@2.00GHz.

We plot Δ for these three cases in Fig. <ref>. The error Δ decreases as N = N_k (or N_c) increases. The same magnitude of Δ is achieved with N_c = N_k. That is to say, the unit cell approach with an N × N × 1 𝐤-point mesh is as accurate as the approach using an N × N × 1 supercell. To achieve an accuracy of Δ ≤ 2 meV/atom, N_k = 24 is needed. Thus, it can be predicted that N_c = 24 is needed for the supercell approach. However, we emphasize that the computational cost of calculating an N_c × N_c × 1 supercell is extremely heavy. As shown in Fig. <ref>, solving Eq. (<ref>) dominates (∼80%) the computer time at large N_k (N_c); it scales linearly with the total number of 𝐤 points, N_k^2, and quadratically with the total number of atoms, N_c^2. The CPU clock time t_c thus approximately scales as O(N_k^2 × N_c^4), so t_c ∝ N_c^4 for supercell calculations, while t_c ∝ N_k^2 for 𝐤-resolved calculations at the same level of accuracy. As N increases, this difference becomes more significant. For supercell calculations, we are able to compute supercells only up to N_c = 9, which already costs over 2×10^3 min. At the same accuracy level, the N_k = 9 calculation costs only 20 min, i.e., only 1/100 of that for the N_c = 9 case, consistent with the time-complexity analysis N_k^2/N_c^4 = 1/81. As mentioned above, N_k = 24 or N_c = 24 is needed for relatively accurate calculations.
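The error measure Δ defined above is simple to evaluate on the stored excitation-energy traces (a sketch using the trapezoidal rule; the variable names are ours):

```python
import numpy as np

def error_delta(t, e_ex, e_ref):
    """Delta = (1/T) * integral_0^T |E_ex(t) - E_ref(t)| dt on a common grid."""
    return np.trapz(np.abs(e_ex - e_ref), t) / (t[-1] - t[0])
```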
To fulfill this requirement, the calculation with N_k = 24 costs only about 1 hour, showing that it is readily accessible and efficient. In contrast, calculating an N_c = 24 supercell would require a computer time of over 576 hours (24 days), which is prohibitively heavy for real applications. Regarding the computational accuracy and efficiency, we choose the 24×24×1 𝐤-point mesh to achieve an extremely dense sampling of the Brillouin zone. With the small unit cell of graphene, the number of real-space grid points is ∼1000, which is 30 times the number of NAOs used. Considering that the evolution algorithm has a computational complexity of O(n^2), where n is the number of basis functions, the computer time for wavefunction evolution using the NAO basis is largely reduced, to 1/90 of that using a real-space grid basis. In practical calculations using the same number of message-passing-interface (MPI) processes, the reduction in total computer time is tested to be about 1/5 to 1/10, depending on the system under consideration <cit.>.

§.§ Out-of-plane excitation

We then apply a laser field polarized perpendicular to the graphene plane to excite electrons in graphene, i.e., in a set-up of small-angle scattering. Since there is a vacuum layer along the out-of-plane direction, the laser field in the length gauge can be used. We first calculate the dielectric function of graphene, α_μ,ν, to locate the photon energy for resonant excitation, where μ,ν denote the spatial directions, μ,ν ∈ {x,y,z}. The α_μ,ν describes the response of the dipole moment P_μ(ω) to the electric field E_ν(ω) in the frequency domain:

P_μ(ω) = α_μ,ν(ω) E_ν(ω).

In rt-TDDFT calculations, we apply the electric field E_ν(t) and obtain the dipole moment P_μ(t) in the time domain. We then carry out the Fourier transform to obtain Eq. (<ref>):

∫ P_μ(t) exp(iωt) dt = α_μ,ν(ω) ∫ E_ν(t) exp(iωt) dt.

We thus obtain

α_μ,ν(ω) = ∫ P_μ(t) exp(iωt) dt / ∫ E_ν(t) exp(iωt) dt.

In principle, E_ν(t) can have an arbitrary shape in time. In practice, however, it is better to choose the Dirac delta function, E^δ_ν(t) = E_ν0 δ(t), or the Heaviside step function, E^θ_ν(t) = E_ν0 [1 - θ(t)], to include components E_ν(ω) at all ω, since we have

E^θ_ν(ω) = ∫ E_ν0 [1 - θ(t)] exp(iωt) dt = E^0_ν/(iω).

Here we choose the latter form:

E^θ_ν(t) = E^0_ν [1 - θ(t)] = E^0_ν for t ≤ 0, and 0 for t > 0,

which leads to

α_μ,ν(ω) = (iω/E^0_ν) ∫ P_μ(t) exp(iωt) dt.

Importantly, Im{α_μ,μ(ω)} characterizes the optical absorbance at ω along the μ direction.
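The extraction of α_μ,ν(ω) from the recorded dipole can be sketched as below (a hedged illustration: the damping factor, frequency grid, and unit handling are our choices; only the iω/E^0 prefactor comes from the equation above):

```python
import numpy as np

def im_alpha(t, p_t, e0, hbar=0.6582, damping=0.1, w_max=30.0, n_w=600):
    """Im{alpha(w)} from alpha(w) = (i w / E0) * int P(t) e^{i w t} dt;
    energies in eV, t in fs; exponential damping broadens the
    finite-time spectrum."""
    dt = t[1] - t[0]
    w_grid = np.linspace(0.0, w_max, n_w)
    p_d = p_t * np.exp(-damping * t)
    alpha = np.array([1j * (w / hbar) / e0 *
                      np.sum(p_d * np.exp(1j * (w / hbar) * t)) * dt
                      for w in w_grid])
    return w_grid, alpha.imag
```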
We calculate the imaginary part of the dielectric function along the out-of-plane z direction of graphene, Im{α_z,z(ω)}. As shown in Fig. <ref>, Im{α_z,z(ω)} remains almost unchanged as E^0 increases from 0.05 to 0.5 V/Å, indicating that linear response theory is appropriate in this range of light illumination. The absorption peaks are located at relatively high energies (> 20 eV). The first absorption peak is located at 21.93 eV. We choose this photon energy to simulate the resonant excitation of graphene in the perpendicular direction.

We demonstrate the excitation dynamics of graphene at the resonant light frequency ω_r = 21.93 eV, and compare it with the case of a non-resonant light frequency ω_nr = 2.0 eV. We characterize the overall excitation by tracking the number of excited electrons, as well as the total energy change during the excitation process, as a function of time. The number of excited electrons, n(t), is calculated as

n(t) = ∑_unocc 𝓆_n𝐤(t),

where 𝓆_n𝐤(t) is obtained from Eq. (<ref>) and unocc denotes the unoccupied TDKS states. As shown in Fig. <ref>, different behaviors are observed for the two excitation conditions. The excited electrons n(t) and excitation energy E_ex(t) increase at ω_r, while no response is observed at ω_nr. The same results are obtained at other non-resonant light frequencies of 1.0, 2.0, and 4.0 eV. This verifies that the calculated Im{α_z,z(ω)} characterizes well the selectivity in optical absorption: only light with the right photon energy ω, at which Im{α_z,z(ω)} peaks, is strongly absorbed.

We discuss the resonant case here. In general, n(t) follows the shape of the laser pulse, with two special features. First, the time variation of n(t) has a 1.4 fs delay with respect to the laser field. This delay represents the intrinsic response time of graphene to the laser field, namely, the time needed for light absorption and electronic transitions. Second, n(t) decreases but does not vanish after the end of the light pulse. Thus, we propose that two kinds of excitation processes exist: one produces transient excited electrons, which quickly vanish after the laser pulse is off; the other produces residual excited electrons, which live relatively longer. The residual n(t) would decrease once electron-electron and, further, electron-phonon scatterings occur on the time scale of 100 fs, and this decrease is thus not observed in our short-time simulation (< 20 fs). We note that the dependence on history is absent in calculations with adiabatic exchange-correlation functionals, which leads to less accurate predictions of the lifetimes of excited states and of the ionic forces on long time scales.

To verify our assumption about the existence of two kinds of excitation processes, we further resolve the excitation with 𝐤-point resolution. We choose six snapshots of 𝓆_n𝐤(t), defined in Eq. (<ref>), as shown in Fig. <ref>. At t = 2.0 fs, in the absence of the laser pulse, no excitation is observed at any 𝐤 point. At t = 4.0 fs, the excitation is still negligible, although the laser field has just been turned on, due to the delay in the electronic response discussed above. At the peak time of the laser pulse, t = 7.0 fs, 𝓆_n𝐤(t) shows a significant distribution over many 𝐤 points. We mark the dominant excitation mode as L, which involves the bonding π and antibonding π bands. As t increases from 7 fs to 12 fs, the L-mode excitation rapidly decreases. In contrast, two new modes (labeled K_1 and K_2 by their locations in the reciprocal space) grow and become dominant. The K_1 and K_2 modes persist within 20 fs, while the L mode gradually vanishes. Thus, with the assistance of the newly developed 𝐤-resolved algorithm, we are able to distinguish these two kinds of excitation processes: the L mode produces the transient excited electrons, while the K_1 and K_2 modes produce the residual excited electrons.

Although K_1 and K_2 are both long-lived excitations, their time dependence is quite different. We plot 𝓆_n𝐤(t) as a function of t at the three 𝐤 points Γ, K_1 and K_2, as shown in Fig. <ref>. For the L mode (represented by photoexcitation at the Γ point), a clear transient character is demonstrated: the excitation exists only when the laser field is present, consistent with the observations in Fig. <ref>. However, for the K_1 and K_2 modes, new differences are observed. The excited-electron population of the K_1 mode increases monotonically, while 𝓆_n𝐤(t) at K_2 oscillates with a periodicity of T_K_2 ∼ 5 fs. These different behaviors are due to the different excitation energies of the three modes, originating from the different band structures at the different 𝐤 points. For instance, the oscillation of the K_2 mode is analogous to beating,

𝓆_nK_2(t) = A_0 cos((ω_K_2 - ω_r)/2 t) cos((ω_K_2 + ω_r)/2 t).
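A quick numerical check of this beating form is sketched below (the ħ conversion and time grid are our choices; the mode energies are those quoted in the next paragraph):

```python
import numpy as np

HBAR = 0.6582  # eV * fs

def beat_population(t, w_gap, w_laser, A0=1.0):
    """q(t) = A0 cos((w_gap - w_laser) t / 2) cos((w_gap + w_laser) t / 2),
    with energies in eV and time in fs, converted via hbar."""
    return (A0 * np.cos((w_gap - w_laser) * t / (2 * HBAR))
               * np.cos((w_gap + w_laser) * t / (2 * HBAR)))

t = np.linspace(0.0, 20.0, 2000)            # fs
q_K2 = beat_population(t, 22.75, 21.93)     # K_2-mode energies from the text
```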
For the K_2 mode, ω_K_2 = 22.75 eV is the energy difference between the two electronic bands involved in the optical transition at K_2 (initial and final states), and ω_2 = ω_r = 21.93 eV is the driving photon energy. The beat period T_b = 4π/(ω_1 - ω_2) = 5.07 fs, which is close to the observed oscillation periodicity T_K_2. Thus, the oscillation of the K_2 mode is the beat formed by the intrinsic band energy difference and the driving laser frequency. In contrast, the K_1 mode excitation involves very close energies, ω_K_1 = 21.59 eV and ω_r = 21.93 eV, so only half a period of the beat (T_K_1 = 12.4 fs) is observed in our simulation. For the L mode, the excitation energy is 19.32 eV, far below ω_r; a non-resonant interference shows up instead of beating. The rich photoexcitation phenomena discussed above and the associated complex dynamical behaviors highlight the need for efficient rt-TDDFT algorithms with momentum resolution. By introducing a new degree of freedom in the reciprocal space, the 𝐤-resolved dynamics labels the distinct excitation processes as well as the final distribution of excited states in the Brillouin zone after the incidence of the laser pulses.

§.§ In-plane excitation

For a laser pulse with its field polarization lying parallel to the atomic plane of graphene (i.e., at normal incidence), adding a vacuum layer along the laser polarization direction is not possible for a periodically extended system such as graphene. Therefore, we adopt the vector potential approach to simulate the in-plane laser-graphene interaction. The graphene sheet is illuminated with a linearly polarized laser pulse, as shown in Fig. <ref>. We note that in-plane excitation is well described by Fermi's golden rule: only the bands with an energy gap ΔE_g(𝐤) equal to the photon energy ω will be excited. As a result, an in-plane polarized laser excites electrons near the Dirac point for photon energies ≤ 5 eV; see Fig. <ref>. The momentum-resolved simulation distinguishes the photoexcitation induced by laser pulses with different photon energies ω. Here, we use four different wavelengths for the laser pulse, λ = 1200 nm, 600 nm, 400 nm, and 300 nm, corresponding to photon energies ω = 1.03, 2.06, 3.10, and 4.13 eV, respectively, to excite graphene in the in-plane direction. For simplicity, the laser field is polarized perpendicular to the C-C bond of the graphene lattice (referred to as the y direction). The momentum-resolved excitation patterns in the reciprocal space are shown in Fig. <ref>(a), with the corresponding band energy differences ΔE_g(𝐤) shown in Fig. <ref>(b). It is clear that only the 𝐤 points with ΔE_g(𝐤) = ω are excited. This agreement justifies the validity of the vector gauge used in the current TDDFT implementation. Furthermore, we note that the presence of a strong laser field breaks the six-fold rotational symmetry of the graphene lattice. For instance, with ω = 4.13 eV, photoexcitation at the two M' points is absent, while excitations at the other symmetric M points are observed. This symmetry breaking is caused by the presence of the linearly polarized laser field along the y direction. It can be explained with a two-band model of graphene (see the Appendix). The excellent agreement of the excitation outcome between the model Hamiltonian and the first-principles quantum dynamics simulations justifies the validity of our rt-TDDFT algorithm with a vector gauge field.
We therefore expect that it is readily applicable to investigate the quantum dynamics of a variety of electronic phases, such as charge/spin density waves, Mott insulators, valley electronics, and electronic melting, in two-dimensional materials and conventional semiconductors.

To demonstrate the general applicability of the present approach, we tackle photoexcitation-induced electron dynamics in a complex material. Layered transition-metal dichalcogenides such as 1T-TaS_2 have been widely studied in the literature to understand charge density wave (CDW) physics in real materials; the structure is shown in Fig. <ref>(a). 1T-TaS_2 is a typical quasi-two-dimensional CDW material with a pristine lattice constant of 3.36 Å in the undistorted 1T phase. In the ground state, the lattice undergoes a structural reconstruction, forming a √(13)×√(13) superstructure with star-of-David patterns. Laser-induced phase dynamics in 1T-TaS_2 has been investigated in recent experiments, where its response to ultrashort laser pulses plays a critical role. Here we study the carrier distribution in 1T-TaS_2 upon ultrafast laser excitation. As shown in Fig. <ref>(b), the excitation energy oscillates strongly with the field of the laser pulse. The excitation energy deposited by the laser pulse is ∼12 eV/cell after laser illumination with a photon energy of ħω = 1.55 eV and a pulse width of 8 fs. The carrier distribution at 20 fs after the passing of the laser pulse is shown in Fig. <ref>(c). The majority of excited electrons and holes are located at energies ranging from -2 to 2 eV around the Fermi level. This indicates that the photoexcitation mainly consists of single-photon processes, together with a minor fraction of two-photon processes (with excited electrons located at ∼3 eV and holes at -3 eV).

§ CONCLUSIONS

In conclusion, we have developed 𝐤-resolved rt-TDDFT algorithms using an efficient numerical atomic basis. This enables large-scale rt-TDDFT simulations of extended systems, including solids, interfaces, and two-dimensional materials, with a rather small unit cell, significantly reducing the heavy computational cost typical of rt-TDDFT simulations. Consequently, 𝐤-resolved excitation dynamics in periodic crystalline materials can be observed. The key advantages of this unique approach include:

i) The 𝐤-resolved real-time evolution algorithm introduces the important 𝐤-space resolution and a new degree of freedom, which is essential to describe key quantities and important physics in photoexcited condensed matter materials. The use of many 𝐤 points with a rather small unit cell also significantly improves the computational efficiency of rt-TDDFT calculations of photoexcitation in solids.

ii) Different from approaches using real-space grids and all-electron full-potential linearized augmented plane waves, the adoption of the numerical atomic basis in the present implementation reduces the number of required basis functions to one-hundredth of its original value, making rt-TDDFT computations of realistic large systems (comprising ∼500 atoms and lasting for ∼1000 fs) plausible. In addition, with a relatively small real-space cutoff for NAOs, order-N linear scaling with respect to system size can be achieved.

iii) Both the electronic and ionic degrees of freedom are evolved; therefore, complete information on the electronic wavefunctions and ionic movements during the real-time evolution can be provided for simulations of complex materials and rich phenomena far from equilibrium.
When applied to study the photoexcitation dynamics of a prototypical model material, graphene, the 𝐤-resolved algorithm enables the observation of 𝐤-selective excitation modes. Three distinct modes are excited, located at different 𝐤. In-plane excitation of the Dirac electrons in graphene can be understood by assuming an effective vector field of the laser field, taking into account the angular dependence of the optical transition matrix elements. This kind of 𝐤-dependent electronic dynamics is ubiquitous in solids. Thus, the 𝐤-resolved rt-TDDFT algorithm is an important development for investigating ultrafast photoexcitation dynamics and electron-electron scattering, and is expected to be widely used in the future. § ACKNOWLEDGEMENT We acknowledge partial financial support from MOST (Grant Nos. 2016YFA0300902 and 2015CB921001), NSFC (Grant Nos. 11774396, 11474328, and 91850120), and CAS (Grant No. XDB07030100). § APPENDIX: THE TWO-BAND MODEL OF GRAPHENE The ground-state Hamiltonian of the two-band model of graphene reads H_0(k_x, k_y) = v_F(k_x σ_x + k_y σ_y), where (k_x, k_y) are the 𝐤 coordinates, σ_x and σ_y are Pauli matrices, and v_F is the Fermi velocity. We set v_F = 1 eV·Bohr for simplicity, so that with k_x and k_y in Bohr^-1 the energy unit is eV. The eigenvalues and eigenvectors are E_0 = -√(k_x^2 + k_y^2), ϕ_0 = (1/√(2)) ( -1, (k_x + i k_y)/√(k_x^2 + k_y^2) )^T, and E_1 = √(k_x^2 + k_y^2), ϕ_1 = (1/√(2)) ( 1, (k_x + i k_y)/√(k_x^2 + k_y^2) )^T. The initial-state wavefunction is the ground state, ψ(t=0) = ϕ_0. A vector field polarized along y is introduced as H'(t) = A(t) σ_y, where A(t) is the vector gauge field, so that the time-dependent Hamiltonian is H(t) = H_0 + H'(t). The wavefunction at time t is obtained from the time-dependent Schrödinger equation, i ∂_t |ψ(t)⟩ = H(t)|ψ(t)⟩ with |ψ(0)⟩ = |ϕ_0⟩, and can be expanded in the |ϕ_0⟩, |ϕ_1⟩ basis as |ψ(t)⟩ = c_0(t) |ϕ_0⟩ + c_1(t) |ϕ_1⟩, where c_i(t) = ⟨ϕ_i | ψ(t)⟩ are the time-dependent coefficients. All equations are solved numerically with the package of <cit.>. We can reproduce the symmetry breaking in the distribution of excited electrons in 𝐤 space induced by the linearly polarized laser. We analyze the excited-state population |c_1(t)|^2 with k_x = sinθ, k_y = cosθ, under a vector field of amplitude A = 0.2 Bohr^-1, where θ is the angle between 𝐤 and 𝐀. The energy difference is then ΔE_g(𝐤) = 2.0 eV, and these 𝐤 points are only excited with ω = 2.0 eV, consistent with the results from TDDFT and Fermi's golden rule. To explain the origin of the symmetry breaking, the excited-state populations at different 𝐤 points at the end of the laser pulse, |c_1(t=50 fs)|^2, are shown in Fig. <ref>(a). They suggest that the effect of the linearly polarized laser at a point 𝐤 is not solely characterized by the 𝐀 field, but is also related to the angle θ between 𝐤 and 𝐀. With θ = 0 and π, i.e. 𝐤 parallel/anti-parallel to the 𝐀 field, the excitation is fully suppressed, while the excitation is maximal at θ = π/2 and 3π/2. An effective field A_eff = A sinθ, always perpendicular to the vector 𝐤, is thus introduced to induce electronic transitions at 𝐤 = (k_x = sinθ, k_y = cosθ), as shown in Fig. <ref>(b). This explains the origin of the symmetry breaking in the TDDFT simulations (Fig. <ref>). The excitations at the 𝐤 points are the result of the combined effects of the energy match and the angle θ between 𝐤 - 𝐊 and the 𝐀 field, where 𝐊 is the coordinate of the adjacent Dirac point. Since the 𝐀 field is along y, sinθ = 0 for all the 𝐤 points with 𝐤 - 𝐊 parallel to the polarization direction. Thus, there is no effective field to induce photoexcitation at the two M' points.
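The driven two-band dynamics above is simple enough to integrate directly. Below is a minimal, self-contained sketch using a plain fixed-step Runge-Kutta propagator; the Gaussian pulse envelope (centre and width) is an illustrative assumption, while v_F = 1 eV·Bohr, A = 0.2 Bohr^-1, ω = 2.0 eV and |𝐤| = 1 Bohr^-1 follow the text. It reproduces the sinθ selectivity of the excited-state population:

```python
# Sketch: RK4 integration of the driven two-band model, i d|psi>/dt = H(t)|psi>/hbar,
# H(t) = vF (kx sx + ky sy) + A(t) sy, with |k| = 1 Bohr^-1 so the gap is 2 eV.
import numpy as np

HBAR = 0.6582  # eV * fs
SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)

def excited_population(theta, a0=0.2, omega=2.0, t_end=50.0, dt=0.005):
    kx, ky = np.sin(theta), np.cos(theta)   # theta = angle between k and A || y
    h0 = kx * SX + ky * SY                  # vF = 1 eV * Bohr
    _, v = np.linalg.eigh(h0)
    phi0, phi1 = v[:, 0], v[:, 1]           # ground and excited eigenstates

    def hamiltonian(t):
        # Gaussian envelope (centre t_end/2, width 8 fs) is an assumed pulse shape.
        a = a0 * np.sin(omega * t / HBAR) * np.exp(-((t - t_end / 2) / 8.0) ** 2)
        return h0 + a * SY

    def deriv(t, psi):
        return -1j / HBAR * (hamiltonian(t) @ psi)

    psi, t = phi0.astype(complex), 0.0
    while t < t_end:                        # fixed-step fourth-order Runge-Kutta
        k1 = deriv(t, psi)
        k2 = deriv(t + dt / 2, psi + dt / 2 * k1)
        k3 = deriv(t + dt / 2, psi + dt / 2 * k2)
        k4 = deriv(t + dt, psi + dt * k3)
        psi = psi + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
    return abs(np.vdot(phi1, psi)) ** 2     # |c1(t_end)|^2

for theta in np.linspace(0.0, np.pi, 7):
    print(f"theta = {theta:4.2f} rad   |c1|^2 = {excited_population(theta):.4f}")
```

The population vanishes at θ = 0 and π and peaks near θ = π/2, consistent with the A_eff = A sinθ picture.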
§ REFERENCES
F. Krausz and M. Ivanov, Rev. Mod. Phys. 81, 163 (2009).
S. Y. Kruchinin, F. Krausz, and V. S. Yakovlev, Rev. Mod. Phys. 90, 021002 (2018).
C. Pellegrini, A. Marinelli, and S. Reiche, Rev. Mod. Phys. 88, 015006 (2016).
J. M. Soler, E. Artacho, J. D. Gale, A. García, J. Junquera, P. Ordejón, and D. Sánchez-Portal, J. Phys. Condens. Matter 14, 2745 (2002).
P. Ordejón, E. Artacho, and J. M. Soler, Phys. Rev. B 53, R10441 (1996).
T. Ozaki, Phys. Rev. B 67, 155108 (2003).
A. Tsolakidis, D. Sánchez-Portal, and R. M. Martin, Phys. Rev. B 66, 235416 (2002).
X. Li, S. M. Smith, A. N. Markevitch, D. A. Romanov, R. J. Levis, and H. B. Schlegel, Phys. Chem. Chem. Phys. 7, 233 (2005).
C. M. Isborn, X. Li, and J. C. Tully, J. Chem. Phys. 126, 134307 (2007).
T. Yamamoto, T. Noguchi, and K. Watanabe, Phys. Rev. B 74, 121409 (2006).
X. Qian, J. Li, X. Lin, and S. Yip, Phys. Rev. B 73, 035408 (2006).
X.-M. Tong and S.-I. Chu, Phys. Rev. A 64, 013417 (2001).
X.-M. Tong and S.-I. Chu, Phys. Rev. A 57, 452 (1998).
X.-M. Tong and S.-I. Chu, Chem. Phys. 217, 119 (1997).
J. Heslar, J. Carrera, D. Telnov, and S. I. Chu, Int. J. Quantum Chem. 107, 3159 (2007).
K. Nobusada and K. Yabana, Phys. Rev. A 70, 043411 (2004).
R. A. Ganeev, M. Suzuki, S. Yoneya, and H. Kuroda, J. Appl. Phys. 117, 023114 (2015).
K. Lopata, B. E. Van Kuiken, M. Khalil, and N. Govind, J. Chem. Theory Comput. 8, 3284 (2012).
R. G. Fernando, M. C. Balhoff, and K. Lopata, J. Chem. Theory Comput. 11, 646 (2015).
S. Tussupbayev, N. Govind, K. Lopata, and C. J. Cramer, J. Chem. Theory Comput. 11, 1102 (2015).
K. Lopata and N. Govind, J. Chem. Theory Comput. 9, 4939 (2013).
S. Raghunathan and M. Nest, J. Chem. Theory Comput. 8, 806 (2012).
D. Williams-Young, J. J. Goings, and X. Li, J. Chem. Theory Comput. (2016), doi: 10.1021/acs.jctc.6b00693.
A. Bruner, D. LaMaster, and K. Lopata, J. Chem. Theory Comput. 12, 3741 (2016).
M. R. Provorse, B. F. Habenicht, and C. M. Isborn, J. Chem. Theory Comput. 11, 4791 (2015).
S. A. Fischer, C. J. Cramer, and N. Govind, J. Chem. Theory Comput. 11, 4294 (2015).
M. Repisky, L. Konecny, M. Kadek, S. Komorovsky, O. L. Malkin, V. G. Malkin, and K. Ruud, J. Chem. Theory Comput. 11, 980 (2015).
K. Lopata and N. Govind, J. Chem. Theory Comput. 7, 1344 (2011).
T. S. Nguyen and J. Parkhill, J. Chem. Theory Comput. 11, 2918 (2015).
J. Zheng, Y. Xie, S. Jiang, and Z. Lan, J. Phys. Chem. C 120, 1375 (2016).
G. Donati, D. B. Lingerfelt, A. Petrone, N. Rega, and X. Li, J. Phys. Chem. A 120, 7255 (2016).
A. Petrone, D. B. Lingerfelt, N. Rega, and X. Li, Phys. Chem. Chem. Phys. 16, 24457 (2014).
C. T. Chapman, W. Liang, and X. Li, J. Phys. Chem. Lett. 2, 1189 (2011).
G. Donati, A. Wildman, S. Caprasecca, D. B. Lingerfelt, F. Lipparini, B. Mennucci, and X. Li, J. Phys. Chem. Lett. 8, 5283 (2017).
F. Ding, D. B. Lingerfelt, B. Mennucci, and X. Li, J. Chem. Phys. 142, 034120 (2015).
C. T. Chapman, W. Liang, and X. Li, J. Phys. Chem. A 117, 2687 (2013).
P. D. Nguyen, F. Ding, S. A. Fischer, W. Liang, and X. Li, J. Phys. Chem. Lett. 3, 2898 (2012).
W. Liang, C. T. Chapman, F. Ding, and X. Li, J. Phys. Chem. A 116, 1884 (2012).
J. M. Kasper, P. J. Lestrange, T. F. Stetina, and X. Li, J. Chem. Theory Comput. 14, 1998 (2018).
J. J. Goings, J. M. Kasper, F. Egidi, S. Sun, and X. Li, J. Chem. Phys. 145, 104107 (2016).
J. Haruyama, C. Hu, and K. Watanabe, Phys. Rev. A 85, 062511 (2012).
J. Haruyama, T. Suzuki, C. Hu, and K. Watanabe, Phys. Rev. A 85, 012516 (2012).
C. Hu, T. Tsukagoshi, O. Sugino, and K. Watanabe, Phys. Rev. B 87, 035421 (2013).
E. P. Silaeva, K. Uchida, Y. Suzuki, and K. Watanabe, Phys. Rev. B 92, 155401 (2015).
L. Yan, F. Wang, and S. Meng, ACS Nano 10, 5452 (2016).
F. Ding, E. B. Guidez, C. M. Aikens, and X. Li, J. Chem. Phys. 140, 244705 (2014).
G. Donati, D. B. Lingerfelt, C. M. Aikens, and X. Li, J. Phys. Chem. C 121, 15368 (2017).
G. Donati, D. B. Lingerfelt, C. M. Aikens, and X. Li, J. Phys. Chem. C 122, 10621 (2018).
A. Manjavacas, J. G. Liu, V. Kulkarni, and P. Nordlander, ACS Nano 8, 7630 (2014).
M. Barbry, P. Koval, F. Marchesin, R. Esteban, A. G. Borisov, J. Aizpurua, and D. Sánchez-Portal, Nano Lett. 15, 3410 (2015).
E. Townsend and G. W. Bryant, Nano Lett. 12, 429 (2012).
J. Ma, Z. Wang, and L.-W. Wang, Nat. Commun. 6, 10107 (2015).
J. Yan, K. W. Jacobsen, and K. S. Thygesen, Phys. Rev. B 84, 235430 (2011).
P. Song, S. Meng, P. Nordlander, and S. Gao, Phys. Rev. B 86, 121410 (2012).
J. Yan, Z. Yuan, and S. Gao, Phys. Rev. Lett. 98, 216602 (2007).
Y. Gao and Z. Yuan, Solid State Commun. 151, 1009 (2011).
S. Gao, J. Chem. Phys. 142, 234701 (2015).
P. Song, P. Nordlander, and S. Gao, J. Chem. Phys. 134, 074701 (2011).
Y. Miyamoto, A. Rubio, and D. Tománek, Phys. Rev. Lett. 97, 126104 (2006).
Y. Miyamoto, Phys. Status Solidi A 204, 1925 (2007).
A. V. Krasheninnikov, Y. Miyamoto, and D. Tománek, Phys. Rev. Lett. 99, 016104 (2007).
Y. Miyamoto, Appl. Phys. Lett. 91, 113120 (2007).
H. Zhang and Y. Miyamoto, Appl. Phys. Lett. 95, 053109 (2009).
Y. Miyamoto, H. Zhang, and D. Tománek, Phys. Rev. Lett. 104, 208302 (2010).
H. Zhang and Y. Miyamoto, Phys. Rev. B 85, 033402 (2012).
H. Zhang, Y. Miyamoto, and A. Rubio, Phys. Rev. B 85, 201409 (2012).
Y. Miyamoto, T. Miyazaki, D. Takeuchi, H. Okushi, and S. Yamasaki, Appl. Phys. Lett. 103, 123104 (2013).
S. Meng and E. Kaxiras, Nano Lett. 10, 1238 (2010).
S. Meng, J. Ren, and E. Kaxiras, Nano Lett. 8, 3266 (2008).
W. Ma, Y. Jiao, and S. Meng, J. Phys. Chem. C 118, 16447 (2014).
W. Ma, Y. Jiao, and S. Meng, Phys. Chem. Chem. Phys. 15, 17187 (2013).
Y. Jiao, Z. Ding, and S. Meng, Phys. Chem. Chem. Phys. 13, 13196 (2011).
Y. Jiao, W. Ma, and S. Meng, Chem. Phys. Lett. 586, 97 (2013).
J. Zhang, H. Hong, C. Lian, W. Ma, X. Xu, X. Zhou, H. Fu, K. Liu, and S. Meng, Adv. Sci. 1700086 (2017).
C. Lian, S. B. Zhang, and S. Meng, Phys. Rev. B 94, 184310 (2016).
G. F. Bertsch, J.-I. Iwata, A. Rubio, and K. Yabana, Phys. Rev. B 62, 7998 (2000).
M. Marques, Comput. Phys. Commun. 151, 60 (2003).
A. Castro, H. Appel, M. Oliveira, C. A. Rozzi, X. Andrade, F. Lorenzen, M. A. L. Marques, E. K. U. Gross, and A. Rubio, Phys. Status Solidi B 243, 2465 (2006).
X. Andrade, D. Strubbe, U. De Giovannini, A. H. Larsen, M. J. T. Oliveira, J. Alberdi-Rodriguez, A. Varas, I. Theophilou, N. Helbig, M. J. Verstraete, L. Stella, F. Nogueira, A. Aspuru-Guzik, A. Castro, M. A. L. Marques, and A. Rubio, Phys. Chem. Chem. Phys. 17, 31371 (2015).
S. A. Sato, K. Yabana, Y. Shinohara, T. Otobe, K.-M. Lee, and G. F. Bertsch, Phys. Rev. B 92, 205413 (2015).
S. A. Sato, Y. Taniguchi, Y. Shinohara, and K. Yabana, J. Chem. Phys. 143, 224116 (2015).
T. Otobe, Y. Shinohara, S. A. Sato, and K. Yabana, Phys. Rev. B 93, 045124 (2016).
K. Yabana, T. Sugiyama, Y. Shinohara, T. Otobe, and G. Bertsch, Phys. Rev. B 85, 045134 (2012).
Y. Shinohara, Y. Kawashita, J.-I. Iwata, K. Yabana, T. Otobe, and G. F. Bertsch, J. Phys. Condens. Matter 22, 384212 (2010).
Y. Shinohara, K. Yabana, Y. Kawashita, J.-I. Iwata, T. Otobe, and G. F. Bertsch, Phys. Rev. B 82, 155110 (2010).
T. Otobe, K. Yabana, and J.-I. Iwata, J. Phys. Condens. Matter 21, 064224 (2009).
T. Otobe, M. Yamagiwa, J.-I. Iwata, K. Yabana, T. Nakatsukasa, and G. F. Bertsch, Phys. Rev. B 77, 165104 (2008).
G. Wachter, C. Lemell, J. Burgdörfer, S. A. Sato, X.-M. Tong, and K. Yabana, Phys. Rev. Lett. 113, 087401 (2014).
K. Krieger, J. K. Dewhurst, P. Elliott, S. Sharma, and E. K. U. Gross, J. Chem. Theory Comput. (2015), doi: 10.1021/acs.jctc.5b00621.
P. Elliott, K. Krieger, J. K. Dewhurst, S. Sharma, and E. K. U. Gross, New J. Phys. 18, 013014 (2016).
A. Schleife, E. W. Draeger, Y. Kanai, and A. A. Correa, J. Chem. Phys. 137, 22A546 (2012).
D. C. Yost, Y. Yao, and Y. Kanai, Phys. Rev. B 96, 115134 (2017).
S. Meng and E. Kaxiras, J. Chem. Phys. 129, 054110 (2008).
K. Yabana and G. F. Bertsch, Phys. Rev. B 54, 4484 (1996).
J. P. Perdew and A. Zunger, Phys. Rev. B 23, 5048 (1981).
J. P. Perdew, K. Burke, and M. Ernzerhof, Phys. Rev. Lett. 77, 3865 (1996).
A. D. Becke, Phys. Rev. A 38, 3098 (1988).
C. Lee, W. Yang, and R. G. Parr, Phys. Rev. B 37, 785 (1988).
M. Dion, H. Rydberg, E. Schröder, D. C. Langreth, and B. I. Lundqvist, Phys. Rev. Lett. 92, 246401 (2004).
G. Román-Pérez and J. M. Soler, Phys. Rev. Lett. 103, 096102 (2009).
P. J. Lestrange, F. Egidi, and X. Li, J. Chem. Phys. 143, 234103 (2015).
K. Yabana, T. Nakatsukasa, J.-I. Iwata, and G. F. Bertsch, Phys. Status Solidi B 243, 1121 (2006).
J. J. Goings, P. J. Lestrange, and X. Li, WIREs Comput. Mol. Sci. 8, e1341 (2018).
E. Runge and E. K. U. Gross, Phys. Rev. Lett. 52, 997 (1984).
M. A. Marques, N. T. Maitra, F. M. Nogueira, E. Gross, and A. Rubio, Lecture Notes in Physics, Vol. 837 (Springer, Berlin, Heidelberg, 2012).
J. Ren, E. Kaxiras, and S. Meng, Mol. Phys. 108, 1829 (2010).
F. Ding, J. J. Goings, H. Liu, D. B. Lingerfelt, and X. Li, J. Chem. Phys. 143, 114105 (2015).
H. B. Schlegel, J. M. Millam, S. S. Iyengar, G. A. Voth, A. D. Daniels, G. E. Scuseria, and M. J. Frisch, J. Chem. Phys. 114, 9758 (2001).
X. Li, J. C. Tully, H. B. Schlegel, and M. J. Frisch, J. Chem. Phys. 123, 084106 (2005).
J. C. Tully, J. Chem. Phys. 93, 1061 (1990).
P. V. Parandekar and J. C. Tully, J. Chem. Theory Comput. 2, 229 (2006).
M. D. Hack and D. G. Truhlar, J. Phys. Chem. A 104, 7917 (2000).
N. Rohringer, S. Peter, and J. Burgdörfer, Phys. Rev. A 74, 042512 (2006).
H. J. Monkhorst and J. D. Pack, Phys. Rev. B 13, 5188 (1976).
N. Troullier and J. L. Martins, Phys. Rev. B 43, 8861 (1991).
C. Lian, M. Guan, S. Hu, J. Zhang, and S. Meng, Adv. Theory Simul. 1800055 (2018).
N. T. Maitra, K. Burke, and C. Woodward, Phys. Rev. Lett. 89, 023002 (2002).
P. Elliott, J. I. Fuks, A. Rubio, and N. T. Maitra, Phys. Rev. Lett. 109, 266404 (2012).
N. T. Maitra, Int. J. Quantum Chem. 102, 573, doi: 10.1002/qua.20465.
C. A. Ullrich, J. Chem. Phys. 125, 234108 (2006).
F. Agostini, A. Abedi, Y. Suzuki, S. K. Min, N. T. Maitra, and E. K. U. Gross, J. Chem. Phys. 142, 084303 (2015).
J. Johansson, P. Nation, and F. Nori, Comput. Phys. Commun. 183, 1760 (2012).
J. Johansson, P. Nation, and F. Nori, Comput. Phys. Commun. 184, 1234 (2013).
http://arxiv.org/abs/1702.08163v4
{ "authors": [ "Chao Lian", "Shi-Qi Hu", "Meng-Xue Guan", "Sheng Meng" ], "categories": [ "cond-mat.mtrl-sci", "cond-mat.mes-hall", "physics.optics" ], "primary_category": "cond-mat.mtrl-sci", "published": "20170227071334", "title": "Momentum-resolved TDDFT algorithm in atomic basis for real time tracking of electronic excitation" }
Effect of nuclear matter incompressibility on the 16O+208Pb system O. N. Ghodsi (Email: o.nghodsi@umz.ac.ir) and F. Torabi, Department of Physics, Faculty of Science, University of Mazandaran, P.O. Box 47415-416, Babolsar, Iran ============================================================================================================================================================== To analyze the property of nuclear matter in the 16O+208Pb collision system, the internuclear potential of the fusion reaction is calculated by using Skyrme forces associated with an extensive range of the nuclear matter incompressibility K in the semiclassical energy-density formalism. Comparison of the experimental fusion cross sections and those obtained by using potentials derived from different forces with various K values shows that the incompressibility of nuclear matter changes during the fusion process at different bombarding energies. The results indicate that, as the energy increases, the nuclear matter becomes more incompressible. I. Introduction Fusion in heavy-ion reactions has been one of the most extensively studied topics in nuclear physics over the last decades <cit.>. Various attempts have been made to explain this phenomenon by using a variety of theoretical models based on different assumptions. Taking into account the dynamical mechanism of the fusion process, the interaction potential between two nuclei can be determined by using dynamic approaches such as quantum molecular dynamics and time-dependent Hartree–Fock theory <cit.>. According to the frozen-density approximation, fusion reactions can also be analyzed by using static approaches such as the double-folding model and the energy-density formalism <cit.>. By employing different effective nucleon-nucleon interactions in these models and methods, a large number of heavy-ion fusion reactions have been investigated in theoretical low-energy nuclear physics. Among them, the 16O+208Pb system is a candidate that has been widely studied by using static and dynamic approaches <cit.>. Some studies have shown that analysis of the fusion cross-section data of this heavy-ion reaction can help understand the importance of different factors in calculations of the interaction potential, including the energy dependence of the barrier <cit.> and the incompressibility of nuclear matter <cit.>. Nuclear matter incompressibility (K) is a key component of the nuclear matter equation of state (EOS) and has been one of the interesting subjects in studies of heavy-ion fusion reactions. Different versions of the effective interactions resulting in different K values have been used to investigate the role of nuclear matter incompressibility in heavy-ion fusion processes <cit.>. The results obtained revealed that theoretical fusion data are sensitive to the value of K. Therefore, describing the heavy-ion reaction by using different effective interactions with different K values may allow exploration of variations in the incompressibility of nuclear matter during the fusion process at different bombarding energies. Accordingly, in the present study, we are motivated to examine this variation within the 16O+208Pb system. For this purpose, the interaction potential of the chosen system was calculated by using different Skyrme forces associated with K values ranging from 234 to 370 MeV in the semiclassical energy-density formalism.
With respect to each force, the neutron and proton densities obtained by the self-consistent quantum-mechanical Hartree–Fock–Bogoliubov (HFB) method were also employed in this formalism. Based on the best agreement achieved between the theoretical fusion cross sections obtained by the potentials derived from different forces and the experimental data, we have shown the variation in the nuclear matter incompressibility within the 16O+208Pb system at different bombarding energies. This paper is organized as follows: Section II introduces the Skyrme energy-density-functional model and describes the properties of the colliding nuclei based on the effective interactions employed in this model. Section III presents the calculations and results of analysis of the 16O+208Pb system by using different forces yielding various incompressibility values. Finally, Sec. IV draws the conclusions of this paper. II. Theoretical Formalism A. Semiclassical expression of the Skyrme energy-density functional In the energy-density-functional model, the nuclear potential between the interacting nuclei, as a function of the separation distance R, is given by V_N(R) = E_T(R) - (E_1 + E_2), E_T(R) = ∫ ℰ[ρ_1p(r⃗) + ρ_2p(r⃗-R⃗), ρ_1n(r⃗) + ρ_2n(r⃗-R⃗)] d^3r, E_1 = ∫ ℰ[ρ_1p(r⃗), ρ_1n(r⃗)] d^3r, E_2 = ∫ ℰ[ρ_2p(r⃗), ρ_2n(r⃗)] d^3r, where E_1 and E_2 denote the energies of the noninteracting nuclei and E_T(R) expresses the energy of the composite system. In these equations, the Skyrme energy density ℰ(r⃗) is defined as ℰ(r⃗) = (ħ^2/2m)τ + (1/2)t_0[(1 + x_0/2)ρ^2 - (x_0 + 1/2)(ρ_n^2 + ρ_p^2)] + (1/12)t_3 ρ^α[(1 + x_3/2)ρ^2 - (x_3 + 1/2)(ρ_n^2 + ρ_p^2)] + (1/4)[t_1(1 + x_1/2) + t_2(1 + x_2/2)](ρτ) - (1/4)[t_1(x_1 + 1/2) - t_2(x_2 + 1/2)](ρ_nτ_n + ρ_pτ_p) + (1/16)[3t_1(1 + x_1/2) - t_2(1 + x_2/2)](∇⃗ρ)^2 - (1/16)[3t_1(x_1 + 1/2) + t_2(x_2 + 1/2)][(∇⃗ρ_n)^2 + (∇⃗ρ_p)^2] + (1/2)W_0[J⃗·∇⃗ρ + J⃗_n·∇⃗ρ_n + J⃗_p·∇⃗ρ_p]. Here, t_0, t_1, t_2, t_3, x_0, x_1, x_2, x_3, α and W_0 are the Skyrme force parameters determined by fitting different properties of nuclei, m is the nucleon mass, and ρ = ρ_n + ρ_p, τ = τ_n + τ_p, and J⃗ = J⃗_n + J⃗_p are the nucleon, kinetic-energy, and spin-orbit densities, respectively. The kinetic-energy and spin-orbit densities are estimated in the semiclassical extended Thomas–Fermi (ETF) model. Taking into consideration the ħ^2 correction terms in this model, the functional form of the kinetic-energy density is given by (q = n or p) τ_q(r⃗) = (3/5)(3π^2)^2/3 ρ_q^5/3 + (1/36)(∇⃗ρ_q)^2/ρ_q + (1/3)Δρ_q + (1/6)∇⃗ρ_q·∇⃗f_q/f_q + (1/6)ρ_q Δf_q/f_q - (1/12)ρ_q(∇⃗f_q/f_q)^2 + (1/2)ρ_q(2m/ħ^2)^2 [ (W_0/2) ∇⃗(ρ + ρ_q)/f_q ]^2, where the effective-mass form factor f_q(r⃗) takes the following form: f_q(r⃗) = 1 + (2m/ħ^2)(1/4)[t_1(1 + x_1/2) + t_2(1 + x_2/2)]ρ(r⃗) - (2m/ħ^2)(1/4)[t_1(x_1 + 1/2) - t_2(x_2 + 1/2)]ρ_q(r⃗). Because spin is intrinsically a quantum-mechanical property with no direct classical counterpart, the expression for the spin-orbit density J⃗_q in the ETF model is J⃗_q(r⃗) = -(2m/ħ^2)(1/2)W_0 (1/f_q) ρ_q ∇⃗(ρ + ρ_q). By using these equations, the nuclear part of the interaction potential, V_N(R), is determined from knowledge of the density distributions of the projectile and target nuclei. Then, assuming that ρ^(i)_ch ≈ eρ^(i)_p, the Coulomb part is added to the calculations as V_C(R) = ∫ ρ^(1)_ch(r⃗_1) ρ^(2)_ch(r⃗_2) / |R⃗ + r⃗_2 - r⃗_1| d^3r_1 d^3r_2. B. Properties of the interacting nuclei To date, numerous parametrizations of the Skyrme effective interaction have been published and many of them have been applied in mean-field theories for a variety of purposes.
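To make the frozen-density prescription concrete, the following minimal sketch evaluates V_N(R) = E_T(R) - E_1 - E_2 on a cylindrical grid for two spherical two-parameter Fermi densities. Only the Thomas–Fermi kinetic term and the zero-range t_0/t_3 terms of ℰ are kept (the gradient, effective-mass, and spin-orbit terms are omitted), and the Skyrme parameters and density parameters below are placeholders, not those of the forces used in this work:

```python
# Minimal sketch of the frozen-density folding V_N(R) = E_T(R) - E_1 - E_2.
# Only the Thomas-Fermi kinetic term and the zero-range t0/t3 Skyrme terms are
# kept; T0, X0, T3, X3, ALPHA and the 2pF density parameters are placeholders.
import numpy as np

HB2_2M = 20.735  # hbar^2/(2m) in MeV fm^2
T0, X0, T3, X3, ALPHA = -1000.0, 0.5, 14000.0, 0.5, 1.0  # placeholder force

def fermi2p(r, rho0, r_half, a):
    """Two-parameter Fermi density profile (fm^-3)."""
    return rho0 / (1.0 + np.exp((r - r_half) / a))

def energy_density(rn, rp):
    """Truncated Skyrme energy density (MeV/fm^3)."""
    rho = rn + rp
    tau = 0.6 * (3.0 * np.pi**2) ** (2.0 / 3.0) * (rn ** (5.0 / 3.0) + rp ** (5.0 / 3.0))
    e0 = 0.5 * T0 * ((1 + X0 / 2) * rho**2 - (X0 + 0.5) * (rn**2 + rp**2))
    e3 = T3 / 12.0 * rho**ALPHA * ((1 + X3 / 2) * rho**2 - (X3 + 0.5) * (rn**2 + rp**2))
    return HB2_2M * tau + e0 + e3

# Cylindrical grid (s, z); both nuclei are centred on the z axis at -R/2, +R/2.
s = np.linspace(1e-3, 15.0, 240)
z = np.linspace(-25.0, 25.0, 480)
S, Z = np.meshgrid(s, z, indexing="ij")

def density(params, z0):
    return fermi2p(np.sqrt(S**2 + (Z - z0) ** 2), *params)

def integral(rn, rp):
    e = energy_density(rn, rp) * 2.0 * np.pi * S   # cylindrical volume element
    return np.trapz(np.trapz(e, z, axis=1), s)

o16_n = o16_p = (0.085, 2.6, 0.45)                 # illustrative (rho0, R_1/2, a)
pb_n, pb_p = (0.095, 6.8, 0.55), (0.065, 6.7, 0.45)

def v_nuclear(R):
    e_tot = integral(density(o16_n, -R / 2) + density(pb_n, R / 2),
                     density(o16_p, -R / 2) + density(pb_p, R / 2))
    e1 = integral(density(o16_n, 0.0), density(o16_p, 0.0))
    e2 = integral(density(pb_n, 0.0), density(pb_p, 0.0))
    return e_tot - e1 - e2

for R in (9.0, 11.0, 13.0):
    print(f"R = {R:4.1f} fm   V_N = {v_nuclear(R):8.2f} MeV")
```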
In the present study, some of the available effective interactions that result in an EOS with an extensive range of K values are employed to study the nuclear matter incompressibility in the 16O+208Pb system. The selected forces are SkSC4 <cit.>, Es <cit.>, SKXce <cit.>, E <cit.>, and SI <cit.>, with incompressibility values in the range between 234 and 370 MeV. Based on each force, the neutron and proton densities of the 16O and 208Pb nuclei were computed by using the microscopic HFB method, because many properties of finite nuclei can be described by this approximation. For instance, Fig. 1 shows the radial density distributions obtained from these calculations based on the SkSC4 and SI parameter sets. By using the density distributions calculated in the HFB approach, it was found that all the selected Skyrme forces can reproduce the experimental binding energies and root-mean-square charge radii of the chosen nuclei with relative deviations of less than 4.69% and 2.88%, respectively. Figure 2 shows the percentage relative deviations of the theoretical binding energies and root-mean-square charge radii from their corresponding experimental data for the SkSC4, Es, SKXce, E, and SI Skyrme forces. These effective forces, which can describe the ground-state properties of the 16O and 208Pb nuclei with reasonable accuracy, are applied to evaluate the nucleus-nucleus potential in the described energy-density-functional model. III. Calculations and Results To perform the calculations in the energy-density formalism, based on each of the selected Skyrme forces, the two-parameter Fermi density distributions were determined by using the parameters obtained from fitting the results of the HFB calculations. The calculated diffuseness parameters of the neutron- and proton-density distributions for the 16O and 208Pb nuclei are illustrated in Fig. 3 for the SkSC4, Es, SKXce, E, and SI Skyrme forces. Employing the determined densities, together with their corresponding Skyrme interactions, we evaluated the interaction potential of the 16O+208Pb system. The characteristics of the calculated fusion barriers, i.e., the barrier height and position, are displayed in Fig. 4 for the different Skyrme forces. The results clearly show that increasing the value of K increases the fusion barrier height and decreases the value of the barrier position. Also, from Figs. 3 and 4, and given that surface nucleons play a significant role in heavy-ion reactions, one finds that the use of smaller diffuseness parameters in the density distributions decreases the attraction energy and consequently increases the barrier height. By using the nucleus-nucleus potentials derived from the different Skyrme forces, we analyze here the fusion cross sections of the 16O+208Pb system in different energy ranges, i.e., below, near, and above the barrier. For this purpose, the cross-section data were calculated by using the CCFULL code <cit.>, taking into account the excitations of the 2^+ and 3^- states of the target and projectile nuclei. The parameters applied to describe the excitations of these low-lying states for the chosen nuclei were taken from Refs. <cit.>. The results of the calculations based on the potentials obtained from the different forces are shown in Fig. 5 in both logarithmic and linear scales. It can be seen that the theoretical results are clearly influenced by the incompressibility of the Skyrme forces.
The interaction potentials calculated from the forces with smaller incompressibility values precisely describe the experimental fusion cross sections <cit.> at low energies, but cannot explain the data at above-barrier energies. Furthermore, it is evident that the potentials obtained from the forces associated with higher incompressibility values can accurately reproduce the fusion cross sections at high energies; however, they cannot predict the data at subbarrier energies. To be more precise, based on this observation, it is found that the Skyrme forces associated with nuclear incompressibility values ∼234-248 MeV can reproduce the fusion cross sections of 16O+208Pb at energies below and near the barrier, the Skyrme force resulting in K = 270 MeV can explain the experimental data at energies in the vicinity of and slightly above the barrier, and the forces leading to K > 300 MeV can be used to predict the fusion cross sections at energies above the barrier and at higher energies. To demonstrate the importance of the density parameters in these calculations, the fusion cross sections of the chosen system were also computed by using the potentials derived from the different forces and the same sets of density parameters, which were obtained with the SkP Skyrme force <cit.> for the interacting nuclei. The calculated fusion cross sections are illustrated in Fig. 6. As one can observe, in this case the experimental and theoretical fusion cross sections are not in agreement, which clearly shows that the density parameters play a key role in reproducing the experimental fusion data and in examining the sensitivity of the fusion cross sections to the incompressibility value at different bombarding energies. In addition, to study the nuclear matter incompressibility in the 16O+208Pb system, the fusion barrier distribution, d^2(Eσ_fus)/dE^2, for this system was computed. Figure 7 shows the barrier distributions calculated by using the cross sections derived from the different Skyrme forces. The theoretical barrier distributions display behavior similar to that found in the prediction of the fusion cross sections: the experimental barrier distribution at high energies is better explained by the cross sections derived from the Skyrme forces yielding higher K values, whereas at low energies the agreement between the experimental and theoretical barrier distributions is achieved by using the data computed from the forces with smaller values of K. According to these results, one can identify the variation in the nuclear-matter incompressibility within the 16O+208Pb system at different energies. To illustrate this, based on the best agreement achieved between the calculated and experimental fusion cross sections at each energy, the predicted values of the nuclear incompressibility at different bombarding energies are displayed in Fig. 8. As seen, the incompressibility of the nuclear matter increases with increasing bombarding energy. At each energy, the corresponding temperature T of the compound nucleus, which is displayed on the top horizontal axis of this figure, was calculated by the following formula <cit.>: E^* = E_c.m. + Q_in = (1/a) A T^2 - T, where E^*, E_c.m., and Q_in are the excitation energy of the compound nucleus, the center-of-mass energy of the projectile nucleus, and the entrance-channel Q value, respectively.
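Two small post-processing steps appear in this analysis: the point-difference barrier distribution and the inversion of the temperature relation above. A minimal sketch follows; an even energy grid is assumed, the sample cross-section values are made up purely for illustration, and the parameter a is taken as 9, in line with the values quoted just after the sketch:

```python
# Sketch: barrier distribution d^2(E sigma)/dE^2 by three-point differences on
# an even energy grid, and the positive root of E* = (A/a) T^2 - T for T.
# The sample sigma(E) below is invented for illustration only.
import numpy as np

def barrier_distribution(e, sigma):
    """Second derivative of E*sigma; result is defined on e[1:-1] (MeV grid)."""
    f = e * sigma
    de = e[1] - e[0]
    return (f[2:] - 2.0 * f[1:-1] + f[:-2]) / de**2

def temperature(e_star, a_mass, a_level=9.0):
    """Solve (A/a) T^2 - T - E* = 0 for the positive root T (MeV)."""
    c = a_mass / a_level
    return (1.0 + np.sqrt(1.0 + 4.0 * c * e_star)) / (2.0 * c)

e = np.linspace(70.0, 90.0, 11)               # hypothetical energy grid (MeV)
sigma = 1.0 / (1.0 + np.exp(-(e - 78.0)))     # hypothetical sigma(E)
print(barrier_distribution(e, sigma))
print(temperature(e_star=30.0, a_mass=224))   # 16O+208Pb -> compound A = 224
```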
Moreover, in the temperature relation above, a = 9 or 10 for intermediate-mass or superheavy systems, respectively. It can be observed that, by increasing the bombarding energy, the temperature of the compound nucleus increases as well. Therefore, one can expect a variation in the mean field of the compound system and, consequently, in the property of the nuclear matter as the bombarding energy increases. Efficiency of the described method for other systems By using suitable Skyrme forces and their corresponding density distributions, the described method can also be applied to study the incompressibility of nuclear matter in other fusion reactions. To show the efficiency of this method for other systems, we briefly discuss the results of the theoretical fusion cross sections for the 40Ca+90Zr system. The potentials derived from the SkT4, SkT1*, SK255, and SK272 Skyrme forces <cit.>, which yield K values in the range between 235 and 272 MeV and can reasonably describe the properties of the interacting nuclei, were selected as the best choices to describe the fusion cross sections of the system at different bombarding energies. By using these potentials, the theoretical fusion cross sections of the 40Ca+90Zr system were computed with the CCFULL code. Figure 9 compares the theoretical results with the experimental data <cit.>. The agreement between the experimental and theoretical fusion cross sections derived from the forces with different incompressibility values shows that, as the bombarding energy increases, the nuclear matter becomes more incompressible. IV. Conclusions The present study examined the variation in the incompressibility of nuclear matter in the 16O+208Pb fusion reaction. To this end, the interaction potential of the system was calculated by using different Skyrme interactions with K values ranging from 234 to 370 MeV in the energy-density formalism. Analysis of the potentials indicated that the use of Skyrme forces with higher nuclear incompressibility values results in greater barrier heights whose corresponding positions are shifted to closer distances between the interacting nuclei. The fusion cross sections of the chosen system were computed by using the ion-ion potentials and the CCFULL code. The results revealed that the experimental cross sections at subbarrier energies can be accurately described by the potentials derived from the forces with smaller K values. On the other hand, the data at higher energies can be satisfactorily explained by the potentials obtained from the forces associated with higher K values. This trend suggests that an exact fit to fusion cross-section data in different energy ranges can be achieved by using forces with different incompressibility values. Based on the calculations made with the Skyrme energy-density formalism and the CCFULL code, one can conclude that nuclear matter during the fusion process changes from less-incompressible matter at low energies to more-incompressible matter at higher energies. In addition, it is worth mentioning that the applied method enables analysis of the property of nuclear matter in the fusion process at different bombarding energies based on a static model. References [1] C. H. Dasso, S. Landowne, and A. Winther, Nucl. Phys. A 405, 381 (1983). [2] T. Udagawa, B. T. Kim, and T. Tamura, Phys. Rev. C 32, 124 (1985). [3] N. Rowley, G. R. Satchler, and P. H. Stelson, Phys. Lett. B 254, 25 (1991). [4] W. Reisdorf, J. Phys. G 20, 1297 (1994). [5] V. Yu. Denisov and S. Hofmann, Phys. Rev. C 61, 034606 (2000). [6] C. H. Dasso and G. Pollarolo, Phys. Rev. C 68, 054604 (2003).
[7] C. L. Jiang, B. B. Back, H. Esbensen, R. V. F. Janssens, and K. E. Rehm, Phys. Rev. C 73, 014613 (2006). [8] Z. Q. Feng, G. M. Jin, and F. S. Zhang, Nucl. Phys. A 802, 91 (2008). [9] T. Ichikawa, K. Hagino, and A. Iwamoto, Phys. Rev. Lett. 103, 202701 (2009). [10] I. Dutt and R. K. Puri, Phys. Rev. C 81, 047601 (2010). [11] S. Ayik, B. Yilmaz, and D. Lacroix, Phys. Rev. C 81, 034605 (2010). [12] V. V. Sargsyan, G. G. Adamian, N. V. Antonenko, W. Scheid, and H. Q. Zhang, Phys. Rev. C 84, 064614 (2011). [13] K. Hagino and N. Takigawa, Prog. Theor. Phys. 128, 1061 (2012). [14] M. S. Gautam, Nucl. Phys. A 933, 272 (2015). [15] A. S. Umar and V. E. Oberacker, Phys. Rev. C 74, 021601 (2006). [16] K. Washiyama and D. Lacroix, Phys. Rev. C 78, 024610 (2008). [17] C. Simenel and B. Avez, Int. J. Mod. Phys. E 17, 31 (2008). [18] J. Aichelin and H. Stöcker, Phys. Lett. B 176, 14 (1986). [19] J. Aichelin, Phys. Rep. 202, 233 (1991). [20] N. Wang, Z. Li, and X. Wu, Phys. Rev. C 65, 064608 (2002). [21] G. R. Satchler and W. G. Love, Phys. Rep. 55, 183 (1979). [22] L. C. Chamon, G. P. A. Nobre, D. Pereira, E. S. Rossi, Jr., C. P. Silva, L. R. Gasques, and B. V. Carlson, Phys. Rev. C 70, 014604 (2004). [23] I. I. Gontchar, D. J. Hinde, M. Dasgupta, and J. O. Newton, Phys. Rev. C 69, 024610 (2004). [24] R. K. Puri and R. K. Gupta, Phys. Rev. C 45, 1837 (1992). [25] V. Yu. Denisov, Phys. Lett. B 526, 315 (2002). [26] M. Liu, N. Wang, Z. Li, X. Wu, and E. Zhao, Nucl. Phys. A 768, 80 (2006). [27] A. S. Umar, C. Simenel, and V. E. Oberacker, Phys. Rev. C 89, 034611 (2014). [28] A. S. Umar and V. E. Oberacker, Eur. Phys. J. A 39, 243 (2009). [29] K. Hagino and Y. Watanabe, Phys. Rev. C 76, 021601 (2007). [30] H. Esbensen and Ş. Mişicu, Phys. Rev. C 76, 054609 (2007). [31] Ş. Mişicu and H. Esbensen, Phys. Rev. Lett. 96, 112701 (2006). [32] Ş. Mişicu and H. Esbensen, Phys. Rev. C 75, 034606 (2007). [33] Y. Aboussir, J. M. Pearson, A. K. Dutta, and F. Tondeur, Nucl. Phys. A 549, 155 (1992). [34] J. Friedrich and P.-G. Reinhard, Phys. Rev. C 33, 335 (1986). [35] B. A. Brown, Phys. Rev. C 58, 220 (1998). [36] D. Vautherin and D. M. Brink, Phys. Rev. C 5, 626 (1972). [37] K. Hagino, N. Rowley, and A. T. Kruppa, Comput. Phys. Commun. 123, 143 (1999). [38] S. Raman, C. W. Nestor, Jr., and P. Tikkanen, At. Data Nucl. Data Tables 78, 1 (2001). [39] T. Kibedi and R. H. Spear, At. Data Nucl. Data Tables 80, 35 (2002). [40] C. R. Morton, A. C. Berriman, M. Dasgupta, D. J. Hinde, J. O. Newton, K. Hagino, and I. J. Thompson, Phys. Rev. C 60, 044608 (1999). [41] J. Dobaczewski, H. Flocard, and J. Treiner, Nucl. Phys. A 422, 103 (1984). [42] R. K. Puri and R. K. Gupta, J. Phys. G 18, 903 (1992). [43] R. K. Gupta, S. Singh, R. K. Puri, A. Sandulescu, W. Greiner, and W. Scheid, J. Phys. G 18, 1533 (1992). [44] F. Tondeur, M. Brack, M. Farine, and J. M. Pearson, Nucl. Phys. A 420, 297 (1984). [45] B. K. Agrawal, S. Shlomo, and V. Kim Au, Phys. Rev. C 68, 031304 (2003). [46] H. Timmers, D. Ackermann, S. Beghini et al., Nucl. Phys. A 633, 421 (1998). FIGURE CAPTIONS Fig. 1. The neutron and proton density distributions of (a) the 16O and (b) 208Pb nuclei obtained by using the SkSC4 and SI Skyrme interactions in the HFB approximation. Fig. 2. The percentage relative deviations, i.e., |(Theo. - Exp.)/Exp.|×100, of (a) the theoretical binding energies and (b) root-mean-square charge radii from their experimental data for the 16O and 208Pb nuclei. The incompressibility values corresponding to the Skyrme forces are displayed on the top horizontal axis. Fig. 3.
The calculated diffuseness parameters of the neutron and proton density distributions, a_n,p, for (a) the 16O and (b) 208Pb nuclei. The incompressibility values corresponding to the Skyrme forces are displayed on the top horizontal axis. Fig. 4. (a) The theoretical fusion barrier heights and (b) positions calculated from different Skyrme forces for the 16O+208Pb system. The incompressibility values corresponding to the Skyrme forces are displayed on the top horizontal axis. Fig. 5. The fusion cross sections of the 16O+208Pb system calculated with the potentials obtained from different Skyrme forces. The experimental data were taken from Ref. <cit.>. Fig. 6. The fusion cross sections of the 16O+208Pb system calculated with the potentials derived from different Skyrme forces and the density parameters obtained from the SkP Skyrme force. Fig. 7. The fusion barrier distributions for the 16O+208Pb system calculated by using the cross sections derived from different Skyrme forces and their corresponding density distributions. Fig. 8. The predicted values of the nuclear matter incompressibility in the 16O+208Pb system at different bombarding energies. The temperature of the compound nucleus corresponding to each energy is displayed on the top horizontal axis. Fig. 9. The fusion cross sections of the 40Ca+90Zr system calculated with the potentials obtained from the SkT4, SkT1*, SK255, and SK272 Skyrme forces and their corresponding density distributions. The experimental data were taken from Ref. <cit.>.
http://arxiv.org/abs/1702.08418v1
{ "authors": [ "O. N. Ghodsi", "F. Torabi" ], "categories": [ "nucl-th" ], "primary_category": "nucl-th", "published": "20170227182546", "title": "Effect of nuclear matter incompressibility on the 16O+208Pb system" }
The revival of the Baldwin Effect Mauro Santos December 30, 2023 =================================We introduce a novel kernel that models input-dependent couplings across multiple latent processes. The pairwise joint kernel measures covariance along inputs and across different latent signals in a mutually-dependent fashion.A latent correlation Gaussian process (LCGP) model combines these non-stationary latent components into multiple outputs by an input-dependent mixing matrix. Probit classification and support for multiple observation sets are derived by Variational Bayesian inference. Results on several datasets indicate that the LCGP model can recover the correlations between latent signals while simultaneously achieving state-of-the-art performance. We highlight the latent covariances with an EEG classification dataset where latent brain processes and their couplings simultaneously emerge from the model. Gaussian process, non-stationary kernel, cross-covariance, latent variable modelling§ INTRODUCTIONGaussian processes (GP) are Bayesian non-parametric models that explicitly characterize the uncertainty in the learned model by describing distributions over functions <cit.>. These models assume a prior over functions, and subsequently the function posterior given the data can be derived. The prior covariance plays the key roles of both regularising the model by determining its smoothness properties, and characterising how the underlying function varies in the input space. Recently, there has been interest in deriving non-stationary covariance kernels, where the general signal variances or the intrinsic kernel parameters – such as the lengthscales in the squared exponential or Matérn kernels – are input-dependent <cit.>. For instance, in geostatistical applications, a non-stationary kernel can both model a difference in the covariance along or across geological formations <cit.>. Input-dependent, heteroscedastic noise models have also been studied in single-task <cit.> and in multi-task settings <cit.>.In multi-task learning Gaussian processes are utilized by modeling the output covariances between possibly several latent functions <cit.>. In latent function models[Coined as linear models of coregionalisation (LCM) in geostatistics literature <cit.>.] the outputs are linear combinations of multiple underlying latent functions <cit.>. In Gaussian Process Regression Networks (GPRN) the mixing coefficients of multiple independent latent signals are input-dependent Gaussian processes as well, leading to a general multi-task framework that adaptively combines latent signals into outputs along the input space <cit.>. The main contribution of this paper is to introduce a mutually-dependent Hadamard product kernel that combines a covariance structure between the latent signals that depends on the inputs, with an input kernel that depends on the latent signal indices. The signal and input kernels are interdependent, conditional on each other. This is in contrast to earlier Kronecker-based joint kernels where inputs and latent signals would be assumed independent. The kernel generalizes Wishart processes <cit.> into cross-covariances for input-dependent correlation structure, and a non-stationary Gaussian kernel <cit.> for measuring input-space correlations at specific latent signals. 
We deploy this kernel to extend the GPRN framework by a non-stationary cross-covariance function for the latent signals. Furthermore, the proposed latent correlation Gaussian process (LCGP) incorporates multiple latent signals that are linearly combined into multiple outputs in an input-dependent fashion. The latent signals have a structured Wishart-Gibbs model that leads to non-stationary signal variances. We account for both regression and Probit-based classification. Finally, the model is extended to multiple observation sets, where each observation is modeled by a separate latent model with shared latent correlations. In such a model, the latent correlations effectively regularize the latent models of each observation. Variational Bayesian inference with whitened gradients is derived for a scalable implementation. We highlight the model with several datasets where interesting latent signal covariance models emerge, while retaining or improving the state-of-the-art regression and classification performance. Multi-observation classification is demonstrated on EEG data from a large set of scalp measurements from several subjects, where the model is able to learn the covariance model between the underlying brain processes. In simulation studies, we show that our model is capable of accurately learning the latent variable correlations.

§ LATENT CORRELATION GAUSSIAN PROCESS

We consider M-dimensional observations y(x) ∈ ℝ^M over N data points (x_1, …, x_N). We denote vectors with boldface symbols, matrices with capital symbols and block matrices with boldface capital symbols. In this section we first construct the multi-output regression model for y(x), and then develop a novel kernel for the latent variables in such a model as our main contribution. Section <ref> further extends the framework into a classification setup.

§.§ Multi-output regression

Following <cit.>, we model the M-dimensional outputs y(x) ∈ ℝ^M as an input-dependent mixture of Q latent signals u(x) ∈ ℝ^Q via a mixing matrix B(x) ∈ ℝ^M × Q,

y(x) = f(x) + ε = B(x)(u(x) + γ) + ε,

where ε = ε(x) is zero-mean M-dimensional Gaussian observation noise and γ = γ(x) is zero-mean Q-dimensional latent noise,

ε ∼ N(0, ω_f^-1 I), ω_f ∼ Gamma(α_f, β_f), γ ∼ N(0, ω_u^-1 I), ω_u ∼ Gamma(α_u, β_u).

We model both the latent variables u as well as the elements of the mixing matrix B as Gaussian processes. A GP prior ϕ(x) ∼ GP(μ(x), k(x,x')) defines a distribution over functions ϕ(x) with expectation E[ϕ(x)] = μ(x) and covariance cov[ϕ(x), ϕ(x')] = k(x,x') between the values at points x and x'. A set of function values ϕ = (ϕ(x_1), …, ϕ(x_N))^T follows a Gaussian ϕ ∼ N(μ, K) with K_ij = k(x_i,x_j) and μ_i = μ(x_i).

The mixing matrix B(x) is an M × Q matrix of independent Gaussian processes over outputs m and latent signals q, B_mq(x) ∼ GP(0, k_b(x,x')). The kernel k_b(x,x') between two input points x and x' determines how the mixing of latent signals into outputs evolves along the input space. For instance, with temporal data the mixing matrix allows time-dependent linear combinations of the outputs. The full model is depicted in Figure <ref>. Next, we proceed to derive a kernel for the latent variables u.

§.§ Wishart-Gibbs Hadamard Product Kernel

The latent signals u_p(x) are functions of the signal index p and input x. We propose to encode the latent signals u(x) as mutually dependent Gaussian processes over pairs of inputs (x, x') and signals (p, q),

u_p(x) ∼ GP(0, A_xx'(p,q) K_pq(x,x')),

such that the joint covariance cov[u_p(x), u_q(x')] = A_xx'(p,q) K_pq(x,x') is a product of signal and input similarities.
Both similarities depend on each other to produce a non-stationary joint covariance. The pairwise, mutually dependent Hadamard kernel k(x,x',p,q) = A_xx'(p,q) K_pq(x,x') encodes a rich similarity between input x of latent signal p and x' of latent signal q as the product of the two conditional kernels. The kernel A_xx'(p,q) encodes signal similarity between inputs x and x', while the kernel K_pq(x,x') denotes input similarity at latent signals p and q. Since the two kernels depend on each other, a simple model such as a Kronecker kernel product <cit.> is not suitable. Both kernels can be interpreted as cross-covariances. The Gibbs kernel K_pq restricts the flexibility of the Wishart kernel A_xx' (see Figure <ref>).

For instance, in EEG data the kernels could signify correlations A_tt'(p,q) between latent brain processes p and q at two time points t and t', while K_pq(t,t') is a smooth temporal kernel that connects events that occur at similar time points. In geospatial applications, the correlations A_xx'(p,q) can encode similarity between two latent ore functions p and q at two locations x, x' ∈ ℝ^2, for instance between cadmium and zinc concentrations <cit.>. The location kernel K_pq(x,x') could encode a smooth spatial proximity. A conventional Kronecker kernel k(x,x',p,q) = A(p,q) K(x,x') would assume – in contrast – that (i) the same spatial proximity K(x,x') applies to all ore functions p, and (ii) two ore concentrations would correlate similarly independent of the location x.

We start forming the joint kernel by considering a non-stationary Gaussian kernel for the inputs x <cit.>,

K_pq(x,x') = √(2ℓ_pℓ_q/(ℓ_p^2+ℓ_q^2)) exp(-(x-x')^2/(ℓ_p^2+ℓ_q^2)),

which encodes specific lengthscales ℓ_1, …, ℓ_Q for each latent signal. The kernel within a single latent signal, K_pp, reduces into a standard Gaussian kernel, while the cross-covariance similarity K_pq measures the similarity of two inputs with different associated lengthscales.

We base our construction of the mutually dependent covariance structure A_xx'(p,q) on Wishart processes. A Generalized Wishart Process (GWP) prior on a covariance matrix that depends on a single variable x is <cit.>

A(x) = ∑_r=1^ν L z_r(x) z_r(x)^T L^T ∼ GWP(V, ν, K_z),

where V = LL^T and all z_pr(x) ∼ GP(0, K_z(x,x')) are independent Gaussian processes for p = 1, …, Q and r = 1, …, ν. The kernel K_z determines the change of A(x) in the input space. From this formulation we define our joint kernel, such that we preserve the GWP marginal for A(x), by extending the GWP into cross-covariances of two variables as

A_xx'(p,q) = z_p(x)^T z_q(x'),

where each element of z_p(x) ∈ ℝ^ν has a GP prior (see Figure <ref>). With this choice the prior expectation of the covariance is the identity matrix. The resulting covariance of u_p(x) is then a product of the covariance between inputs at signals p and q, and the covariance between signals at inputs x and x'. This covariance can be seen marginally from two perspectives,

u_p ∼ N_N(0, A(p,p) ∘ K_pp),    u(x) ∼ N_Q(0, A(x) ∘ P),

where u_p ∈ ℝ^N is a single latent signal that follows a Normal distribution weighted by the variances A(p,p), and u(x) ∈ ℝ^Q contains all Q latent signals at input x and follows a Normal distribution with a generalized Wishart process prior, scaled by the matrix P_pq = √(2ℓ_pℓ_q/(ℓ_p^2+ℓ_q^2)). The element-wise, or Hadamard, product of x and y is denoted by x ∘ y. The joint covariance over the concatenated column vector of all latent signals u ∈ ℝ^QN is a block matrix

cov(u, u) = (Z_i Z_j^T ∘ K_ij)_i,j=1^N + Σ_u = ZZ^T ∘ K_Q + Σ_u,

where Z_i is a Q × ν matrix, and Z = (Z_1, …, Z_N)^T.
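To make the construction concrete, here is a minimal NumPy sketch of the Wishart-Gibbs covariance (our illustration, not the authors' Matlab implementation); the block quantities K_ij, K_Q and Σ_u that it assembles are spelled out in detail right below. For brevity the Wishart factors Z are drawn i.i.d. here instead of from their GP prior GP(0, K_z) over x.

```python
import numpy as np

def gibbs_cross_kernel(x, ls):
    """Blocked Gibbs kernel K_Q: entry (i*Q+p, j*Q+q) equals K_pq(x_i, x_j)."""
    N, Q = len(x), len(ls)
    s2 = ls[:, None] ** 2 + ls[None, :] ** 2            # (Q, Q): l_p^2 + l_q^2
    pref = np.sqrt(2.0 * np.outer(ls, ls) / s2)         # (Q, Q) normalisation
    d2 = (x[:, None] - x[None, :]) ** 2                 # (N, N) squared distances
    K = pref[None, None, :, :] * np.exp(-d2[:, :, None, None] / s2[None, None, :, :])
    return K.transpose(0, 2, 1, 3).reshape(N * Q, N * Q)

def wgcc_covariance(Z, K_Q, omega_u):
    """Joint latent covariance Z Z^T . K_Q + omega_u^{-1} I over the stacked u."""
    return (Z @ Z.T) * K_Q + np.eye(Z.shape[0]) / omega_u

# Example: N = 50 inputs, Q = 2 latent signals, nu = 3 Wishart factors.
x = np.linspace(-1.0, 1.0, 50)
ls = np.array([0.3, 0.8])                               # one lengthscale per latent signal
Z = np.random.randn(50 * 2, 3) / np.sqrt(3)             # stacked Z_i blocks, rows ordered (i, p)
cov_u = wgcc_covariance(Z, gibbs_cross_kernel(x, ls), omega_u=100.0)
```

Note that the result is positive semi-definite by the Schur product theorem, since both factors of the Hadamard product are PSD.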
The Q × Q kernel K_ij = (K_pq(x_i,x_j))_p,q=1^Q gives the signal similarities at inputs x_i and x_j, the block matrix K_Q = (K_ij)_i,j=1^N collects them into (N × N) blocks of kernel values, and finally the noise matrix is Σ_u = ω_u^-1 I_QN, introducing the latent noise directly into the covariance of the u's. The resulting joint input-output covariance ZZ^T ∘ K_Q consists of N × N block matrices of size Q × Q. See Figure <ref> for a visualisation.

The kernel matrix ZZ^T is positive semi-definite (PSD) as an outer product, and the Gaussian kernel K_Q is PSD as well <cit.>. The Hadamard product ZZ^T ∘ K_Q retains this property. We refer to this kernel as the Wishart-Gibbs cross-covariance (WGCC).

The proposed latent correlation Gaussian process (LCGP) model is a flexible Bayesian regression model that simultaneously infers the latent signals and their mixing to match the output processes, while learning the underlying correlation structure of the latent space using the WGCC kernel. The latent correlations are parameterised by two terms that characterise the input and signal similarities with Gaussian and Wishart functions, respectively. A key feature of the model is the ability to adaptively couple and decouple latent processes along the input space.

§ CLASSIFICATION WITH MULTIPLE OBSERVATIONS

We further suppose that we have S observations or samples y^(s)(x) associated with a class label, or response, r^(s), and assume that all these observations share their latent space. We then learn separate latent functions u^(s) for each sample, while keeping the mixing model B(x), the latent correlations Z(x) and K_Q, and the noise precisions ω_f and ω_u shared. The noiseless sample is then reconstructed as f^(s)(x) = B(x) u^(s)(x), which results in the same likelihood as in eq. (<ref>).

We build a classifier in the latent signal space as a Probit classification model over all latent signals w^T u^(s) with Gaussian-Gamma priors, where w ∈ ℝ^NQ is a concatenated column vector of linear weights w_p ∈ ℝ^N for the Q latent signals. This allows us to reduce the data dimensionality for classification, as M can potentially be very large. The classifier is then

r^(s) | w, u^(s) ∼ Bernoulli(Φ(w^T u^(s) + b)),
w_p | λ_w ∼ N(0, λ_w^-1 I), λ_w ∼ Gamma(α_w, β_w),
b | λ_b ∼ N(0, λ_b^-1), λ_b ∼ Gamma(α_b, β_b),

where we index the observations with s, and w and b are the classifier weights and bias, respectively. The Gaussian CDF is denoted by Φ(·). We additionally assume, for notational clarity, that all data are observed at the same input points x_1, …, x_N. Essentially, our model now has two likelihoods for the two types of data, one defined for the output data in eq. (<ref>) and one for the class labels related to the outputs in eq. (<ref>).

§ INFERENCE

§.§ Variational Bayes

For inference in our Bayesian model we adopt the Variational Bayesian (VB) approach <cit.>, which is based on maximising a lower bound on the log marginal likelihood of the data with respect to a distribution q(Θ), where Θ represents all model parameters. The distribution q(Θ) is of an easier form than the true posterior distribution p(Θ | D), where D = (Y^(s), r^(s))_s=1^S and Y^(s) ∈ ℝ^M × N. The lower bound is obtained by Jensen's inequality,

log p(D) = log ∫ q(Θ) p(D, Θ)/q(Θ) dΘ ≥ ∫ q(Θ) log(p(D, Θ)/q(Θ)) dΘ ≡ L(q).

Typically, a factorised approximation q(Θ) = ∏_i q(θ_i) is used, where the θ_i are some disjoint subsets of the variables Θ. It can be shown that the optimal solution that maximizes L(q) is q(θ_i) ∝ exp(⟨log p(Θ, D)⟩_θ_-i), in which the expectation is taken with respect to all variables except θ_i. The VB algorithm consists of iterating through updating each factor q(θ_i).
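The resulting coordinate-ascent structure can be summarised in a few lines. The sketch below is purely schematic (the factor dictionary and the `update_rules` callables are hypothetical placeholders for the closed-form updates of Table <ref>):

```python
def coordinate_ascent_vb(factors, data, update_rules, n_sweeps=100):
    """Mean-field VB: each sweep applies q(theta_i) prop. to
    exp(<log p(Theta, D)>_{q(theta_-i)}), which never decreases the bound L(q)."""
    for _ in range(n_sweeps):
        for name in factors:
            factors[name] = update_rules[name](data, factors)
    return factors
```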
§.§ VB for the LCGP classification model

We employ the following factorization,

q(Θ) = ∏_s q(u^(s)) q(h^(s)) ∏_m q(B_m) q(ω_f) q(w, b) q(λ_w) q(λ_b) q(Z),

where q(B_m) factorizes the mixing matrix B row-wise. Most factors have standard distributions; the update formulas are shown in Table <ref>. The VB inference procedure is summarised in Algorithm <ref>. LCGP can be run with or without the classification part of the model; without classification the parameters involved are ignored (see Figure <ref>). Auxiliary variables h are introduced to make the variational inference tractable for Probit classification <cit.>,

h | w, u ∼ N(w^T u + b, 1).

Class labels depend on the sign of h, i.e. r = +1 if h > 0. Integrating out h recovers the Probit likelihood p(r | w, u) = Bernoulli(r | Φ(w^T u + b)). The posterior q(h) is a truncated Gaussian <cit.>, which has analytical formulas for the first and second moments.

Finally, q(Z) is updated by optimising the lower bound L(Z) with respect to Z. We optimise L(Z) using L-BFGS in the whitened domain, employing a change of variables Z̃ = L^-1 Z with the Cholesky decomposition of the kernel K̄_z = L L^T to make the optimization more efficient <cit.>; see the Supplementary for details.

Predictions at new inputs x^* can be made by applying standard GP formulas to obtain B(x^*), u(x^*) and Z(x^*) based on the optimized variational posterior q(Θ). For a new observation with unknown class label r^(s^*), we can apply the update for q(u^(s^*)) without the classification related terms.

§ RELATED WORK

In semiparametric latent factor models (SLFM) the signal f(x) = B u(x) over M outputs is a linear combination of Q independent latent Gaussian process signals u(x) with a fixed mixing matrix B ∈ ℝ^M × Q, with appropriate hyperparameter learning <cit.>. A Gaussian process regression network (GPRN) <cit.> extends this model by considering a mixing matrix B(x) where each element B_pi(x) is an independent Gaussian process along x.

In geostatistics, vector-valued regression with Gaussian processes is called cokriging <cit.>. In linear coregionalization models (LCM) latent Gaussian processes are mixed from latent signals u_p(x) and u_q(x') that are independent. In contrast to SLFM, each signal u_p(x) is an additional mixture of R_Q signals with separate shared covariances K_q(x,x'). In the intrinsic coregionalization model (ICM) only a single (Q=1) latent mixture with a single shared kernel is used, while in SLFM there are multiple latent singleton (R_Q=1) signals. In spatially varying LCMs (SVLCM) the mixing matrices are input-dependent, similar to GPRNs <cit.>. <cit.> used non-orthogonal latent signals u_p(x) and u_q(x') with fixed covariances.

Multi-task Gaussian processes employ structured covariances that combine a task covariance with an input covariance. Simple Kronecker products between the covariances assume that task and input covariances are independent functions <cit.>. This is computationally efficient <cit.>, but it does not take into account interactions between the tasks and inputs. In Generalised Wishart Processes <cit.> an input-dependent covariance matrix Σ(x) = ∑_n=1^ν z_n(x) z_n(x)^T is a sum of ν outer products. The random variables z_ni(x) ∼ GP(0, K(x,x')) are all independent Gaussian processes. Copula processes also describe dependencies of random variables by Gaussian processes <cit.>.
In Bayesian nonparametric covariance regression, covariances of multiple predictors share a common dictionary of Gaussian processes <cit.>. Finally, Gaussian process dynamical or state-space systems are a general class of discrete-time state-space models that combine the latent state into time-dependent outputs as Markov processes <cit.>. In Gaussian process factor analysis the outputs are described as factors that have GP priors, however without modeling the factor dependencies <cit.>.

§ EXPERIMENTS

In the first experiment we show that our model[Our Matlab implementation can be found on <https://github.com/sremes/wishart-gibbs-kernel>] can recover the true latent correlations in a simple simulated-data case, and compare our method with GPRNs, which is a state-of-the-art multi-output Gaussian process regression model <cit.>. We employ the mean-field variational inference implementation of GPRN by <cit.>. Second, we apply our method to the Jura geospatial dataset to elucidate latent ore concentration process couplings. Finally, we demonstrate our full modelling framework on an EEG single-trial classification task, outperforming state-of-the-art regularised LDA in classification and additionally recovering an interesting latent representation that we further evaluate in a simulation study. Results from the experiments are summarised in Table <ref>.

§.§ Simulated Data Experiments

§.§.§ Wishart-Gibbs kernel in multi-output GP

We simulated a dataset that contains a clear switch in the coupling of three outputs in the middle of the interval [-1, 1]. A traditional Kronecker multi-output kernel of the form A ⊗ K, with A = ∑_k a_k a_k^T and K a Gaussian kernel, cannot model this, but our proposed Wishart-Gibbs kernel can adapt to this switch point. The data and the posterior fit with both kernels are shown in Figure <ref>. Our kernel obtains an MSE of 0.44, and the Kronecker kernel 0.60, with a baseline of 1.92 (predicting zero).

§.§.§ Recovering latent covariance with LCGP

We use simple toy data to show that we are able to recover a known latent correlation structure. We generated data with a varying number of latent components Q = 2, …, 5 and number of samples S = 1, …, 20. The mixing matrix was binary such that one output maps to one latent variable. For simplicity, we only consider LCGP without the classifier. To assess the accuracy, we measured the correlation between the elements of the true covariance matrix and those of the estimated one. With GPRN we computed the empirical covariance Σ̂ = ∑_s u^(s) u^(s)T of the latent variables. As the order of the recovered latent variables is not identifiable, we computed the correlations over all permutations and report the best. Rotations of the latent space are not accounted for, however. The results in Figure <ref> show that our model can recover the true underlying latent covariance with high correlations.

§.§ Jura

The Jura dataset[Data available at <https://sites.google.com/site/goovaertspierre/pierregoovaertswebsite/publications/book>.] consists of measurements of cadmium, nickel and zinc concentrations in a region of the Swiss Jura <cit.>. For training we are given the concentrations measured at 259 locations, and for validation the measurements at 100 additional locations. We set the hyperparameters for both our model and GPRN as ℓ_u = 0.5 and ℓ_b = 1, and for our model the latent correlation lengthscale to ℓ_z = 1. We learned the models with Q=2 latent variables, which resulted in the best model performance.
We report both the mean squared and absolute errors for the predicted concentrations in Table <ref>. Our model performs at the same level as the state-of-the-art competitor GPRN, with slightly better performance in absolute errors. Figure <ref> shows the inferred model. The latent variables are 2D spatial surfaces on which the measurement points are indicated as black points. The two latent variables learn different geological processes that have an interesting two-pronged correlation pattern indicating two kinds of negative correlations (the scatter plot). By explicitly modelling the latent covariance, we are able to see the regions of the input space that contribute to this pattern; the latent covariances indicate the combined covariance C_pq(x,x') = A_xx'(p,q) K_pq(x,x'). The diagonal plots of Figure <ref>a show the variances of the two latent signals, while the off-diagonal covariance plot indicates the two-pronged negative correlation model between the geological processes. Finally, the mixing matrices of the two latent components reconstruct the three ore observation surfaces.

§.§ EEG

Our main motivation for developing the present model was in modelling EEG data. We demonstrate LCGP on data from a P300 study <cit.>, where the subjects were shown either a target or a non-target stimulus, specifically a green or a red LED flashing, respectively. The classification task is to classify the stimulus based on the brain measurements. Additionally, we evaluate our modelling approach in a simulation study.

§.§.§ Classification Results

We evaluate the classification performance using a Monte Carlo cross-validation scheme where in each fold we randomly sample training and test sets of S=1000 trials from the full dataset consisting in total of 7351 trials from 16 subjects. A single trial is the continuous voltage measurement of M=19 channels in an EEG cap for 800 ms, with N=89 samples after filtering and downsampling the time series <cit.>. We report the average area under the ROC curve (AUC) statistic over 100 folds, and compare our method to the state-of-the-art regularised LDA method implemented in the BBCI toolbox <cit.>. Results in Table <ref> show that our method performs better than RLDA (p<0.05).

An example visualization of the model from one of the cross-validation folds is depicted in Figure <ref> for the first three latent signals. Panel (a) indicates the shared variances and covariances of the latent signals along time. The first and third latent signals have a monotonically increasing covariance coupling, while the first and second latent signals have a periodicity in the covariance. The average latent variables of the target and non-target trials are shown in panel (b). The third latent variable captures a strong dynamic between time points [0.3, 0.5] (in seconds), which coincides with the expected P300 activity approximately 300 ms after the stimulus presentation. The first two variables show peaks also at approximately 300 ms. In general the positive trials have remarkably different latent representations than the negative trials. Panel (c) shows the classifier weights w_p for the three latent signals with the average classification plotted. Finally, a subset of the EEG channels is shown in panel (d), highlighting the differences in the channel dynamics.
In panel (e) the components are found to be discriminative also when plotted on the scalp map.

§.§.§ Finding the Latent Correlations

In addition to the classification results, we evaluated our modelling approach in a simulation study to test whether we can find the latent correlations correctly using data that resembles the real EEG as closely as possible, but where we know the ground truth. To this end, we simulated datasets from the fitted models, and learned a new model on the simulated data. We repeated this for a varying number of latent variables (Q = 2,3,4) with 10 simulations done for each value of Q. For each simulation we computed the empirical p-value with the hypothesis that the accuracy of our model is greater than using randomly simulated covariances from the model. The p-values from the simulations were combined using Fisher's method <cit.>, with results reported in Table <ref>. We again use the correlation-based score for assessing the accuracy, as in the toy data case.

§ DISCUSSION

The LCGP is a flexible framework for multi-task learning. We demonstrate in two experiments that our model can robustly learn the latent variable correlations. The model also achieves state-of-the-art performance in both regression and classification. The added modeling of the correlations of the underlying latent processes both improves model interpretability and regularises the model, especially with multiple observations. The novel Wishart-Gibbs cross-covariance kernel encodes mutually-dependent covariances between latent signals and inputs in a parameterised way without being too flexible. In place of the non-stationary Gaussian kernel, other non-stationary kernels are possible. <cit.> propose a class of non-stationary convolution kernels containing, for instance, a non-stationary Matérn kernel. For future work, coupling the spectral kernels <cit.> with Wishart correlations is another highly interesting avenue towards a general family of dependent, structured kernels. The mutually-dependent Hadamard kernel would also be interesting to study in the context of structured multi-task learning to model dependent input-output relations.

This work was supported by the Finnish Funding Agency for Innovation (project Re:Know) and the Academy of Finland (COIN CoE, and grants 299915, 294238 and 292334). The research leading to these results has received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement no 611570. We acknowledge the computational resources provided by the Aalto Science-IT project.

§ OPTIMISATION OF KERNEL PARAMETERS

The factor q(Z) corresponding to the GP parameters of the proposed Hadamard kernel is updated by finding point estimates that maximize the variational lower bound. The relevant part of the bound is given by

L(Z) = ∑_s ⟨log p(u^(s) | Z)⟩ + log p(Z) = -½ (S log|Σ| + ∑_s ⟨u^(s)T Σ^-1 u^(s)⟩ + tr(Z^T K̄_z^-1 Z)),

where Σ = ZZ^T ∘ K_Q + Σ_u, and its gradient by

∂L/∂Z_ij = ½ tr([Σ^-1 (∑_s ⟨u^(s) u^(s)T⟩) Σ^-1 - S Σ^-1] ∂Σ/∂Z_ij) - [K̄_z^-1 Z]_ij,

where K̄_z = K_z ⊗ I_Q is a block matrix of full size (QN × QN), and

∂Σ/∂Z_ij = ∂(ZZ^T ∘ K_Q + Σ_u)/∂Z_ij = (J^ij Z^T + Z J^ijT) ∘ K_Q,

with J^ij the single-entry matrix with a one at position (i,j). The cost function in the whitened domain can be evaluated as L(L Z̃) and the gradient, by the chain rule, as ∂L/∂Z̃ = L^T ∂L/∂Z. We can similarly optimize the noise precision ω_u and the lengthscales ℓ_u.
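For reference, with M := ∑_s ⟨u^(s) u^(s)T⟩ the entry-wise gradient above collapses to the vectorised form ((Σ^-1 M Σ^-1 - S Σ^-1) ∘ K_Q) Z - K̄_z^-1 Z. The NumPy sketch below is our illustration of this computation (the authors' implementation is in Matlab); it uses plain inverses for clarity where Cholesky solves would be preferred in practice.

```python
import numpy as np

def bound_and_grad(Z, K_Q, Kz_blk, M, S, omega_u):
    """L(Z) and dL/dZ for Sigma = Z Z^T . K_Q + omega_u^{-1} I."""
    QN = Z.shape[0]
    Sigma = (Z @ Z.T) * K_Q + np.eye(QN) / omega_u
    Sinv = np.linalg.inv(Sigma)
    _, logdet = np.linalg.slogdet(Sigma)
    KzinvZ = np.linalg.solve(Kz_blk, Z)               # \bar{K}_z^{-1} Z
    L = -0.5 * (S * logdet + np.trace(Sinv @ M) + np.sum(Z * KzinvZ))
    W = Sinv @ M @ Sinv - S * Sinv                    # so that dL/dSigma = W / 2
    grad_Z = (W * K_Q) @ Z - KzinvZ
    return L, grad_Z

# Whitened domain: with Kz_blk = L_c @ L_c.T (Cholesky) and Z = L_c @ Z_tilde,
# the gradient with respect to Z_tilde is L_c.T @ grad_Z, as stated above.
```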
http://arxiv.org/abs/1702.08402v2
{ "authors": [ "Sami Remes", "Markus Heinonen", "Samuel Kaski" ], "categories": [ "stat.ML" ], "primary_category": "stat.ML", "published": "20170227174846", "title": "A Mutually-Dependent Hadamard Kernel for Modelling Latent Variable Couplings" }
Tverberg type theorems for matroids
Pavel Paták
December 30, 2023
===================================

In this paper we show a variant of colorful Tverberg's theorem which is valid in any matroid: Let S be a sequence of non-loops in a matroid M of finite rank m with closure operator cl. Suppose that S is colored in such a way that the first color does not appear more than r times and each other color appears at most (r-1) times. Then S can be partitioned into r rainbow subsequences S_1, …, S_r such that cl ∅ ⊊ cl S_1 ⊆ cl S_2 ⊆ … ⊆ cl S_r. In particular, cl ∅ ≠ ⋂_i=1^r cl S_i. A subsequence is called rainbow if it contains each color at most once.

The conclusion of our theorem is weaker than the conclusion of the original Tverberg's theorem in ℝ^d, which states that ⋂ conv S_i ≠ ∅, whereas we only claim that ⋂ cl S_i ≠ ∅. On the other hand, our theorem strengthens Tverberg's theorem in several other ways: a) it is applicable to any matroid (whereas Tverberg's theorem can only be used in ℝ^d), b) instead of ⋂ cl S_i ≠ ∅ we have the stronger condition cl ∅ ⊊ cl S_1 ⊆ cl S_2 ⊆ … ⊆ cl S_r, and c) we add color constraints that are even stronger than the color constraints in the colorful version of Tverberg's theorem. Recently, we used the first property and applied the non-colorful version of this theorem to homology groups with GF(p) coefficients to obtain several non-embeddability results; for details we refer to <cit.>.

§ INTRODUCTION

Tverberg's theorem <cit.> states that given (d+1)(r-1)+1 points[We allow repetitions among these points.] in ℝ^d, it is possible to split these points into r sets S_1, …, S_r with intersecting convex hulls, that is, with ⋂ conv S_i ≠ ∅. If one replaces convex hulls with affine hulls, one obtains a valid statement (Lemma <ref>), which has the advantage of being applicable over any field <cit.>. Lemma <ref> is also easier to prove than the original Tverberg's theorem. Since the proof only uses properties of closure operators, the statement generalizes to any matroid (Lemma <ref>). In both these cases the conclusion can be strengthened a bit: instead of cl S_1 ∩ … ∩ cl S_r ≠ ∅, one can require ∅ ≠ cl S_1 ⊆ cl S_2 ⊆ … ⊆ cl S_r.

In this paper we study the variant of Tverberg's theorem for matroidal closures and show that it allows a colorful version – a generalization where the original points are colored and one furthermore requires that no resulting set S_i, i=1,…,r, contains two or more points of the same color. While the version without colors is straightforward <cit.>, the proof of the colorful version is more subtle. Moreover, our proof method yields an efficient algorithm that finds the required sets in polynomial time.

§.§ Terminology

Before we state our results formally, let us introduce some notation and terminology which will allow us to nicely present the statements and proofs. We assume that the reader is acquainted with basic matroid theory. We always use the symbols r and m to denote non-negative integers. We use the symbols cl, aff, conv and rk for matroidal closure, affine closure, convex hull and rank function, respectively. If M is a set, we consider a sequence S=(m_i)_i∈I of elements from M as a set of pairs {(i,m_i) | i∈I}.
With this convention we can use set theoretic terminology for sequences: |S| is the length of the sequence, S' ⊆ S means that S' is a subsequence of S, we know what it means for two subsequences to be disjoint, we can use the operation S ∖ S' of (sequence) difference, etc. If S = {(i,m_i) | i∈I} is a sequence and we need to refer to the set {m_i | i∈I}, we use the symbol S^set. If Ψ is a map defined on the subsets of M (for example a closure operator or rank function), and S=(m_i)_i∈I is a sequence in M, we use the shorthand Ψ(S) := Ψ(S^set). To make formulas and equations shorter, we leave out the parentheses after the operators cl, aff, conv and rk when there is no danger of confusion.

A coloring of a sequence S = {(i,m_i) | i∈I} is any map c: S → C into some set C of colors, that is, c assigns to each pair (i,m_i) a color from C. The sequence S is rainbow with respect to c if the restriction of c to S is injective.

§.§ Main results

Let us first state the non-colorful variant of Tverberg's theorem for affine hulls and its easy generalization to matroidal closures.

Let S be a sequence of points in an affine space 𝔸 of dimension d. If[We do not require d to be finite, therefore the slightly unusual formulation.] |S| > (d+1)(r-1), then there exist r pairwise disjoint subsequences S_1, …, S_r of S with ⋂_i=1^r aff S_i ≠ ∅. In fact, there are r pairwise disjoint subsequences S_1, …, S_r satisfying ∅ ≠ aff S_1 ⊆ aff S_2 ⊆ … ⊆ aff S_r.

Let M be a (finitary[Finitary matroids are a generalization of matroids to not necessarily finite ground sets. They add the following axiom to the usual axioms for finite matroids: If y ∈ cl(X), then there exists a finite set X' ⊆ X such that y ∈ cl(X'). With this addition, such terms as rank or basis can be correctly defined.]) matroid of rank m with closure operator cl and let S be a sequence of points in M with |S| > m(r-1). Then there exist r pairwise disjoint subsequences S_1, …, S_r of S satisfying cl ∅ ⊊ cl S_1 ⊆ cl S_2 ⊆ … ⊆ cl S_r.

In <cit.> we only stated that there exist sets S_i with ∅ ≠ ⋂ aff S_i. However, the proof there implies Lemma <ref>, and (if one replaces aff with the closure operator cl of a matroid) Lemma <ref>. In the case of matroids of finite rank, both lemmas can also be obtained as a direct consequence of Theorem <ref>.

In <cit.> we applied Lemma <ref> to homology groups over finite fields. This enabled us to prove some inequalities for simplicial complexes embeddable into various manifolds. Our colorful matroidal Tverberg theorem (Theorem <ref>) provides control over the resulting sets, which enables us to further improve the bounds from <cit.>. For the details of the improvement, see the author's thesis <cit.>.

We are now ready to state the main results of this paper.

Let M be a matroid of finite rank m and let S be a sequence of non-loops in M colored by some colors in such a way that at most r elements of S are colored by the first color, at most r-1 by the second color, at most r-1 by the third color, etc. If |S| > m(r-1), then there exist r pairwise disjoint rainbow subsequences S_1, …, S_r of S such that cl ∅ ⊊ cl S_1 ⊆ cl S_2 ⊆ … ⊆ cl S_r. Furthermore, if the time required to decide whether a point x ∈ M lies in the closure of a set Y ⊆ M is bounded by u, then the subsequences S_1, …, S_r can be found in time polynomial in r, m, u and |S|.
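Before stating the companion result, let us record in the smallest nontrivial case why the bound |S| > m(r-1) in Theorem <ref> cannot be improved. This is our worked instance of the sharpness construction that is formalised in Proposition <ref> below and proved in the section on tightness.

```latex
% Sharpness for m = 2, r = 3 in the vector matroid on the ground set
% {e_1, e_2} (a basis of \mathbb{R}^2):
%   S = (e_1, e_1, e_2, e_2),  so  |S| = m(r-1) = 4.
% Split S into any three pairwise disjoint subsequences S_1, S_2, S_3.
% Each e_j occurs only r - 1 = 2 times, so some S_i contains no copy of
% e_j; hence \bigcap_i S_i^{set} = \emptyset, and by the auxiliary lemma
% of the tightness section,
\[
  \bigcap_{i=1}^{3} \operatorname{cl} S_i
    \;=\; \operatorname{cl}\Bigl(\bigcap_{i=1}^{3} S_i^{set}\Bigr)
    \;=\; \operatorname{cl}(\emptyset).
\]
% This rules out the chain
% \operatorname{cl}\emptyset \subsetneq \operatorname{cl} S_1 \subseteq
% \cdots \subseteq \operatorname{cl} S_3, which would force the
% intersection to strictly contain \operatorname{cl}\emptyset.
```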
In the proof of Theorem <ref> we encounter another version of the colorful matroidal Tverberg theorem.

Let M be a matroid of finite rank m and S a sequence of non-loops in M colored by m colors in such a way that at least r elements of S are colored by the first color, at least r-1 by the second color, at least r-1 by the third, …, and at least r-1 by the m-th color. Then there exist r pairwise disjoint rainbow subsequences S_1, …, S_r of S such that cl ∅ ⊊ cl S_1 ⊆ cl S_2 ⊆ … ⊆ cl S_r. Furthermore, if the time required to decide whether a point x ∈ M lies in the closure of a set Y ⊆ M is bounded by u, then the subsequences S_1, …, S_r can be found in time polynomial in r, m, u and |S|.

Note the different conditions on the number of points of each color. In Theorem <ref> these conditions are used to ensure that we have enough colors. In Theorem <ref> we already have the right number of colors, but the conditions ensure that the length of S is sufficient. Moreover, these results are tight:

Lemma <ref>, Lemma <ref>, Theorem <ref> and Theorem <ref> are sharp. To be precise, for any r and any matroid M of rank m there exists a sequence S of non-loops in M with |S| = m(r-1) such that any division of S into r disjoint subsequences S_1, …, S_r satisfies ⋂_i=1^r cl S_i = cl ∅.

Tverberg-type theorems in ℝ^d. Let us now compare our main results with the related theorems valid in ℝ^d. In this section Δ_n denotes the n-dimensional simplex. Tverberg's theorem can be stated as follows: If f: Δ_(d+1)(r-1) → ℝ^d is an affine map, there are r pairwise disjoint faces σ_1, …, σ_r of Δ_(d+1)(r-1) with ⋂_i=1^r f(σ_i) ≠ ∅. This is the reason why Tverberg's theorem is also called the affine Tverberg theorem. To avoid confusion, we have decided not to use the name “affine Tverberg” for Lemma <ref>.

If r is a prime power, Özaydin <cit.> showed that the same result holds for an arbitrary continuous map f. The statement is known as the topological Tverberg theorem. It was a long-standing open problem whether topological Tverberg can be extended to other values of r. The negative answer came in 2015, when Frick (based on the previous work of Mabillard and Wagner <cit.>) constructed the first counterexamples <cit.>. Counterexamples for other values of d and r followed shortly afterwards <cit.>.

If r is a prime, there is a colorful version of (topological) Tverberg's theorem <cit.> as well: Suppose that the vertices of K = Δ_(d+1)(r-1) are colored in such a way that no color is used more than (r-1) times. Then for every continuous map f: Δ_(d+1)(r-1) → ℝ^d, there are r pairwise disjoint rainbow[Containing each color at most once.] faces σ_1, …, σ_r of Δ_(d+1)(r-1) with ⋂_i=1^r f(σ_i) ≠ ∅.

The colorful version provides more control over the resulting sets σ_1, …, σ_r. Even if f is an affine map, the only known proof uses topological methods and needs the assumption that r is prime. Whether this assumption can be relaxed in the affine situation is an open question.
Moreover, the topological proof does not provide any way to find the pairwise disjoint faces; it merely shows their existence. We see that Theorem <ref> does not require r to be a prime number, relaxes the conditions on the colors from the topological version a bit, and provides an efficient algorithm for finding the desired sets. We also note that Bárány, Kalai and Meshulam proved another, very different Tverberg type theorem for matroids <cit.>; they considered continuous maps from the matroidal complex and showed the following: If b(M) denotes the maximal number of disjoint bases in a matroid M of rank d+1, then for any continuous map f from the matroidal complex M into ℝ^d there exist t ≥ √(b(M))/4 disjoint independent sets σ_1, …, σ_t ∈ M such that ⋂_i=1^t f(σ_i) ≠ ∅.

§ TIGHTNESS

We postpone the technical proofs of our main results, Theorems <ref> and <ref>, to the end of the paper. First we prove Proposition <ref> showing their tightness. The proof is a variant of the standard construction for showing that Tverberg's theorem is tight. We start with an auxiliary lemma.

Let M be a matroid with a finite basis B. Then for any two sets U, V ⊆ B,

cl(U) ∩ cl(V) = cl(U ∩ V).

Since the operator cl is monotone, the inclusion cl(U ∩ V) ⊆ cl(U) ∩ cl(V) is obvious. Let us now prove the opposite inclusion. Let x ∈ cl(U) ∩ cl(V) be an arbitrary element. We want to show that x ∈ cl(U ∩ V). If x is a loop, then x ∈ cl ∅ ⊆ cl(U ∩ V). So assume that x is not a loop. Let U' ⊆ U and V' ⊆ V be inclusion minimal subsets with x ∈ cl(U') and x ∈ cl(V'), respectively. Since we assume that x is not a loop, U' ≠ ∅ ≠ V'. We will show by contradiction that U' = V', hence proving the claim. If U' ≠ V', we may up to symmetry assume that there is an element u' ∈ U' which does not lie in V'. From the inclusion minimality of U' it follows that x ∈ cl((U' ∖ {u'}) ∪ {u'}) ∖ cl(U' ∖ {u'}).
Nevertheless, there are subtle differences because we are working in greater generality and because we need to take algorithmic aspects into consideration. Assume that the assumptions of Theorem <ref> are satisfied. We show how to turn the sequence S and the matroid M with closure operatorinto a sequence S' and matroid M' with closure operator ' that satisfy the assumptions of Theorem <ref>. Moreover, we construct S',M',' and the coloring of S' in a such way thatthe sets S_1:=S_1'∩ S, S_2:=S_2'∩ S, …, S_r:=S_r'∩ S will satisfy ∅⊊(S_1)⊆(S_2)⊆…⊆(S_r) iff and only if '∅⊊'(S'_1)⊆'(S'_2)⊆…⊆'(S'_r) and the rainbowness of S_i' will imply that S_i is rainbow. Let m be the rank of M and d the number of colors used in S. From the conditions follows that d-m≥ 0.If the length of S is strictly larger than m(r-1)+1,we throw the superfluous elements of S away.This does not add a point of any color, therefore all assumptions of Theorem <ref> remain preserved.So we may assume that the length of S is precisely m(r-1)+1.We form M' from M by adding (d-m) new coloops[A coloop is an element x that is independent on any set that does not contain x. In other words, we form M' as the direct sum of M with the uniform matroid U_d-m^d-m.] x_1,…, x_d-m.Now we form the sequence S' by appending (x_1,x_1, …,x_1_(r-1)×, x_2,x_2, …,x_2_(r-1)×,…x_d-m, …, x_d-m_(r-1)) to S.Clearly we can color the new elements of S' so that in total there are exactly r points of the first color, and exactly r-1 points of every other color.We see that S', M' satisfy the assumptions of Theorem <ref>. It follows that there are r rainbow subsequences S'_1,…, S'_r of S' satisfying '∅⊊' S'_1⊆' S'_2⊆…⊆ S'_r.Since the points x_i are coloops and since each one of them was added exactly (r-1)-times, it follows that they cannot contribute to ⋂_i=1^r ' (S'_1). Consequently, ⋂_i=1^r (S∩ S'_i)≠∅ and ∅⊊(S∩ S'_1)⊆(S∩ S'_2)⊆…⊆(S∩ S'_r).We conclude that S_1:=S∩ S_1', S_2:=S∩ S_2', …, S_r:=S∩ S_r' are the required subsequences of S.Observe that the reduction is polynomial in r, m, u and |S|.Now we can start with the proof of Theorem <ref>. Here we describe the main idea. We let S_r be a rainbow independent subsequence of the maximal rank. In an ideal case (S_r)=M and we may obtain the remainingsubsequences S_1,…, S_r-1 by apply induction on the sequence S∖ S_r inside M.However, we may be unlucky. It may happen that no such S_r satisfies (S_r)=M, see Fig. <ref>. We see that in this case we could simply take the subsequence S'=(a_1,a_4,a_5,a_6,a_7) and unify colors blue and red into one color (say violet). Then S' lives in a submatroid of rank 1 and satisfies the conditions of Theorem <ref>, so we may use induction. We obtain subsequences S_1,S_2,S_3 of S' satisfying ∅⊊(S_1)⊆(S_2)⊆(S_3). These are clearly also subsequences of S. Moreover they are not only rainbow in the violet-orange coloring, but also in the original blue-orange-red coloring. In the proof we show that if (S_r)≠ M, we may always resolve the situation by an analogous trick. Let us now carry out the technical details. Since we promised an algorithmic solution, we describe an algorithm that finds the desired subsequences. First we compute m'= S. 
Since instead of S we can considerthe subsequence S' formed by the elementscolored by the first m' colors (while preserving all assumptions of Theorem <ref>), we may assume that M=(S).Now we find an inclusion maximal independent rainbow subsequence RI_r of S.This can clearly be done in time polynomial in r, u, m and |S|.We will proceed in the proof by induction on the triple (r,m,m- RI_r) (in lexicographical ordering).If r=1 or m=1 the statement is trivial, so assume r>1, m>1.If m- RI_r=0, then (RI_r)=M.Because RI_r is rainbow, S∖ RI_r and M satisfy the assumptions of Theorem <ref>for r'=r-1 . By applying induction we obtain r-1 disjoint rainbow subsequences S_1,…, S_r-1 of S∖ RI_rwith ∅⊊(S_1)⊆(S_2)⊆…⊆(S_r-1).If we now set S_r=RI_r we see that S_1,…, S_r are the desired disjoint rainbow subsequenceswith ∅⊊(S_1)⊆(S_2)⊆…⊆(S_r-1)⊆(S_r).Therefore we may assume that(RI_r)⊊ MWe would like to increase RI_r by adding a point of a colorthat is not yet used in RI_r. Unfortunately, this is not possible without replacing somepoints of RI_r first. Our algorithm uses a cycle to find out which points to replace and how.Within the cycle we need to keep track of “replacement rules” which makes this part a bit technical.Moreover, there are three possibilities what can occur at one iteration of the cycle:[label=*)] * either we construct a larger independent rainbow set RI_r,* we find the desired sets S_1,…, S_r in a smaller submatroid, or * we adjust the replacement rules. The cycleIn the kth step (k=0,1,2,…) of the cycle the replacement rules consist of the following data:* set K_k of colors (this set corresponds to colors that we may use while replacing some points),* subsequence I_k of RI_r (eventually we would like to replace the subsequence I_k of RI_r by another sequence I_k^p),* for each element p whose color is in K_k and which does not lie in (I_k) a subsequence I_k^p of S (we want to replace I_k with I_k^p, hence increasing the length of our subsequence by one)To simplify the terminology, if T is a subsequence of S,let c(T) denote the set of all the colors used by elements of T.If U is a set of colors, let C_U be the subsequence of Sformed by all elements with color from U. We want the data to satisfy the following conditions:* c(I_k)⊊ K_k, * c(I_k^p)=c(I_k)∪{c_k^p} for somec_k^p∈ K_k∖ c(RI_r), * |I_k^p|=|I_k|+1, * p∈ I_k^p and (I_k^p∖{p})=(I_k) * RI_r∩ C_K_k=I_k and K_k⊈c(RI_r)Note that conditions <ref> and <ref> imply that I_k^p only contains elements that have the same colors as points in I_k plus one additional point that has color c_k^p, which is not yet present in RI_r.The first step (k=0) is easy. We set I_0:=∅ and let K_0 be all the colors of S except for those already used in RI_r.No element p∈ C_K_0 is contained in[C_K_0 are the elements of S whose color lies in K_0 and we assume that S contains only non-loop elements.] (I_0)=∅, so we need to define the set I_0^p for every such p. We simply put I_0^p:={p}.Now we check that the above defined sets satisfy all the prescribed conditions. Note that by (<ref>), S⊈(RI_r).This together with the fact that RI_r is independent implies that |RI_r|<m. Since we have m colors, there is a color that is not used in RI_r.In other words, K_0 is nonempty.Hence conditions <ref>–<ref> are satisfied trivially(with c_k^p=c(p) in condition <ref>). So suppose that the sets K_k, I_k and I_k^p are already constructed.Since I_k⊆ RI_r there are three cases that may occur:* C_K_k⊆(I_k), * C_K_k⊈(RI_r) or * C_K_k⊆(RI_r) and C_K_k⊈(I_k). 
We deal with the particular cases separately: §.§.§ Case <ref>: C_K_k⊆(I_k)In this case, we may apply the trick we used for Fig. <ref>. Let us describe it formally.We set M':= I_k and m':=(I_k). M has rank m and by (<ref>)we know that M⊈(RI_r). It follows that (RI_r)<m and since I_k⊆ RI_r, we also have m'=(I_k)<m. Condition <ref> implies c(I_k)⊊ K_k, so there is a point p∈ C_K_k∖ C_c(I_k).Because I_k is rainbow and independent and I_k=m', c(I_k) has m' distinct elements, say k_1,…, k_m'. We define S':=C_{k_1,…, k_m'}∪{p}. In S' we recolor p and all points of color k_1 by a new color z.Because S'⊆ C_K_k (we evaluate C_K_k with respect to the original coloring),the assumption C_K_k⊆(I_k) (Case <ref>) implies that S' is a sequence of elements from M'. Also in S' there are m' colors, at least r elements of color z and at least r-1 elements of all the remaining colors.Therefore, the assumptions of Theorem <ref> are satisfied for m'<m. By induction we obtain the desired disjoint rainbow subsequences S_1,…, S_r of S' (which itself is a subsequence of S)with ∅⊊(S_1)⊆(S_2)⊆…⊆(S_r). These subsequences are rainbow with respect to the new coloring of S'. By the construction of the new coloring these subsequences are also rainbow in the original coloring of S.§.§.§ Case <ref>: C_K_k⊈(RI_r)In this case, we construct a new independent rainbow subsequence RI_r' with |RI_r'|=|RI_r|+1: We pick a point p∈ C_K_k with p∉(RI_r)and set RI_r':=(RI_r∖ I_k)∪ I_k^p.Before we show that such RI_r' is a rainbow independent subsequenceof size |RI_r|+1, we provethe following auxiliary equality:(RI_r') = (RI_r∪{p}). Indeed,(RI_r')= ((RI_r∖ I_k)∪ I_k^p) = ((RI_r∖ I_k)∪ (I_k^p∖{p})∪{p}),where the last equality uses the fact that p∈ I_k^p from condition <ref>. Because any closure operatorsatisfies (B∪ C) = (B∪ C)for any two sets B,C⊆ M,we may rewrite the expression further to(RI_r') = ((RI_r∖ I_k)∪(I_k^p∖{p})∪{p}).By condition <ref> (I_k^p∖{p})=(I_k), which reduces the equality to:(RI_r') =((RI_r∖ I_k)∪(I_k)∪{p}).Using (<ref>) again, we obtain(RI_r')= ((RI_r∖ I_k)∪ I_k∪{p})Since I_k⊆ RI_r, Equation (<ref>) follows.Using the fact that I_k⊆ RI_r,we are now ready to verify that RI_r' is a rainbow independent subsequence with |RI_r'|=|RI_r|+1. * |RI_r'|=|RI_r|+1: |RI_r'| = |(RI_r∖ I_k)∪ I_k^p|. Because RI_r is rainbow, condition[c(I_k^p)= c(I_k) ∪{c_k^p}, for some c_k^p∈ K_k∖ c(RI_r)⊆ K_k∖ c(I_k)] <ref> implies that the sequences RI_r∖ I_k and I_k^p do not share any color. In particular, they are disjoint and |RI_r'|=|RI_r∖ I_k| + |I_k^p|. Since |I_k^p|=|I_k|+1 (condition <ref>), |RI_r'| = |RI_r∖ I_k| + |I_k| + 1. Because I_k⊆ RI_r, we have |RI_r'|= |RI_r| + 1.* RI_r' is rainbow: I_k^p contains one element of color that is not used in RI_r, otherwise it uses the same colors as I_k. Because RI_r'=(RI_r∖ I_k)∪ I_k^p, we see that P'_r uses exactly |RI_r|+1 colors. This, together with the previous item, yields that P'_r is rainbow.* RI_r' is independent: From the equality (<ref>) we get (RI_r') = (RI_r ∪{p}). Moreover, we have chosen a point p which satisfies p∉(RI_r), so (RI_r') =RI_r + 1. Since RI_r was independent and RI_r' has exactly one element more, the independence of RI_r' follows.Let RI_r” be an inclusion maximal independent rainbow subsequence of S that contains RI_r'. We may now start our algorithm again but this time we replace the maximal independent rainbow subset RI_r by RI_r”. We have decreased the quantity (m- C_r) and preserved m and r. 
By induction we obtain the desired disjoint rainbow subsequences S_1,…, S_r with ∅⊊ S_1 ⊆ S_2⊆…⊆ S_r. §.§.§ Case <ref>: C_K_k⊆(RI_r) and C_K_k⊈(I_k)In this case, we show how to construct sets K_k+1, I_k+1 andfor every p∈ C_K_k+1 with p∉(I_k+1) we construct a subsequence I_k+1^p.We choose I_k+1 to be any inclusion minimal subsequence I_k+1⊆ RI_r satisfying C_K_k⊆ I_k+1.Because we assume that C_K_k⊆(RI_r), such set I_k+1 does exist. We further define K_k+1:=K_k∪ c(I_k+1). Before we construct I_k+1^p, we prove the following auxiliary claim:I_k⊊ I_k+1and I_k⊊ I_k+1.By condition <ref>, I_k⊊ C_K_k. By Eq. (<ref>), we have I_k ⊆ I_k+1. By construction both I_k and I_k+1 are subsequences of the independent sequenceRI_r which together with the preceding yields I_k⊆ I_k+1.Condition <ref> and the fact that we are in case <ref> yields I_k⊆ C_K_k⊈ I_k. Since also C_K_k⊆ I_k+1, we see that I_k+1≠ I_k and I_k+1≠ I_k. Now we construct sets I_k+1^p for all points p∈ C_K_k+1 satisfying p∉ I_k+1. Let p be such a point. By definition of I_k+1, C_K_k⊆ I_k+1, so p cannot lie in C_K_k. Equation (<ref>) implies c(p)∈(K_k+1∖ K_k) ⊆ c(I_k+1).Because I_k+1⊆ RI_r is a rainbow set[RI_r is rainbow!], there exists a unique element r∈ I_k+1 with c(r)=c(p). Since we assume p∉ C_K_k, we have c(r)=c(p)∉ K_k⊇ c(I_k), where the last inclusion follows from condition <ref>. In particular, c(r)∉ c(I_k), hencer∈ I_k+1∖ I_k.Since I_k+1 is an inclusion minimal subsequence of RI_r for which C_K_k⊆ I_k+1, there exists an element q∈ C_K_k such thatq∉(I_k+1∖{r}).Since q∈ C_K_k⊆ I_k+1, the exchange principle implies r∈((I_k+1∖{r})∪{q}).It easily follows thatI_k+1 = ((I_k+1∖{r})∪{q}). Claim <ref> together with (<ref>) imply that I_k⊆ I_k+1∖{r}. Since q was chosen to satisfy q∉(I_k+1∖{r}), we have q∉ I_k as well. Together with q∈ C_K_k, this implies that I_k^q is defined. We set[We note that I_k+1^p does depend on the choice of q, i.e., if we choose another q∈ C_K_k that satisfies q∉(I_k+1∖{r}), we obtain a different set I_k+1^p.]I_k+1^p:=I_k+1∖(I_k∪{r})∪ I_k^q∪{p}.It remains to show that our assignment satisfies conditions <ref>–<ref>. * Condition <ref>: By (<ref>), we have c(I_k+1)⊆ K_k+1. Condition <ref> implies that K_k contains a color that is not used in RI_r and since I_k+1⊆ RI_r, which together with (<ref>) yields K_k+1≠ c(I_k+1). Condition <ref> follows.* Condition <ref>: Condition <ref> states that c(I_k^q)=c(I_k) ∪{c_k^q} for some c_k^q∈ K_k∖ RI_r, in particular c(I_k)⊆ c(I_k^q).Together with the fact that elements p and r have the same color (c(p)=c(r)), (<ref>) yieldsc(I_k+1^p) = c(I_k+1∖ I_k)∪ c(I_k^q). If we now apply condition <ref> for I_k^q and Claim <ref>, we see that c(I_k+1^p) = c(I_k+1) ∪{c_k+1^p}, where c_k+1^p = c_k^q. Note that K_k⊆ K_k+1, hence c_k+1^p∈ K_k+1∖ c(RI_r). Condition <ref> follows. * Condition <ref>: By definition I_k+1^p = I_k+1∖(I_k∪{r})∪ I_k^q ∪{p}. Because I_k+1 is a subset of the rainbow set RI_r, I_k+1 is itself rainbow. Together with c(I_k^q) = c(I_k) ∪{c_k^q}, where c_k^q∉ c(RI_r)⊇ c(I_k+1), this implies that the sets I_k+1∖ I_k and I_k^q are disjoint.Since r∈ I_k+1∖ I_k (Equation (<ref>)), c(p)=c(r)∈ c(I_k+1)∖ c(I_k) and c(I_k^q)∩ c(RI_r)=c(I_k) (conditions <ref> and <ref>), we have p,r∉ I_k^q and p,r∉ I_k. From p∉ I_k+1 follows p∉ I_k+1. Since r∈ I_k+1, we have |I_k+1^p| = |I_k+1∖ I_k| - |{r}| + |{p}| + |I_k^q| = |I_k+1∖ I_k| + |I_k| + 1, where the last equality uses the induction hypothesis for k. Claim <ref> then yields |I_k+1^p|=|I_k+1| + 1 as desired. 
* Condition <ref>: By definition (<ref>), p ∈ I_k+1^p, so we only need to verify that cl(I_k+1^p ∖ {p}) = cl I_k+1. Let us compute. Using the fact that q ∈ I_k^q from condition <ref> and (<ref>), we may rewrite cl(I_k+1^p ∖ {p}) as follows:

cl(I_k+1^p ∖ {p}) = cl((I_k+1 ∖ (I_k ∪ {r})) ∪ I_k^q) = cl((I_k+1 ∖ (I_k ∪ {r})) ∪ ((I_k^q ∖ {q}) ∪ {q})) = cl((I_k+1 ∖ (I_k ∪ {r})) ∪ cl(I_k^q ∖ {q}) ∪ {q}).

Now we use condition <ref> for k (cl(I_k^q ∖ {q}) = cl I_k). We obtain

cl(I_k+1^p ∖ {p}) = cl((I_k+1 ∖ (I_k ∪ {r})) ∪ cl(I_k) ∪ {q}) = cl((I_k+1 ∖ {r}) ∪ {q}) = cl I_k+1,

where the last equality follows from (<ref>).

* Condition <ref>: By definition, K_k+1 = K_k ∪ c(I_k+1). This implies C_K_k+1 = C_K_k ∪ C_c(I_k+1). Hence RI_r ∩ C_K_k+1 = (RI_r ∩ C_K_k) ∪ (RI_r ∩ C_c(I_k+1)). By the induction assumption, RI_r ∩ C_K_k = I_k. Because RI_r ⊇ I_k+1 is rainbow, RI_r ∩ C_c(I_k+1) = I_k+1. Claim <ref> then implies RI_r ∩ C_K_k+1 = I_k+1, as desired. Because K_k ⊈ c(RI_r) and K_k ⊆ K_k+1, we have K_k+1 ⊈ c(RI_r) as well.

It follows that we may increase k and continue in the loop. In each step of the cycle we either terminate and output the desired subsequences, or we construct a sequence I_k+1 whose rank is strictly larger than the rank of I_k (Claim <ref>). Since the rank of I_k+1 is bounded from above by rk(M), it follows that the loop terminates after at most rk(M) iterations. Verifying that all the other steps can be done in time polynomial in r, m, u and |S|, and that they are repeated only a polynomial number of times, is easy.

§ OPEN PROBLEMS

Rota's basis conjecture <cit.> is a well known problem in matroid theory which has a close connection to our colorful matroidal Tverberg theorem. Let us restate it so that the similarity is clearly visible.

Let M be a matroid of rank m. Let S be a sequence of m^2 elements colored by m colors such that the points of each color form a basis. Do there always exist m pairwise disjoint rainbow subsequences S_1, …, S_m of S with cl S_1 = cl S_2 = … = cl S_m = M?

In its full generality the conjecture has only been verified for m = 1, 2, 3 <cit.>. The conjecture is also known to be true in several special cases <cit.>. The proof of Theorem <ref> indicates the difficulties that appear if one tries to prove Rota's basis conjecture purely combinatorially.

[BGR15] I. Bárány, G. Kalai, and R. Meshulam. A Tverberg type theorem for matroids. ArXiv e-prints, 2015. Available online at <http://arxiv.org/abs/1607.01599>.
[BMZ15] P. V. M. Blagojević, B. Matschke, and G. M. Ziegler. Optimal bounds for the colored Tverberg problem. J. Eur. Math. Soc., 17(4):739–754, 2015.
[Cha95] W. Chan. An exchange property of matroid. Discrete Mathematics, 146(1):299–302, 1995.
[dL02] M. de Longueville. Erratum to: “Notes on the topological Tverberg theorem”. Discrete Math., 247(1-3):271–297, 2002.
[Fri15] F. Frick. Counterexamples to the topological Tverberg conjecture. ArXiv e-prints, 2015. Available online at <http://arxiv.org/abs/1502.00947>.
[GH06] J. Geelen and P. J. Humphries. Rota's basis conjecture for paving matroids. SIAM J. Discrete Math., 20(4):1042–1045, 2006.
[Gly10] D. G. Glynn. The conjectures of Alon–Tarsi and Rota in dimension prime minus one. SIAM J. Discrete Math., 24(2):394–399, 2010.
[GMP+15] X. Goaoc, I. Mabillard, P. Paták, Z. Patáková, M. Tancer, and U. Wagner. On generalized Heawood inequalities for manifolds: a Van Kampen-Flores-type nonembeddability result. Extended abstract in Proceedings of SoCG'15, 2015.
[GMP+16] X. Goaoc, I. Mabillard, P. Paták, Z. Patáková, M. Tancer, and U. Wagner.
On generalized Heawood inequalities for manifolds: a Van Kampen-Flores-type nonembeddability result. Preprint on arXiv, 2016.
[HR94] R. Huang and G.-C. Rota. On the relations of various conjectures on Latin squares and straightening coefficients. Discrete Mathematics, 128(1):225–236, 1994.
[MW14] I. Mabillard and U. Wagner. Eliminating Tverberg points, I. An analogue of the Whitney trick. In Proceedings of the Thirtieth Annual Symposium on Computational Geometry (New York, NY, USA), SOCG'14, ACM, pages 171–180, 2014.
[MW15] I. Mabillard and U. Wagner. Eliminating higher-multiplicity intersections, III. Codimension 2. Preprint on arXiv:1601.00876, 2015.
[Onn97] S. Onn. A colorful determinantal identity, a conjecture of Rota, and Latin squares. Amer. Math. Monthly, 104(2):156–159, February 1997.
[Öza87] M. Özaydin. Equivariant maps for the symmetric group. Unpublished manuscript, 1987. Available online at <http://digital.library.wisc.edu/1793/63829>.
[Pat15] P. Paták. Using algebra in geometry. PhD thesis, Charles University, 2015. Available online at <http://kam.mff.cuni.cz/~patak/thesis.pdf>.
[Sar00] K. S. Sarkaria. Tverberg partitions and Borsuk-Ulam theorems. Pacific J. Math., 196(1):231–241, 2000.
[Tve66] H. Tverberg. A generalization of Radon's theorem. J. London Math. Soc., 41:123–128, 1966.
http://arxiv.org/abs/1702.08170v2
{ "authors": [ "Pavel Paták" ], "categories": [ "math.CO", "05B35, 51D20" ], "primary_category": "math.CO", "published": "20170227075829", "title": "Tverberg type theorems for matroids" }
On Fienup Methods for Regularized Phase Retrieval
Edouard Pauwels, Amir Beck, Yonina C. Eldar, Fellow, IEEE, Shoham Sabach

The work of Edouard Pauwels was partially supported by the Air Force Office of Scientific Research grant number FA9550-15-1-0500. The research of Amir Beck was partially supported by the Israel Science Foundation Grant 1821/16. Edouard Pauwels is with the Informatics department (IRIT), Université Toulouse 3 Paul Sabatier, Toulouse 31062, France (e-mail epauwels@irit.fr). A. Beck is with the department of Industrial Engineering, Technion–Israel Institute of Technology, Haifa, Israel 32000 (e-mail: becka@ie.technion.ac.il). Y. C. Eldar is with the department of Electrical Engineering, Technion–Israel Institute of Technology, Haifa, Israel 32000 (e-mail: yonina@ee.technion.ac.il). S. Sabach is with the department of Industrial Engineering, Technion–Israel Institute of Technology, Haifa, Israel 32000 (e-mail: ssabach@ie.technion.ac.il).

Draft of December 30, 2023
==========================================================================================================================================================================================================

Alternating minimization, or Fienup methods, have a long history in phase retrieval. We provide new insights related to the empirical and theoretical analysis of these algorithms when used with Fourier measurements and combined with convex priors. In particular, we show that Fienup methods can be viewed as performing alternating minimization on a regularized nonconvex least-squares problem with respect to amplitude measurements. We then prove that under mild additional structural assumptions on the prior (semi-algebraicity), the sequence of signal estimates has a smooth convergent behaviour towards a critical point of the nonconvex regularized least-squares objective. Finally, we propose an extension to Fienup techniques, based on a projected gradient descent interpretation and acceleration using inertial terms. We demonstrate experimentally that this modification combined with an ℓ_1 prior constitutes a competitive approach for sparse phase retrieval.

§ INTRODUCTION

Phase retrieval is an old and fundamental problem in a variety of areas within engineering and physics <cit.>. Many applications of the phase retrieval problem involve estimation of a signal from the modulus of its Fourier measurements. This problem is ill posed in general, so that uniqueness and recovery typically require prior knowledge on the input, particularly in one-dimensional problems.
Here we focus on the estimation of real sparse signals from their Fourier magnitude, a problem which has been treated in several recent works <cit.>.

A longstanding line of algorithms for tackling the phase retrieval problem involves application of the alternating minimization method, which alternates between the constraints in time and the Fourier magnitude constraints <cit.>. These methods were pioneered by the work of Gerchberg and Saxton and later extended by Fienup; see <cit.> for an optimization point of view on these techniques and a rich historical perspective. Alternating minimization approaches have also been recently applied to phase retrieval from random measurements <cit.>. The main advantage of this class of algorithms is their simplicity and scalability.

A more recent approach to phase retrieval is to formulate the recovery as a smooth nonconvex least-squares estimation problem and use dedicated techniques to estimate the signal using continuous optimization algorithms that guarantee convergence to stationary points. The GESPAR algorithm <cit.> is an example of this approach which is based on the Gauss-Newton method coupled with sparsity priors. For phase retrieval with random measurements, gradient descent methods have been proposed and analyzed, such as Wirtinger flow <cit.> and truncated amplitude flow <cit.>. Both treat least-squares objectives, where Wirtinger flow measures the loss with respect to the squared-magnitude of the measurements while the amplitude flow approach performs a truncated gradient descent on an amplitude objective. Another line of work suggests the use of matrix lifting and semidefinite programming based relaxations <cit.>. These techniques are limited by the size of problems that can be tackled using available numerical solvers.

Our main contribution is to propose a new look at alternating minimization algorithms for phase retrieval in the context of Fourier measurements and convex priors. We refer collectively to these techniques as Fienup methods. The use of Fourier measurements is less flexible than general measurements and is less suited for statistical analysis. On the other hand, the Fourier transform has very strong structure which allows for richer algorithmic constructions and analysis. As a first step we provide two new interpretations of Fienup algorithms. First, we show that these techniques are naturally linked to a nonsmooth nonconvex least-squares problem with respect to an amplitude objective. Fienup approaches can then be understood as majorization-minimization methods for solving this problem. Second, we demonstrate that Fienup algorithms can be viewed as a projected gradient descent scheme to minimize a smooth convex objective function over a nonconvex constraint set. This observation allows us to characterize the behaviour of the algorithm and to develop extensions based on known ideas for accelerating gradient methods using inertial terms <cit.>. We then specialize these results to the case of ℓ_0 and ℓ_1 priors, leading to a new inertial gradient scheme, which we refer to as FISTAPH: FISTA for PHase retrieval.

On the theoretical side, we show that if the convex prior is well structured (semi-algebraic or more generally representable), then the sequence of signal estimates produced by Fienup has a smooth convergence behaviour. Recall that, broadly speaking, an object is said to be semi-algebraic if it can be represented by systems of polynomial inequalities.
The notion of smooth convergence is a very desirable property, all the more so in nonconvex settings where it is usually not possible to obtain global convergence estimates. The convergence analysis follows well established techniques from tame optimization <cit.>. These techniques build upon the Kurdyka-Łojasiewicz (KL) property which holds for many classes of functions <cit.>. We then provide numerical experiments based on synthetic problems to compare Fienup with ℓ_0 and ℓ_1 priors, GESPAR <cit.>, Wirtinger flow (or gradient) methods <cit.> with ℓ_0 and ℓ_1 priors, and FISTAPH. Numerical results suggest that the latter combined with an ℓ_1 prior constitutes a very competitive alternative for sparse phase retrieval.

The rest of the paper is organized as follows. Section <ref> introduces our notation and states the problem of interest more formally. We also introduce several mathematical definitions that are required for the rest of the paper and review the numerical algorithms that are used in subsequent sections. Section <ref> describes our characterization of Fienup methods in the context of phase retrieval from Fourier measurements with convex priors. We detail the relation of Fienup with a nonsmooth nonconvex least-squares problem as well as its interpretation as projected gradient descent. Our main convergence result and our new FISTAPH algorithm are presented in Section <ref>. Simulation results are provided in Section <ref>.

§ PROBLEM FORMULATION AND MATHEMATICAL BACKGROUND

§.§ Notation

Throughout the paper vectors are denoted by boldface letters. For a vector x ∈ ℝ^n, x[i] is the i-th entry of x, i = 1, 2, …, n, and supp(x) is the support of x, namely, the set { i = 1, 2, …, n ; x[i] ≠ 0 }. Furthermore, ‖x‖_0 counts the number of nonzero entries of the vector x: ‖x‖_0 = |supp(x)|, and ‖x‖_p denotes the ℓ_p norm of x for p ∈ ℝ_+. The notations |·|, Re(·), Im(·) and conjugation describe the modulus, real part, imaginary part and complex conjugate, respectively, defined over the field of complex numbers. If their argument is a vector, then they should be understood component-wise. Similarly, basic operations, e.g. powers, are taken component-wise when applied to vectors. For x ∈ ℝ^n and N ∈ ℕ, ℱ(x, N) ∈ ℂ^N is the vector composed of the N first coefficients of the discrete Fourier transform of x (obtained by zero padding if n < N). For simplicity, we use the shorthand notation ℱ(x) = ℱ(x, n) to denote the standard discrete Fourier transform of x ∈ ℝ^n and ℱ^-1 to denote its inverse. For a set S, δ_S : ℝ^n → ℝ ∪ {+∞} is the indicator function of S (0 if its argument is in S, +∞ otherwise) and P_S denotes the Euclidean orthogonal projection onto the set S.

§.§ Phase Retrieval

Given x_0 ∈ ℝ^n, we consider the data acquisition process

y = |ℱ(x_0)| + w,

where w ∈ ℝ^n is an unknown vector of errors. In the rest of the paper, we actually assume that y has positive entries (it is always possible to set the potential negative entries of y to zero). The phase retrieval problem consists of producing an estimate x ∈ ℝ^n of x_0 based solely on the knowledge of y given by (<ref>).

As mentioned in the introduction, phase retrieval of one-dimensional vectors from Fourier measurements requires the use of prior knowledge. We focus on support and sparsity inducing priors. For J ⊆ { 1, 2, …, n }, we define the set X_J = { x ∈ ℝ^n ; supp(x) ⊆ J }. The prior function that we use will be denoted by g : ℝ^n → ℝ ∪ {+∞}. We focus on the following priors (for a given J):

* g : x ↦ ‖x‖_0 + δ_X_J(x), or ℓ_0-based nonconvex prior.
* g : x ↦ ‖x‖_1 + δ_X_J(x), or ℓ_1-based convex prior.

In the experimental section, we compare between these two classes of priors.
The algorithmic derivations in this paper will be made under the assumption that g is proper and lower semicontinuous, and the main convergence result (cf. Theorem <ref>) will require in addition convexity of g. In order to efficiently implement the proposed algorithm, we need to focus on priors for which the proximity operator <cit.> is easy to compute. We provide several examples of such priors in Section <ref>.

In the rest of the paper, y ∈ ℝ^n_+ denotes modulus measurements which are assumed to be given, fixed and obtained through (<ref>). Given y ∈ ℝ^n_+, we define Z_y = { z ∈ ℂ^n ; |ℱ(z)| = y } as the set of values that could have produced y (ignoring the noise). To estimate x_0, we consider the regularized least-squares problem

min_{x ∈ ℝ^n, z ∈ Z_y} (1/2) ‖x - z‖_2^2 + g(x),

where g encodes our prior knowledge. Our algorithmic approach consists of employing an alternating minimization method, or one of its variants, to solve the above formulation.

§.§ Prior Algorithms for Phase Retrieval

We briefly review several existing algorithms for phase retrieval that will be used in our experiments in Section <ref>. One approach to sparse phase retrieval is the GESPAR algorithm, which is based on the damped Gauss-Newton method in conjunction with an ℓ_0 prior <cit.>. Damped Gauss-Newton allows to solve smooth, nonlinear least-squares problems. The work of <cit.> is based on the notion of Wirtinger derivatives to treat the same smooth least-squares problem as GESPAR. The notion of Wirtinger derivative is needed since the objective is not differentiable (holomorphic) as a function of complex variables (see <cit.> for details). In the case of real valued functions of real variables, the Wirtinger derivative reduces to a standard gradient (up to a constant multiplicative factor). An obvious extension of these types of methods is the use of proximal decomposition, or forward-backward methods, which consist of alternating a gradient step on the smooth part of the objective with a proximal step on the nonsmooth part <cit.>. This is the approach that we use in the numerical experiments to treat phase retrieval with priors.

Finally, we consider alternating minimization methods that are the main focus of this work. This approach consists of solving (<ref>) by applying the alternating minimization algorithm. The special structure of the problem allows to perform each partial minimization efficiently. In particular, the projection onto Z_y is easy, as described below in (<ref>). These types of methods are also referred to as Fienup algorithms. A deeper interpretation of this approach is given in Section <ref>.

§.§ Tools from Convex and Nonsmooth Analysis

Throughout the paper, our results will be based on tools from convex and nonsmooth analysis which we review here. The gradient of a differentiable function f is denoted by ∇f. This concept admits extensions to nonsmooth analysis; the subgradient of a nonsmooth function g is denoted by ∂g. For convex functions, subgradients correspond to tangent affine lower bounds. This definition no longer holds for nonconvex functions. In this case, the proper understanding of subgradients involves much more machinery which will not be discussed here. We only consider the notion of a Fréchet critical point, which generalizes classical first order criticality for differentiable functions (see <cit.>). Let S ⊆ ℝ^n be a closed set and f : ℝ^n → ℝ be a lower semicontinuous function. We say that x̄ ∈ S is a Fréchet critical point of the problem min_{x ∈ S} f(x) if

liminf_{x → x̄, x ≠ x̄, x ∈ S}  ( f(x) - f(x̄) ) / ‖x - x̄‖  ≥ 0.
In other words, the negative variations of f in S around x̄ are negligible at the first order. We will also heavily use the notion of the proximity operator of a function. For a nonsmooth function g : ℝ^n → ℝ ∪ {+∞}, the (potentially multivalued) proximity operator is denoted by prox_g and defined by

prox_g(y) ≡ argmin_{x ∈ ℝ^n} { (1/2) ‖x - y‖_2^2 + g(x) }.

Note that when g is proper, lower semicontinuous and convex, this operator is single valued. We next provide a few examples of such functions with their proximity operators; many more can be found, for example, in <cit.>.

[Proximity operators]
* Support prior: If C ⊆ ℝ^n is a closed convex set, then prox_{δ_C} is the Euclidean projection onto C. This can be used for example to encode knowledge about the support of the signal x_0 by choosing C = X_J for some J ⊆ { 1, 2, …, n }. In this case, the projection simply consists in setting the coefficients x[i] to 0 for i ∉ J.
* Sparsity prior: If g is the ℓ_1 norm, then the proximal operator is the soft thresholding operator.[The soft thresholding operator is given by 𝒯_α(x)_i = sgn(x_i) max{ |x_i| - α, 0 }. If g(x) = λ‖x‖_1 for some λ > 0, then prox_g(x) = 𝒯_λ(x).] This can be combined with the support information prior by first setting the coefficients outside of the support to 0 and then applying the soft thresholding operator.
* Change of basis: Suppose that D is an n × n' real matrix such that its columns form an orthonormal family, that is, D^T D is the identity in ℝ^{n'}. Suppose that g̃ : ℝ^{n'} → ℝ is a lower semicontinuous convex function and let g(x) = g̃(D^T x). In this case, we have prox_g(x) = x + D( prox_g̃(D^T x) - D^T x ) (see <cit.>). This allows to express priors in different orthonormal bases, such as wavelets for example.

It is also worth mentioning that the proximity operator is efficiently computable for some nonconvex priors. For example, if g = δ_C where C = { x ∈ ℝ^n ; ‖x‖_0 ≤ k }, then the proximity operator is obtained by setting the n-k lowest coefficients (in absolute value) to 0. This can also be combined with support information.

§ FIENUP, MAJORIZATION-MINIMIZATION AND PROJECTED GRADIENT

In this section we expand on the alternating minimization approach to (<ref>) leading to the Fienup family of algorithms. For this section, the prior term g in (<ref>) is taken to be a general proper lower semicontinuous function. We begin by describing the algorithm and then provide two interpretations of it.

§.§ Alternating Minimization Algorithm

The alternating minimization algorithm applied to problem (<ref>) is explicitly written below.

Alternating Minimization (Fienup)
Initialization. x^0 ∈ ℝ^n.
General Step. For k ∈ ℕ,
z^{k+1} ∈ argmin_{z ∈ Z_y} (1/2) ‖x^k - z‖_2^2,
x^{k+1} ∈ argmin_{x ∈ ℝ^n} (1/2) ‖x - z^{k+1}‖_2^2 + g(x).

The main interest in this scheme is that both partial minimization steps in (<ref>) can be carried out efficiently whenever g is "proximable", meaning that its prox (or a member in its prox) is easily computed. First consider, in (<ref>), the partial minimization in z with x ∈ ℝ^n being arbitrary but fixed. This minimization amounts to computing P_Z_y(x^k), the orthogonal projection of x^k onto Z_y. For a given x ∈ ℝ^n, all the members of P_Z_y(x) are of the form z = ℱ^-1(u), where for j = 1, 2, …, n, we have (i = √(-1) in the equation below)

u[j] = y[j] ℱ(x)[j] / |ℱ(x)[j]|,  if |ℱ(x)[j]| ≠ 0,
u[j] = y[j] e^{iθ_j},  for an arbitrary θ_j, otherwise.

Next, we treat the subproblem in (<ref>) of minimizing with respect to x, where z ∈ ℂ^n is arbitrary but fixed. The partial minimization in x is given by the expression

argmin_{x ∈ ℝ^n} { (1/2) ‖x - z‖_2^2 + g(x) } = prox_g( Re(z) ),

where Re is the real part taken component-wise. We have used the definition of the proximity operator of g given in (<ref>).
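For concreteness, the following is a minimal NumPy sketch of these two partial minimization steps, assuming the unnormalized DFT convention of numpy.fft, the arbitrary choice θ_j = 0 for vanishing Fourier coefficients, and an ℓ_1 prior so that prox_g is soft thresholding; the function names and the parameter lam are illustrative choices of this sketch, not taken from the paper.

import numpy as np

def project_Zy(x, y):
    # Orthogonal projection of x onto Z_y = {z : |F(z)| = y}:
    # keep the phase of F(x), replace its modulus by y, and use
    # phase 0 (an arbitrary valid choice) where F(x) vanishes.
    Fx = np.fft.fft(x)
    phase = np.ones_like(Fx)
    nz = np.abs(Fx) > 0
    phase[nz] = Fx[nz] / np.abs(Fx[nz])
    return np.fft.ifft(y * phase)

def prox_l1(v, lam):
    # Soft thresholding: prox of g(x) = lam * ||x||_1.
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def am_round(x, y, lam):
    # One round of alternating minimization: z-step, then x-step.
    z = project_Zy(x, y)
    return prox_l1(z.real, lam)

Any other proximable prior can be substituted for prox_l1 without changing the projection step.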
When this operator is easy to compute, each step of the algorithm can be carried out efficiently. The iterations of the alternating minimization method are summarized as follows:

z^{k+1} ∈ P_Z_y( prox_g( Re(z^k) ) ),
x^{k+1} ∈ prox_g( Re( P_Z_y(x^k) ) ).

We now consider several special cases of (<ref>):

* If g = 0, then prox_g is the identity and we recover the original algorithm from Fienup <cit.>, or alternating projection <cit.>.

Fienup
Initialization. x^0 ∈ ℝ^n.
General Step. For k ∈ ℕ, x^{k+1} = Re( P_Z_y(x^k) ).

The convergence result given in Theorem <ref> also holds in this case since constant functions are convex and continuous.

* If g(x) = λ‖x‖_1 for some λ > 0, then prox_g = 𝒯_λ, where 𝒯_λ is the soft thresholding operator (see footnote on page footnote_soft). We refer to the resulting algorithm as "AM L1".

AM L1
Initialization. x^0 ∈ ℝ^n, λ > 0.
General Step. For k ∈ ℕ, x^{k+1} = 𝒯_λ( Re( P_Z_y(x^k) ) ).

* If g = δ_{C_K}, where C_K is the set of all K-sparse vectors, C_K = { x ∈ ℝ^n : ‖x‖_0 ≤ K }, then prox_g = P_{C_K} is the so-called hard thresholding operator. This operator outputs a vector which is all zeros except for the largest K components (in absolute values) of its input vector, which are kept the same. The hard thresholding operator is multivalued, and the resulting algorithm, which we term "AM L0", picks an arbitrary point in its range.

AM L0
Initialization. x^0 ∈ ℝ^n, K ∈ ℕ.
General Step. For k ∈ ℕ, x^{k+1} ∈ P_{C_K}( Re( P_Z_y(x^k) ) ).

§.§ Majorization-Minimization Interpretation

In this section, we focus on partial minimization in z. We show that the value of this partial minimization leads to a least-squares objective. This allows us to interpret the Fienup algorithm as a majorization-minimization process on this least-squares function. For the rest of this section, for any x ∈ ℝ^n, we denote by z(x) an arbitrary but fixed member of P_Z_y(x).

§.§.§ Partial Minimization in z

The following lemma provides a connection between partial minimization in z and the evaluation of a nonsmooth least-squares objective.

For any x ∈ ℝ^n, we have

min_{z ∈ Z_y} (1/2) ‖x - z‖_2^2 = (1/2n) ‖ |ℱ(x)| - y ‖_2^2.

An optimal solution of the minimization problem is given by z = ℱ^-1(u), where u has the form (<ref>). Now,

min_{z ∈ Z_y} (1/2) ‖x - z‖_2^2 = (1/2) ‖x - ℱ^-1(u)‖_2^2 = (1/2) ‖ℱ^-1( ℱ(x) - u )‖_2^2 = (1/2n) ‖ℱ(x) - u‖_2^2.

Using the expression of u in (<ref>), we have for all j = 1, 2, …, n,

|ℱ(x)[j] - u[j]| = | |ℱ(x)[j]| - y[j] |,  if |ℱ(x)[j]| ≠ 0,
|ℱ(x)[j] - u[j]| = y[j],  otherwise.

Putting everything together,

min_{z ∈ Z_y} (1/2) ‖x - z‖_2^2 = (1/2n) ‖ℱ(x) - u‖_2^2 = (1/2n) ‖ |ℱ(x)| - y ‖_2^2,

which completes the proof.

A direct consequence of Lemma <ref> is the following corollary that connects between problem (<ref>) and a regularized nonlinear least-squares problem.

The pair (x, z) is an optimal solution of problem (<ref>) if and only if x is an optimal solution of

min { F(x) ≡ (1/2n) ‖ |ℱ(x)| - y ‖_2^2 + g(x) },

and z = ℱ^-1(u), where u is of the form given in (<ref>).

Note that in (<ref>), the least-squares objective is defined with respect to the amplitude |ℱ(x)| and not the magnitude-squared |ℱ(x)|^2. For random measurements, it has been shown in <cit.> that the amplitude objective leads to superior performance over the standard magnitude-squared approach.

§.§.§ Fienup as Majorization-Minimization

In order to understand further the connection with the Fienup algorithm, we define the following auxiliary function:

h(x̄, x) ≡ (1/2) ‖x - z(x̄)‖_2^2 + g(x).

Now, for any x̄ ∈ ℝ^n, using Lemma <ref>, we have the following properties (recalling the definition of F in (<ref>)):

h(x̄, x) = (1/2) ‖x - z(x̄)‖_2^2 + g(x) ≥ (1/2) ‖x - z(x)‖_2^2 + g(x) = F(x),  ∀x ∈ ℝ^n,
h(x̄, x̄) = (1/2) ‖x̄ - z(x̄)‖_2^2 + g(x̄) = F(x̄).

In other words, using the convexity of g, we have that h(x̄, ·) is a 1-strongly convex global upper bound on the objective F.
Computing this upper bound amounts to performing partial minimization over z in (<ref>). Minimizing the upper bound h(x̄, x) in x corresponds to partial minimization over x in (<ref>). The upper bound is tight in the sense that we recover the value of the objective at the current point, h(x, x) = F(x). Therefore the alternating minimization algorithm is actually a majorization-minimization method for the nonsmooth least-squares problem

min_{x ∈ ℝ^n} (1/2n) ‖ |ℱ(x)| - y ‖_2^2 + g(x).

The steps presented in (<ref>) can then be summarized as follows:

x^{k+1} = argmin_x h(x^k, x) = prox_g( Re( z(x^k) ) ) = prox_g( Re( P_Z_y(x^k) ) ),

which is exactly the mapping given in (<ref>).

§.§ Projected Gradient Descent Interpretation

We now provide an additional interpretation of the alternating minimization algorithm as a projected gradient method for an optimization problem related to (<ref>) which consists of a smooth convex objective and a nonconvex constraint set. This interpretation is valid whenever g is assumed to be proper, lower semicontinuous and convex.

For any x ∈ ℝ^n and z ∈ ℂ^n, we can write (<ref>) as ‖x - z‖_2^2 = ‖x - Re(z)‖_2^2 + ‖Im(z)‖_2^2. To move from complex numbers to real numbers, we set z_1 = Re(z) and z_2 = Im(z). Defining a new constraint set Z̃_y = { (z_1, z_2) ∈ ℝ^n × ℝ^n : z_1 + i z_2 ∈ Z_y }, problem (<ref>) can be equivalently rewritten in the form

min_{x ∈ ℝ^n, (z_1, z_2) ∈ Z̃_y} { (1/2) ‖x - z_1‖_2^2 + (1/2) ‖z_2‖_2^2 + g(x) }.

Minimizing first w.r.t. x, (<ref>) reduces to the following minimization problem in z_1, z_2:

min_{(z_1, z_2) ∈ Z̃_y} { H(z_1, z_2) ≡ G(z_1) + (1/2) ‖z_2‖^2 },

where

G(z_1) ≡ min_{x ∈ ℝ^n} { (1/2) ‖z_1 - x‖_2^2 + g(x) }.

The following result allows us to relate the gradient of H to the optimization primitives used in the alternating minimization method.

Assume that g is proper, lower semicontinuous and convex. Then the function H is continuously differentiable, its gradient is 1-Lipschitz and can be expressed as

∇H(z_1, z_2) = ( z_1 - prox_g(z_1), z_2 ).

From Moreau <cit.>, we know that G is differentiable and ∇G(z) = z - prox_g(z) = prox_{g^*}(z), where g^* is the conjugate function of g, which is convex. The computation of the gradient of H is then immediate. We can use the fact that proximity operators of convex functions are nonexpansive <cit.> to verify that ∇H is 1-Lipschitz. Indeed, for any (z_1, z_2) and (w_1, w_2), we have

‖∇H(z_1, z_2) - ∇H(w_1, w_2)‖_2^2 = ‖prox_{g^*}(z_1) - prox_{g^*}(w_1)‖_2^2 + ‖z_2 - w_2‖_2^2 ≤ ‖z_1 - w_1‖_2^2 + ‖z_2 - w_2‖_2^2 = ‖(z_1, z_2) - (w_1, w_2)‖_2^2,

completing the proof.

Consider applying projected gradient descent to solve (<ref>). From Lemma <ref>, we can use a step size of magnitude 1. In this case, taking into account the form of the gradient given in (<ref>), we obtain that the general update step takes the form

(z_1^{k+1}, z_2^{k+1}) = P_Z̃_y( (z_1^k, z_2^k) - ∇H(z_1^k, z_2^k) ) = P_Z̃_y( (z_1^k, z_2^k) - (z_1^k - prox_g(z_1^k), z_2^k) ) = P_Z̃_y( prox_g(z_1^k), 0 ).

We now go back to the complex domain by setting z = z_1 + i z_2. Note that projecting (z_1, z_2) onto Z̃_y is equivalent to projecting z onto Z_y. With this notation, the iterations of projected gradient descent can be summarized by the following iteration mapping (on complex numbers):

z^{k+1} = P_Z_y( prox_g( Re(z^k) ) ),

which is exactly the same as (<ref>). Therefore, the Fienup algorithm is equivalent to projected gradient descent with unit stepsize applied to the formulation (<ref>). Note that from the point of view of nonsmooth analysis, problem (<ref>) is much better behaved than (<ref>).
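As a small illustration of the two equivalent views just derived, the hedged sketch below iterates the AM L1 mapping and monitors the regularized amplitude objective of (<ref>), which the majorization argument guarantees is nonincreasing along the iterates. The helpers repeat the previous snippet so the code is self-contained; the initialization, stopping tolerance and default λ are arbitrary illustrative choices.

import numpy as np

def project_Zy(x, y):
    Fx = np.fft.fft(x)
    phase = np.ones_like(Fx)
    nz = np.abs(Fx) > 0
    phase[nz] = Fx[nz] / np.abs(Fx[nz])
    return np.fft.ifft(y * phase)

def soft(v, lam):
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def amplitude_objective(x, y, lam):
    # F(x) = (1/2n) || |F(x)| - y ||_2^2 + lam * ||x||_1
    n = x.size
    return np.sum((np.abs(np.fft.fft(x)) - y) ** 2) / (2 * n) + lam * np.sum(np.abs(x))

def fienup_l1(y, lam=0.2, max_iter=500, tol=1e-8, seed=0):
    x = np.random.default_rng(seed).standard_normal(y.size)
    prev = amplitude_objective(x, y, lam)
    for _ in range(max_iter):
        x = soft(project_Zy(x, y).real, lam)  # one MM / projected gradient step
        cur = amplitude_objective(x, y, lam)
        if prev - cur < tol:  # objective is monotonically nonincreasing
            break
        prev = cur
    return x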
§ CONSEQUENCES AND EXTENSIONS

The interpretations of Section <ref> can be used to analyze the convergence of alternating minimization applied to problem (<ref>) and to offer extensions of the method.

§.§ Convergence Analysis

Our main convergence result is given in the following theorem. Recall that a function is semi-algebraic if its graph can be defined by combining systems of polynomial equalities and inequalities (for example, the ℓ_1 norm is semi-algebraic).

Assume that g is proper, lower semicontinuous, convex and semi-algebraic. Then the sequence {x^k, z^k}_{k ∈ ℕ} generated by the alternating minimization algorithm satisfies the following:
(i) It holds that ∑_{k ≥ 0} ‖x^{k+1} - x^k‖_2 < +∞ and the sequence {x^k}_{k ∈ ℕ} converges to a point x^* ∈ ℝ^n.
(ii) For any accumulation point z^* of {z^k}_{k ∈ ℕ}, (x^*, z^*) is a Fréchet critical point of problem (<ref>) and (z_1^*, z_2^*) = ( Re(z^*), Im(z^*) ) is a Fréchet critical point of problem (<ref>).

The proof is quite technical and is given in Appendix <ref>. The semi-algebraic assumption on g can be relaxed to representability in o-minimal structures over the real field, see <cit.>. Therefore, the proposed result actually applies to much more general regularizers. For example, using boundedness of the feasible set in (<ref>), the same result holds if g is analytic (see the discussion in <cit.>). The arguments build upon a nonsmooth variant of the celebrated Kurdyka-Łojasiewicz (KL) property <cit.>. Note that direct application of the results of <cit.> to projected gradient descent or the results of <cit.> to majorization minimization is not possible here.

The most important implication of Theorem <ref> is that the sequence of estimated signals converges smoothly to a point which satisfies certain optimality conditions related to problems (<ref>) and (<ref>). This is a departure from standard convergence results that are only able to guarantee that accumulation points of the generated sequence of iterates satisfy certain optimality conditions. It is important to underline that the result is global: it holds for any initialization of the algorithm and does not require any regularity assumption beyond semi-algebraicity and convexity of g. This is in contrast with local convergence results which are typical for alternating projection methods <cit.> that are applicable when the prior term g is an indicator function.

§.§ Acceleration and Momentum Term

A benefit of the interpretation of alternating minimization as a projected gradient method is that it allows to propose new variants inspired by known extensions for projected gradient algorithms. In this section we focus on the incorporation of an inertial term that results in an alternating minimization scheme that includes a momentum term. This line of research has a long history in optimization, starting with the development of the heavy-ball method <cit.>, which inspired an optimal first order scheme for convex optimization developed by Nesterov <cit.>, and its extension to convex composite problems with the FISTA method <cit.>. Although this last technique was proposed and analyzed only in the context of convex optimization, we consider its application in our nonconvex constrained problem since it empirically provides interesting results. The resulting algorithm is referred to as FISTAPH, and is described as follows.

FISTAPH: FISTA for Phase retrieval
Initialization. z^0 ∈ Z_y and α^k ∈ [0, 1) for all k ∈ ℕ. Set u^0 = z^0 and z^{-1} = z^0.
General Step. For k ∈ ℕ,
* z^{k+1} ∈ P_Z_y( prox_g( Re(u^k) ) ).
* u^{k+1} = z^{k+1} + α^k ( z^{k+1} - z^k ).
If z^m is the last produced iterate, then the output of the algorithm is x = prox_g( Re(z^m) ). A typical choice for the weight sequence is α^k = (k-1)/(k+2). The question of the convergence of the iterates produced by this method in nonconvex settings is an interesting topic to explore in future research. We may also further consider monotone variants of similar types of methods, see e.g. <cit.>. In the numerical experiments we employ FISTAPH in the setting where g(x) = λ‖x‖_1 for some λ > 0. In this case, prox_g = 𝒯_λ, with 𝒯_λ being the soft thresholding operator with parameter λ (see footnote on page footnote_soft).

§ EXPERIMENTS AND NUMERICAL RESULTS

In this section, we describe experiments and numerical results comparing the different algorithms introduced in Section <ref> on the task of phase retrieval.

§.§ Experimental Setup

Given measurements y as in (<ref>), our problem consists of finding the corresponding x_0. We focus on the setting in which x_0 is known to be sparse. We vary the signal size n (with J = {1, 2, …, n/2}), the sparsity level K and the signal to noise ratio (SNR). In the following discussion, we will refer to a recovery method ℳ which can be seen as a black box which takes as input a vector of measurements y ∈ ℝ_+^n, support information J, sparsity level K and an initial estimate, and outputs an estimate x̂ ∈ ℝ^n with supp(x̂) ⊆ J and ‖x̂‖_0 ≤ K. One recovery experiment consists of the following:

* Fix a recovery method ℳ, a signal length n, a support information set J = {1, 2, …, n/2}, a sparsity level K and an SNR.
* Generate x_0 ∈ ℝ^n by the following procedure:
  * Choose K coordinates among J uniformly at random.
  * Set these coordinate values at random in [-4, -3] ∪ [3, 4].
  * Set all other coordinates to be 0.
* Generate the measurements y^2 = |ℱ(x_0)|^2 + w, where w is white Gaussian noise according to the chosen SNR. Set negative entries of y^2 to be 0 in order to take the square root.
* Call method ℳ 100 times with data (y, J, K) and randomly generated initial estimates to get 100 candidate solutions {x̂_it}_{it = 1, 2, …, 100}.
* Compute the best estimate x̂_best with best = argmin_{it = 1, 2, …, 100} { ‖ |ℱ(x̂_it)| - y ‖_2^2 }.
* Compare x̂_best and x_0 (modulo Fourier invariances) with the following metric (sign is understood coordinate-wise with sign(0) = 0): recovery(x̂_best, x_0) = 1 if sign(x̂_best) = sign(x_0), and 0 otherwise.

This procedure was repeated 100 times. That is, for each method, signal length, sparsity level and SNR, we have 100 signal recovery experiments, each one associated with a support recovery status. We aggregate these results by considering the recovery probability (average of recovery(x̂_best, x_0)) and the median CPU usage for a single simulation (100 calls to the method with different initialization estimates). We use the same initialization for all methods by careful initialization of random seeds. All the experiments were performed on a desk station with two 3.2 GHz Quad Core Intel Xeon processors and 64GB of RAM.

§.§ Implementation Details

In our numerical implementation, we used the following stopping criteria.

* For alternating minimization and Wirtinger methods: the difference in successive objective values is less than 10^-8.
* For GESPAR: no swap improvement.
* For FISTAPH: the norm of the gradient mapping is less than 10^-8.

The tuning of these criteria allows to balance accuracy and computational time to some extent. The ℓ_1 penalized problem includes a prior sparsity inducing term of the form g(·) = λ‖·‖_1. It is necessary to tune the λ parameter in order to obtain meaningful results. We considered the following strategies for the different methods.
* For alternating minimization, we set λ = 0.2 in all experiments.
* For Wirtinger based methods, λ is tuned a posteriori as a function of n and K. The experiment was conducted for λ = 1, 2.15, 4.64, 10, 21.5, 46.4, 100, 215, 464, and we report only the best experiment for each setting.

An interesting feature of alternating minimization based methods is that, in our experiments, recovery performance was very consistent for different values of λ in different settings. As a result, we chose a single value of λ for all experiments. The tuning of λ for Wirtinger based algorithms is practically much more difficult. In particular, we found that the best λ was a highly dependent function of the sparsity level K.

Finally, we note that ℓ_0 based priors have the sparsity level of the estimate, K, as a parameter. On the other hand, ℓ_1 based priors will not necessarily produce K-sparse estimates. We therefore use truncation and keep the K largest entries in absolute value of the last iterate.

§.§ Numerical Results

The performance in terms of support recovery is presented in Figure <ref>, with the corresponding algorithm run times in Figure <ref>. Each point in these plots is an average over 100 simulations of the recovery process, each simulation consisting of 100 random initializations of the method considered. AM corresponds to Fienup methods with different priors, FISTA is the accelerated variant, and WIRT stands for Wirtinger.

We make the following observations from the numerical results:

* For alternating minimization, there is a consistent increase in recovery performance by switching from ℓ_0 to ℓ_1 based regularization priors.
* The ℓ_1 prior degrades the performance of Wirtinger based methods compared to the ℓ_0 prior.
* FISTAPH consistently provides the best performance and is significantly faster than its competitors.
* Fienup with the ℓ_0 prior leads to lower performance compared to GESPAR, which was already reported in <cit.>.

As described in the experimental section, we added noise on the squared measurements rather than on the measurements themselves. This noise model is closer to the optimization model considered for GESPAR and Wirtinger flow than model (<ref>), which is related to problem (<ref>). We tried to change the noise model on a subset of experiments (additive noise on the measurements rather than on the squared measurements); however, the performance of the different methods was very similar. Therefore, we only report results related to the squared-measurement noise model.

§ CONCLUSION

The main theoretical contribution of this work is to provide a strong theoretical basis to the fact that Fienup-type methods, when used with Fourier transforms and convex priors, lead to smoothly converging sequences of estimates. This result holds under minimal assumptions and, in particular, it holds globally, independently of the initialization point. Furthermore, we characterize the properties of the limiting point as Fréchet critical points of different optimization problems. These results shed light on important properties of one of the most well known algorithms used in the context of phase retrieval. Furthermore, based on an interpretation as a projected gradient method, we proposed a new variant of Fienup with the incorporation of a momentum term, which we call FISTAPH. On the practical side, we demonstrated based on numerical simulations that FISTAPH with ℓ_1 regularization constitutes a very competitive alternative to other methods in the context of sparse phase retrieval.
§ PROOF OF THEOREM <REF>

The proof involves many notions of nonsmooth analysis which can be found in <cit.>. Throughout the proof, we only consider subgradients of subdifferentially regular functions. Each subgradient can be interpreted as a Fréchet subgradient and the subgradient set valued mapping is closed. We adopt the notation of Section <ref>, letting z = z_1 + i z_2 for two real vectors z_1 and z_2, and consider the constraint set Z̃_y = { (z_1, z_2) ∈ ℝ^n × ℝ^n ; z_1 + i z_2 ∈ Z_y }. We let

K(x, z_1, z_2) = (1/2) ‖x - z_1‖_2^2 + (1/2) ‖z_2‖_2^2 + g(x)

be the objective function of problem (<ref>), which with this notation becomes min_{x ∈ ℝ^n, (z_1, z_2) ∈ Z̃_y} K(x, z_1, z_2). We will denote by δ_Z̃_y the indicator function of the set Z̃_y (0 on the set and +∞ outside). We set K̃(x, z_1, z_2) = K(x, z_1, z_2) + δ_Z̃_y(z_1, z_2), so that problem (<ref>) is equivalent to the (unconstrained) minimization of K̃.

Proof of (i): Using <cit.>, the subgradient of this nonsmooth function is of the form

∂K̃(x, z_1, z_2) = ( ∂_x K̃(x, z_1, z_2), ∂_{(z_1, z_2)} K̃(x, z_1, z_2) ) = ( x - z_1 + ∂g(x), (z_1 - x, z_2) + ∂δ_Z̃_y(z_1, z_2) ).

Partial minimization over the iterations yields the following:

0 ∈ x^{k+1} - z_1^k + ∂g(x^{k+1}),
0 ∈ (z_1^k - x^k, z_2^k) + ∂δ_Z̃_y(z_1^k, z_2^k).

Combining these, we have

( 0, (z_1^k - x^{k+1}, z_2^k) + ∂δ_Z̃_y(z_1^k, z_2^k) ) ⊂ ∂K̃(x^{k+1}, z_1^k, z_2^k).

Using (<ref>),

( 0, x^k - x^{k+1}, 0 ) ∈ ∂K̃(x^{k+1}, z_1^k, z_2^k).

Finally, from strong convexity of K̃ with respect to its first argument, we have

K̃(x^{k+1}, z_1^k, z_2^k) + (1/2) ‖x^{k+1} - x^k‖_2^2 ≤ K̃(x^k, z_1^k, z_2^k) ≤ K̃(x^k, z_1^{k-1}, z_2^{k-1}).

Since g is semi-algebraic, K̃ is also semi-algebraic. Any semi-algebraic function satisfies the nonsmooth Kurdyka-Łojasiewicz property <cit.>. We can now use the now well established recipe <cit.> <cit.> with the two conditions (<ref>) and (<ref>) to obtain that the sequence { ‖x^{k+1} - x^k‖_2 }_{k ∈ ℕ} is summable. This proves statement (i) (convergence holds by the Cauchy criterion).

Proof of (ii): Using the fact that K̃ has compact sublevel sets, the sequence { (x^{k+1}, z_1^k, z_2^k) }_{k ∈ ℕ} is bounded and hence has a converging subsequence. We fix an accumulation point (x^*, z_1^*, z_2^*) of the sequence (note that x^* is given by (i)). We remark that, thanks to (<ref>) and the fact that ‖x^{k+1} - x^k‖ → 0, any accumulation point of the sequence is a critical point of K̃. Furthermore, since x^k → x^*, we have using (<ref>) that

-( z_1^* - prox_g(z_1^*), z_2^* ) ∈ ∂δ_Z̃_y(z_1^*, z_2^*).

This is actually the criticality condition for problem (<ref>), which proves statement (ii).
{ "authors": [ "Edouard Pauwels", "Amir Beck", "Yonina C. Eldar", "Shoham Sabach" ], "categories": [ "cs.IT", "math.IT", "math.OC" ], "primary_category": "cs.IT", "published": "20170227155707", "title": "On Fienup Methods for Regularized Phase Retrieval" }
{ "authors": [ "R. P. L. Azevedo", "C. J. A. P. Martins" ], "categories": [ "astro-ph.CO", "gr-qc", "hep-ph", "hep-th" ], "primary_category": "astro-ph.CO", "published": "20170227155117", "title": "Cosmic strings and other topological defects in nonscaling regimes" }
Ratio Utility and Cost Analysis for Privacy Preserving Subspace Projection

Mert Al, Shibiao Wan, Sun-Yuan Kung

With a rapidly increasing number of devices connected to the internet, big data has been applied to various domains of human life. Nevertheless, it has also opened new avenues for breaching users' privacy. Hence, it is highly desirable to develop techniques that enable data owners to privatize their data while keeping it useful for intended applications. Existing methods, however, do not offer enough flexibility for controlling the utility-privacy trade-off and may incur unfavorable results when privacy requirements are high. To tackle these drawbacks, we propose a compressive-privacy based method, namely RUCA (Ratio Utility and Cost Analysis), which can not only maximize performance for a privacy-insensitive classification task but also minimize the ability of any classifier to infer private information from the data. Experimental results on Census and Human Activity Recognition data sets demonstrate that RUCA significantly outperforms existing privacy preserving data projection techniques for a wide range of privacy pricings.

Compressive privacy, Subspace methods, Projection matrix, Principal/Discriminant component analysis

§ INTRODUCTION

With our daily activities moving online, vast amounts of personal information are being collected, stored and shared across the internet, often without the data owner's knowledge. Even when the data owners trust data keepers such as Internet Service Providers and Statistics Bureaus to keep their personal information private, the data often need to be analyzed and released for statistical, commercial and research purposes. This raises obvious concerns about the privacy of data contributors, as the data are vulnerable not only to inadvertent leakage, but also to malicious inference by other parties. Thus privacy-protection methods should be employed that allow data collectors and owners to control the types of information that can be inferred from their data.

Consider a scenario where mobile users upload their sensor readings to the cloud, which in turn trains a classifier that allows smartphones to identify their users from sensor readings in the background as in <cit.>. This approach takes advantage of the vast storage and computation resources of the cloud. However, without proper processing the same data can be used to infer sensitive information about users, such as location, context and activities performed <cit.>. This is especially alarming given the fact that private information about users may be inferred not only by the cloud but possibly by other users as well through classifiers, which may include training samples in them <cit.>.

A number of approaches based on data projection and/or noise addition have been proposed to preserve the statistics of the data for machine learning applications, while making privacy-sensitive information unavailable. Additive noise based randomization was proposed in <cit.>, but was shown to be susceptible to reconstruction attacks using spectral properties of random noise and data <cit.>. Liu et al. <cit.> proposed projection of the data to a lower dimensional space via a Random Projection Matrix. Later, a more suitable system was proposed in <cit.> for collaborative learning, where the cloud trains a classifier with data from multiple users.
Each user randomly generates a hidden Projection Matrix and adds variable levels of noise to projected samples before sending them to the cloud. In <cit.>, Kung presented a supervised version of Principal Component Analysis (PCA), called Discriminant Component Analysis (DCA), in order to project the data onto a lower dimensional space that maximizes the discriminant power, as in Fisher Discriminant Analysis <cit.>. The recent work of Diamantaras and Kung <cit.>, inspired by this approach, introduced another criterion called the Multiclass Discriminant Ratio (MDR), and projects the data based on a pair of desirable and undesirable classification tasks.

Dimension reduction through data projection removes both application-relevant and privacy-sensitive information from the data. DCA and MDR attempt to remove as little application-relevant information as possible by optimizing the projection subspace for the intended classification task. Yet they do not offer any flexibility for finding a favorable trade-off between utility and privacy. To address these problems, we propose a methodology called RUCA (Ratio Utility and Cost Analysis), which forms a bridge between DCA (utility driven projection) and MDR (privacy emphasized projection) and allows data owners to select a compromise between them. RUCA can be considered as a generalization of DCA and MDR, and it can also be extended to multiple privacy-sensitive classifications. Experimental results on Census and Human Activity Recognition data sets show that our methodology can provide better classification accuracies for the desired task while outperforming state-of-the-art privacy preserving data projection methods in terms of accuracies obtained from privacy-sensitive classifications.

Our methodology for privacy preservation is described in Section <ref>, where it is formulated as the problem of maximizing separability of projected data for a desired classification task, while minimizing separability for undesirable classifications. We then present Generalized Eigenvalue Decomposition as a method for finding the optimal Projection Matrix that achieves this task. Our methods are tested on real data with possible utility and privacy classifications in Section <ref> and are compared with other projection based privacy protection methods. Finally, we conclude in Section <ref>.

§ METHODOLOGY

§.§ Problem Statement

For simplicity, we shall assume that there is a single privacy-sensitive classification on the data, though it is straightforward to generalize this to the case where there are multiple privacy-sensitive classifications. We assume that the data of our concern is fully represented by a set of N M-dimensional vectors {x_1, x_2, ⋯, x_N}. For the desired classification, which we name the utility classification, we have a set of labels y_i associated with the vectors x_i. For an undesirable classification, which we name the privacy classification, we have a set of labels s_i associated with the vectors x_i. There are two or more classes for each classification task, i.e. y_i ∈ {1, ⋯, L}, s_i ∈ {1, ⋯, P}, where L and P are the numbers of utility and privacy classes, respectively.

Let W be an M × K projection matrix, where K < M, and let z_i = W^T x_i denote the projection of a vector x_i onto a K-dimensional subspace. Let X denote the M × N matrix whose columns correspond to the data entries x_i and Z denote the K × N matrix whose columns correspond to the projected entries z_i.
Given X, our problem is to find a matrix W such that, given the projected data matrix Z = W^T X:

* A classifier can achieve similar performance on the task of finding the labels {y_1, y_2, ⋯, y_N}, compared to the case where the full data matrix X is given.
* Conversely, any classifier achieves poor performance, ideally as poor as random guessing, on the task of finding the set of labels {s_1, s_2, ⋯, s_N}.

§.§ Projection Method

To achieve the task outlined above, we need to select a subspace such that separability between classes based on the utility labels y_i is maximized, while separability between classes based on the privacy labels s_i is minimized. For utility driven dimension reduction, given the subspace dimension K, DCA <cit.> involves searching for the projection matrix W_DCA ∈ ℝ^{M × K}:

W_DCA = argmax_{W : W^T [S̄ + ρI] W = I} tr(W^T S_B^u W),

where tr(·) is the trace operator and ρ is a small regularization term added for numerical stability. S̄ is the center adjusted scatter matrix:

S̄ = X̄ X̄^T = ∑_{i=1}^N [x_i - μ][x_i - μ]^T,

where μ denotes the mean of the samples {x_i}_{i=1}^N. S̄ is divided into two additive parts:

S̄ = S_B^u + S_W^u,

where S_B^u and S_W^u are the utility between-class and within-class scatter matrices, respectively. These are defined as

S_B^u = ∑_{c=1}^L N_c^u [μ - μ_c^u][μ - μ_c^u]^T,
S_W^u = ∑_{c=1}^L ∑_{y_i = c} [x_i - μ_c^u][x_i - μ_c^u]^T,

where μ_c^u is the mean and N_c^u is the number of samples in utility class c, respectively. The privacy between-class scatter matrix S_B^p can be defined similarly:

S_B^p = ∑_{c=1}^P N_c^p [μ - μ_c^p][μ - μ_c^p]^T,

where μ_c^p is the mean and N_c^p is the number of samples in privacy class c, respectively. The optimal solution to the problem given in Equation <ref> remains the same when S̄ is replaced with S_W^u, due to the relationship given in Equation <ref>. Even though Equation <ref> applies more restrictive orthonormality constraints to the columns of the projection matrix W, the subspace spanned by these columns constitutes an optimal solution for the Multiclass Discriminant Analysis (MDA) criterion <cit.> (with an additional regularization term ρ):

MDA = det(W^T S_B^u W) / det(W^T (S_W^u + ρI) W),

where det(·) is the determinant operator. In addition, an optimal solution to both of these problems can be derived from the first K principal generalized eigenvectors of the matrix pencil (S_B^u, S̄ + ρI).

The Multiclass Discriminant Ratio (MDR) is a natural extension of the MDA criterion to the case where there are two conflicting goals: to maximize separability for a utility classification problem and to minimize separability for a privacy classification problem <cit.>. It is defined as:

MDR = det(W^T S_B^u W) / det(W^T (S_B^p + ρI) W).

Analogous to DCA and MDA, an optimal solution to MDR can be derived from the first K principal generalized eigenvectors of the matrix pencil (S_B^u, S_B^p + ρI). Thus DCA and MDR, barring an orthonormality constraint on the columns of the projection matrix, are very similar and can both be solved via Generalized Eigenvalue Decomposition. We shall add additional parameters to DCA to obtain a compromise between DCA and MDR, which we will call Ratio Utility and Cost Analysis (RUCA):

W_RUCA = argmax_{W : W^T [S_RUCA + ρI] W = I} tr(W^T S_B^u W),

where S_RUCA is a privacy-regularized scatter matrix:

S_RUCA = S̄ + ρ_p S_B^p,

where ρ_p is a privacy parameter different from ρ. Note that when ρ_p is 0, this projection method becomes DCA, and when ρ_p is very large, it becomes MDR as the term ρ_p S_B^p dominates. By varying ρ_p, it is possible to establish a more favorable trade-off between utility and privacy than MDR.
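For illustration, a compact sketch of how the RUCA projection matrix can be obtained from a generalized symmetric eigensolver is given below; the scatter matrices follow the definitions above, while the function names, default parameter values, and the use of scipy.linalg.eigh are implementation choices of this sketch rather than prescriptions of the paper.

import numpy as np
from scipy.linalg import eigh

def between_class_scatter(X, labels):
    # X is M x N (columns are samples); returns S_B for the given labeling.
    mu = X.mean(axis=1, keepdims=True)
    S_B = np.zeros((X.shape[0], X.shape[0]))
    for c in np.unique(labels):
        Xc = X[:, labels == c]
        mu_c = Xc.mean(axis=1, keepdims=True)
        S_B += Xc.shape[1] * (mu - mu_c) @ (mu - mu_c).T
    return S_B

def ruca_projection(X, y_util, s_priv, K, rho=1e-3, rho_p=1.0):
    # First K principal generalized eigenvectors of (S_B^u, S_RUCA + rho*I).
    M = X.shape[0]
    Xc = X - X.mean(axis=1, keepdims=True)
    S_bar = Xc @ Xc.T                        # center adjusted scatter matrix
    S_B_u = between_class_scatter(X, y_util)
    S_B_p = between_class_scatter(X, s_priv)
    S_ruca = S_bar + rho_p * S_B_p           # privacy-regularized scatter
    lam, W = eigh(S_B_u, S_ruca + rho * np.eye(M))  # eigenvalues ascending
    return W[:, np.argsort(lam)[::-1][:K]]   # keep the K largest eigenvalues

Setting rho_p=0 recovers a DCA projection and letting rho_p grow drives the solution toward MDR, mirroring the limiting behaviour described above.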
Additionally, RUCA can be generalized to multiple privacy classifications by including multiple between-class scatter matrices in the regularization:

S_RUCA = S̄ + ∑_i ρ_{p_i} S_{B_i}^p.

Finally, an optimal solution to RUCA can be derived from the first K principal generalized eigenvectors of the matrix pencil (S_B^u, S_RUCA + ρI). In other words, the columns of the projection matrix W correspond to the K largest eigenvalues λ_i satisfying the following relationship:

S_B^u w_i = λ_i (S_RUCA + ρI) w_i.

In all the subspace optimization techniques described above, the left hand side of the characteristic equation remains the same as in Equation <ref>. Due to the fact that the rank of S_B^u is at most L-1, there are at most L-1 non-zero eigenvalues associated with the generalized eigenvalue decompositions. In practice, another small regularization term ρ' may be added to S_B^u to make it full rank, which will allow users to rank the columns of W in cases where K ≥ L. As the columns corresponding to eigenvalues ranked L or lower don't normally contribute to our criteria, they are expected to have little contribution to the effectiveness of the utility classification.

§ EXPERIMENTAL RESULTS

§.§ Data Sets

We have tested our approach with multiple applications on the Census (Adult) and Human Activity Recognition (HAR) <cit.> data sets, both of which are available at the UCI Machine Learning Repository <cit.>. For the Census data set we used Income as the utility classification, where we try to classify an individual as having high or low income, in line with the original purpose of the data set. The privacy classifications were chosen as Marital Status and Gender, both of which were given as categorical features in the original data. We grouped 'Married-civ-spouse', 'Married-spouse-absent' and 'Married-AF-spouse' into a single category called 'Married'. 'Divorced', 'Separated' and 'Widowed' were grouped into a single category called 'Used to be Married'. We left the 'Never Married' category as is. We first removed the samples with missing features in the data set and randomly sampled the rest of the training and testing sets (separately) in order to create two sets in which all privacy classes have equal numbers of samples, i.e. the numbers of males and females were equal in our training and testing sets, and so were the numbers of samples categorized as 'Married', 'Never Married' and 'Used to be Married'. All categorical features were turned into numerical ones via binary encoding, as we determined it to yield higher classification accuracies than one-hot encoding with this data. After these operations we had 10086 samples remaining in the training set and 4962 samples remaining in the testing set, with 29 features.

In the HAR data set, we had Activity and Identity as labels available to us, either of which can be utility or privacy depending on the application. Therefore we tested for both cases. Activity had 6 types of labels: 'Walking', 'Walking Upstairs', 'Walking Downstairs', 'Sitting', 'Standing' and 'Laying'. Identity, on the other hand, had 21 types of labels based on the individuals who contributed to the data. The training and testing sets of the HAR data set consist of samples contributed by two disjoint sets of users. Therefore we extracted testing sets for Identity classification by randomly picking samples from the original training set. When Activity classification was chosen as utility, we tested Activity classification accuracy on the original testing set and Identity classification accuracy on the extracted testing set.
The numbers of training, privacy testing and utility testing samples were 4011, 1890 and 2947, respectively, with 561 features. When Identity classification was chosen as utility, we tested both Identity and Activity classification accuracies on the same testing set, which was extracted from the original training set. The numbers of training and testing samples were 4026 and 1890, respectively, with 561 features. As with the Census data, we kept the number of samples in all privacy classes equal in all sets.

§.§ Results

All our experiments were performed using an RBF SVM on the original and projected data. Training and testing sets were separated as described in the last section before the experiments commenced. With the Census data set we performed 50 iterations, at each of which we randomly picked 10% of the training samples. At each iteration and with each projection method, a 5-fold cross-validation grid search was performed to find the best parameters for training utility and privacy classifiers. With the HAR data set we performed 50 iterations, at each of which we randomly picked 25% of the training samples. Once again, optimal parameters for the RBF SVM were determined via 5-fold cross-validation at each iteration. PCA and Random Projection were also included in our experiments for completeness. In order to compare RUCA's performance with other projection methods, we adopt a simple performance criterion:

Performance = Acc_U + β(1 - Acc_P),

where Acc_U and Acc_P denote the utility and privacy classification accuracies, respectively, and β denotes the Privacy Pricing. A higher β indicates that higher emphasis is placed on privacy, while β = 0 indicates that all the emphasis is placed on utility.

Figure <ref> displays the utility-privacy trade-off curves obtained by progressively adding more components with each projection method. We stopped adding components as they started contributing predominantly to privacy classification. To obtain the results provided in Tables <ref>, <ref> and <ref>, we picked K=1, K=5 and K=20, respectively, because we had L=2, L=6 and L=21 for the Income, Activity and Identity classification problems, respectively.

The curves in Figure <ref>(a) demonstrate a trade-off between utility and privacy as the privacy parameter ρ_p is increased. Even RUCA with a low privacy parameter achieves higher privacy levels than possible with DCA. RUCA with ρ_p=1 outperforms PCA and DCA when β ≥ 0.067, whereas RUCA with ρ_p=4 outperforms MDR and all remaining methods for all privacy pricings. Based on the trade-off curves in Figures <ref>(b) and <ref>(c), RUCA outperforms both DCA and MDR on HAR data for all privacy pricings. Furthermore, RUCA outperforms all other methods in (b) for all privacy pricings, and all other methods in (c) when ρ_p ≥ 0.226. PCA and Random Projection, on the other hand, are seen to under-perform in all plots when the privacy pricing is high. By comparing the curves in (b) and (c), it becomes apparent that Identity classification when Activity is private is much harder than Activity classification when Identity is private on HAR data. The steepness of the drops in (b) suggests that more utility performance can be obtained by sacrificing relatively little privacy, which is not the case in (c).

Results with K=1 for the Census data set are given in Table <ref>. Clearly, DCA alone reduces gender classification accuracy close to random guessing (50%) by sacrificing less than 1% (absolute) utility classification accuracy.
Accordingly, for this application, a nonzero privacy parameter ρ_p was only applied to the between-class scatter matrix of Marital Status classification, and the privacy parameter was kept at 0 for Gender classification. The table demonstrates a clear utility-privacy trade-off as ρ_p is increased, similar to Figure <ref>(a). RUCA outperforms DCA when β ≥ 0.073 and all other methods for all privacy pricings. The results indicate that a small privacy parameter ρ_p provides significantly better privacy while sacrificing little utility classification performance, whereas with a large ρ_p it is possible to get better utility classification performance for the same privacy classification performance as other methods.

Tables <ref> and <ref> show similar results for the HAR data set when Activity classification and Identity classification are chosen as utility, respectively. Utility performance doesn't immediately drop, though privacy classification accuracies decrease as ρ_p is increased. Here RUCA outperforms all other methods for all privacy pricings, except for Random Projection, as seen in Table <ref>. Although Random Projection provides better privacy for the HAR data set when Activity classification is chosen as utility, it only outperforms RUCA when β ≥ 5.621, i.e. when much higher emphasis is placed on privacy.

§ CONCLUSION

We have presented a novel subspace projection method that allows data offered by users in a collaborative learning environment to be used for the intended purpose, with minimal loss of private information. We formulated a new criterion called Ratio Utility and Cost Analysis, which combines utility driven DCA with privacy emphasized MDR. Our method allows users to define multiple undesirable classifications on their data and achieve better utility for a given level of privacy. Using the publicly available Census (Adult) and Human Activity Recognition data sets, we have demonstrated that our approach can provide better classification performance for the intended task for an equally low privacy classification performance when compared with state-of-the-art methods. Future work will include the extension of RUCA to privacy preserving non-linear projections, as well as an optimization method for the privacy parameters.
{ "authors": [ "Mert Al", "Shibiao Wan", "Sun-Yuan Kung" ], "categories": [ "stat.ML", "cs.LG" ], "primary_category": "stat.ML", "published": "20170226021405", "title": "Ratio Utility and Cost Analysis for Privacy Preserving Subspace Projection" }
Another Look at the Implementation of Read/write Registers in Crash-prone Asynchronous Message-Passing Systems (Extended Version)

Damien Imbs^∘, Achour Mostéfaoui^†, Matthieu Perrin^▵, Michel Raynal^⋆

^∘LIF, Université Aix-Marseille, 13288 Marseille, France
^†LINA, Université de Nantes, 44322 Nantes, France
^▵Computer science department, Technion, Haifa, 3200003, Israel
^⋆Institut Universitaire de France, and IRISA, Université de Rennes, 35042 Rennes, France

"Yet another paper on" the implementation of read/write registers in crash-prone asynchronous message-passing systems! Yes..., but, differently from its predecessors, this paper looks for a communication abstraction which captures the essence of such an implementation in the same sense that total order broadcast can be associated with consensus, or message causal delivery can be associated with causal read/write registers. To this end, the paper introduces a new communication abstraction, named SCD-broadcast (SCD standing for "Set Constrained Delivery"), which, instead of a single message, delivers to processes sets of messages (whose size can be arbitrary), such that the sequences of message sets delivered to any two processes satisfy some constraints. The paper then shows that: (a) SCD-broadcast allows for a very simple implementation of a snapshot object (and consequently also of atomic read/write registers) in crash-prone asynchronous message-passing systems; (b) SCD-broadcast can be built from snapshot objects (hence SCD-broadcast and snapshot objects –or read/write registers– are "computationally equivalent"); (c) SCD-broadcast can be built in message-passing systems where any minority of processes may crash (which is the weakest assumption on the number of possible process crashes needed to implement a read/write register).

Keywords: Asynchronous system, Atomicity, Communication abstraction, Linearizability, Message-passing system, Process crash, Read/write atomic register, Snapshot object.

§ INTRODUCTION

The "one-shot" terracotta tablets introduced and used at Sumer about 3030 BC <cit.>, and the "multi-shot" palimpsests used in the middle-age, can be considered as ancestors of the read/write register abstraction.
Such an object provides its users with a write operation, which defines a new value of the register, and a read operation, which returns its value. When considering sequential computing, read/write registers are universal in the sense that they allow solving any problem that can be solved <cit.>.

On the variety of read/write registers and their distributed implementation
In a shared read/write memory system, the registers are given for free. The situation is different in a message-passing system, where the computing entities (processes) communicate by sending and receiving messages transmitted through a communication network. Hence, in such a distributed context, a register is not given for free, but constitutes a communication abstraction which must be built by a distributed algorithm with the help of the local memories of the processes and the communication network.

Several types of registers have been proposed. They differ according to (a) their size (from binary registers, which contain a single bit, to bounded and unbounded registers); (b) their behavior in the presence of concurrency (safe, regular, atomic <cit.>); (c) the number of processes which are allowed to read them (Single-Reader -SR- vs Multi-Reader -MR- register); and (d) the number of processes which are allowed to write them (Single-Writer -SW- vs Multi-Writer -MW- register), which gives four possible combinations from SWSR to MWMR. There are algorithms building MWMR atomic (bounded and unbounded) registers from SWSR binary safe registers <cit.> (see <cit.> for surveys of such algorithms).

As far as a read/write register is concerned, atomicity means that (a) each read or write operation appears as if it had been executed instantaneously at a single point of the time line, (b) this point appears between its start event and its end event, (c) no two operations appear at the same point of the time line, and (d) a read returns the value written by the closest preceding write operation (or the initial value of the register if there is no preceding write) <cit.>. Linearizability is atomicity extended to any object defined from a sequential specification on total operations <cit.>. In the following, we consider the terms atomicity and linearizability as synonyms. Hence, a sequence of read and write operations satisfying atomicity is said to be linearizable, and is called a linearization. The point of the time line at which an operation appears to have been executed is called its linearization point.

Many distributed algorithms have been proposed, which build a read/write register on top of a message-passing system, be it failure-free or failure-prone. In the failure-prone case, the addressed failure models are the process crash failure model and the Byzantine process failure model (see textbooks, e.g., <cit.>). When considering process crash failures (the model considered in this paper[For Byzantine failures, see for example <cit.>.]), the most famous of these algorithms was proposed by H. Attiya, A. Bar-Noy, and D. Dolev in <cit.>. This algorithm, usually called ABD according to the names of its authors, considers an n-process asynchronous system in which up to t < n/2 processes may crash. As t < n/2 is an upper bound on the number of process crashes which can be tolerated (see <cit.>), this algorithm is t-resilient optimal. Its instances implementing SWMR or MWMR atomic read/write registers rely on (a) quorums <cit.>, and (b) a classical broadcast/reply communication pattern.
This communication pattern is used twice in a read operation, and once (twice) in a write operation for an SWMR (MWMR) atomic read/write register. Other algorithms –each with its own properties– implementing atomic read/write registers on top of crash-prone asynchronous message-passing systems can be found in the literature (<cit.> to cite a few; see also the analytic presentation given in <cit.>).

From registers to snapshot objects
The snapshot object was introduced in <cit.>. A snapshot object is an array REG[1..m] of atomic read/write registers which provides the processes with two operations, denoted write() and snapshot(). If the base registers are SWMR, the snapshot is called an SWMR snapshot (and we then have m=n). In this case, the invocation of write(v) by a process p_i assigns v to REG[i], and the invocation of snapshot() by a process p_i returns the value of the full array as if the operation had been executed instantaneously. If the base registers are MWMR, the snapshot is called an MWMR snapshot. The invocation of write(r,v), where 1 ≤ r ≤ m, by a process p_i assigns v to REG[r], and snapshot() is defined as before. Said another way, the operations write() and snapshot() are atomic, i.e., in any execution of an SWMR (or MWMR) snapshot object, its operations write() and snapshot() are linearizable.

Implementations of both SWMR and MWMR snapshot objects on top of read/write atomic registers have been proposed (e.g., <cit.>). The “hardness” of building snapshot objects in read/write systems and the associated lower bounds are presented in the survey <cit.>. The best algorithm known to implement an SWMR snapshot requires O(n log n) read/write accesses on the base SWMR registers for both the write() and snapshot() operations <cit.>. As far as MWMR snapshot objects are concerned, there are implementations where each operation has an O(n) cost[Snapshot objects built in read/write models enriched with operations such as Compare&Swap, or LL/SC, have also been considered, e.g., <cit.>. Here we are interested in pure read/write models.].

As far as the construction of an SWMR (or MWMR) snapshot object in crash-prone asynchronous message-passing systems where t<n/2 is concerned, it is possible to stack two constructions: first an algorithm implementing SWMR (or MWMR) atomic read/write registers (such as ABD), and, on top of it, an algorithm implementing an SWMR (or MWMR) snapshot object. This stacking approach provides objects whose operation cost is O(n^2 log n) messages for an SWMR snapshot, and O(n^2) messages for an MWMR snapshot. An algorithm based on the same communication pattern as ABD, which builds an atomic SWMR snapshot object “directly” (i.e., without stacking algorithms) was recently presented in <cit.> (the aim of this algorithm is to perform better than the stacking approach in concurrency-free executions).

Another look at the implementation of read/write registers and snapshot objects
In sequential computing, there are “natural” pairings linking data structures and control structures. The most simple examples are the pair “array and for loop”, and the pair “tree and recursion”. When we look at the implementation of a causal read/write register <cit.> on top of a (crash-free or crash-prone) message-passing system, the causal message delivery broadcast abstraction <cit.> is the appropriate communication abstraction. Namely, given this abstraction for free, the algorithms implementing the read and write operations built on top of it become very simple, need only a few lines, and are easy to understand and to prove correct.
Of course, this is due to the fact that the causal broadcast abstraction captures and abstracts the causality relation needed to implement a causal read/write register. Similarly, total order broadcast is the communication abstraction associated with the consensus object <cit.>. This is summarized in Table <ref>.

As already said, all the algorithms we know of which implement atomic read/write registers, and (by stacking transitivity or directly) SWMR or MWMR snapshot objects, on top of crash-prone asynchronous message-passing systems, are based on a broadcast/reply pattern plus the use of intersecting quorums. Hence, the following question naturally arises: Is this approach the “only” way to implement a snapshot object (or an atomic register), or is there a specific communication abstraction which captures the essence and simplifies the implementation of snapshot objects (and atomic read/write registers)?

Content of the paper
Informatics in general (and distributed computing in particular) is a science of abstractions, and this paper is distributed-programming-abstraction-oriented. It strives to address a “desired level of abstraction and generality – one that is broad enough to encompass interesting new situations yet specific enough to address the crucial issues” as expressed in <cit.>. More precisely, it answers the previous question in a positive way. To this end, it presents a simple broadcast abstraction which matches –and therefore captures the essence of– snapshot objects (and atomic read/write registers). We call it Set-Constrained Delivery Broadcast (in short SCD-broadcast). Given this communication abstraction, it is possible to build snapshot objects in a quorum-free way, and vice versa. Hence, similarly to consensus and total order broadcast, SCD-broadcast and snapshot objects have the same computational power (Table <ref>).

The SCD-broadcast communication abstraction allows a process to broadcast messages, and to deliver sets of messages (instead of single messages) in such a way that, if a process p_i delivers a message set[In the rest of the paper, identifiers starting with “ms” denote message sets.] ms containing a message m, and later delivers a message set ms' containing a message m', then no process p_j can deliver first a set containing m' and later another set containing m. Let us notice that p_j is not prevented from delivering m and m' in the same set.

The implementation of an instance of SCD-broadcast costs O(n^2) messages. It follows that the cost of a snapshot operation (or a read/write register operation) on top of an asynchronous message-passing system, where any minority of processes may crash, is also O(n^2) for both SWMR and MWMR snapshot objects (i.e., better than the stacking approach for SWMR snapshot objects). Additionally, be the snapshot objects that are built SWMR or MWMR, their implementations differ only in the fact that their underlying read/write registers are SWMR or MWMR. This provides us with a noteworthy genericity-related design simplicity.

Of course, there is rarely something for free. The algorithms implementing the snapshot and write operations are simple because the SCD-broadcast abstraction hides enough “implementation details” and consequently provides a high-level abstraction (much higher than the simple broadcast used in ABD-like algorithms).
Its main interest lies in its capture of the high-level message communication abstraction that, despite asynchrony and process failures, allows simple message-passing implementations of shared memory objects such as snapshot objects and atomic read/write registers.

Roadmap
The paper is composed of <ref> sections. Section <ref> presents the two base computation models considered in this paper (read/write and message-passing). Section <ref> presents the SCD-broadcast communication abstraction. Then, Section <ref> presents a simple algorithm which implements a snapshot object on top of an asynchronous system enriched with SCD-broadcast, in which any number of processes may crash. Section <ref> addresses the other direction, namely, it presents an algorithm building the SCD-broadcast abstraction on top of an asynchronous system enriched with snapshot objects and where any number of processes may crash. Section <ref> concludes the paper. A noteworthy feature of the algorithms that are presented lies in their simplicity, which is a first-class property. Appendix <ref> describes an implementation of SCD-broadcast suited to asynchronous message-passing systems where any minority of processes may crash. Hence, being implementable in the weakest[From the point of view of the maximal number of process crashes that can be tolerated, assuming failures are independent.] message-passing system model in which a read/write register can be built, SCD-broadcast is not “yet another oracle” which makes things simpler to understand but cannot be implemented. Appendix <ref> presents simplified SCD-based algorithms which build atomic and sequentially consistent read/write registers.

§ BASIC COMPUTATION MODELS
This section presents two basic computation models. In both cases, the process model is the same.

§.§ Processes
The computing model is composed of a set of n asynchronous sequential processes, denoted p_1, ..., p_n. “Asynchronous” means that each process proceeds at its own speed, which can be arbitrary and always remains unknown to the other processes. A process may halt prematurely (crash failure), but it executes its local algorithm correctly until its possible crash. The model parameter t denotes the maximal number of processes that may crash in a run. A process that crashes in a run is said to be faulty; otherwise, it is non-faulty. Hence a faulty process behaves as a non-faulty process until it crashes.

§.§ Basic crash-prone asynchronous shared memory model
Atomic read/write register
The notion of an atomic read/write register has been formalized in <cit.>. An MWMR atomic register (say REG) is a concurrent object which provides each process with an operation denoted REG.write() and an operation denoted REG.read(). When a process invokes REG.write(v), it defines v as being the new value of REG. An MWMR atomic register is defined by the following set of properties.
* Liveness. An invocation of an operation by a non-faulty process terminates.
* Consistency (safety). All the operations invoked by the processes, except possibly –for each faulty process– the last operation it invoked, appear as if they have been executed sequentially, and this sequence of operations is such that:
* each read returns the value written by the closest write that precedes it (or the initial value of REG if there is no preceding write),
* if an operation op1 terminates before an operation op2 starts, then op1 appears before op2 in the sequence.
This set of properties states that, from an external observer's point of view, the read/write register appears as if it is accessed sequentially by the processes, and this sequence (a) respects the real-time access order, and (b) belongs to the sequential specification of a register.

Notation
The previous computation model is denoted CARW_n,t[∅] (Crash Asynchronous Read-Write). This basic read/write model is also called the wait-free read/write model. The symbol ∅ means there is no specific constraint on t, which is equivalent to t<n, as it is always assumed that not all processes crash.

Snapshot object
This object was defined in the introduction. As we have seen, snapshot objects can be built in CARW_n,t[∅], and there are two types of snapshot objects: SWMR snapshot objects (whose base registers are SWMR), and MWMR snapshot objects (whose base registers are MWMR). In the following we consider MWMR snapshot objects, but the algorithms can be trivially adapted to work with SWMR snapshot objects. CARW_n,t[∅] enriched with snapshot objects is denoted CARW_n,t[SNAPSHOT]. As a snapshot object can be built in CARW_n,t[∅], this model has the same computational power as CARW_n,t[∅]; it only offers a higher abstraction level.

§.§ Basic crash-prone asynchronous message-passing model
Communication
Each pair of processes communicate by sending and receiving messages through two uni-directional channels, one in each direction. Hence, the communication network is a complete network: any process p_i can directly send a message to any process p_j (including itself). A process p_i invokes the operation “send type(m) to p_j” to send to p_j the message m, whose type is type. The operation “receive type() from p_j” allows p_i to receive from p_j a message whose type is type. Each channel is reliable (no loss, corruption, nor creation of messages), not necessarily first-in/first-out, and asynchronous (while the transit time of each message is finite, there is no upper bound on message transit times). Let us notice that, due to process and message asynchrony, no process can know if another process crashed or is only very slow.

Notation and necessary and sufficient condition
This computation model is denoted CAMP_n,t[∅] (Crash Asynchronous Message-Passing). The constraint t<n/2 is a necessary and sufficient condition to implement an atomic read/write register in CAMP_n,t[∅] <cit.>. Hence, the model CAMP_n,t[∅] whose runs are constrained by t<n/2 is denoted CAMP_n,t[t<n/2].

§ A BROADCAST ABSTRACTION: SET-CONSTRAINED MESSAGE DELIVERY
Definition
The set-constrained delivery broadcast abstraction (SCD-broadcast) provides the processes with two operations, denoted scd_broadcast() and scd_deliver(). The first operation takes a message to broadcast as input parameter. The second one returns a non-empty set of messages to the process that invoked it. Using a classical terminology, when a process invokes scd_broadcast(m), we say that it “scd-broadcasts a message m”. Similarly, when it invokes scd_deliver() and obtains a set of messages ms, we say that it “scd-delivers a set of messages ms”. By a slight abuse of language, we also say that a process “scd-delivers a message m” when it delivers a message m ∈ ms. SCD-broadcast is defined by the following set of properties, where we assume –without loss of generality– that all the messages that are scd-broadcast are different.
* Validity. If a process scd-delivers a set containing a message m, then m was scd-broadcast by some process.
* Integrity. A message is scd-delivered at most once by each process.
* MS-Ordering.
If a process p_i scd-delivers first a message m belonging to a set ms_i and later a message m' belonging to a set ms_i' ≠ ms_i, then no process scd-delivers first the message m' in some scd-delivered set ms'_j and later the message m in some scd-delivered set ms_j ≠ ms'_j.
* Termination-1. If a non-faulty process scd-broadcasts a message m, it terminates its scd-broadcast invocation and scd-delivers a message set containing m.
* Termination-2. If a non-faulty process scd-delivers a message m, every non-faulty process scd-delivers a message set containing m.

Termination-1 and Termination-2 are classical liveness properties (found for example in Uniform Reliable Broadcast). The other ones are safety properties. Validity and Integrity are classical communication-related properties. The first states that there is neither message creation nor message corruption, while the second states that there is no message duplication. The MS-Ordering property is new, and characterizes SCD-broadcast. It states that the contents of the sets of messages scd-delivered at any two processes are not totally independent: the sequence of sets scd-delivered at a process p_i and the sequence of sets scd-delivered at a process p_j must be mutually consistent, in the sense that a process p_i cannot scd-deliver first m ∈ ms_i and later m' ∈ ms_i' ≠ ms_i, while another process p_j scd-delivers first m' ∈ ms_j' and later m ∈ ms_j ≠ ms_j'. Let us nevertheless observe that if p_i scd-delivers first m ∈ ms_i and later m' ∈ ms_i', p_j may scd-deliver m and m' in the same set of messages.

An example
Let m_1, m_2, m_3, m_4, m_5, m_6, m_7, m_8, ... be messages that have been scd-broadcast by different processes. The following scd-deliveries of message sets by p_1, p_2 and p_3 respect the definition of SCD-broadcast:
* at p_1: {m_1,m_2}, {m_3,m_4,m_5}, {m_6}, {m_7,m_8}.
* at p_2: {m_1}, {m_3,m_2}, {m_6,m_4,m_5}, {m_7}, {m_8}.
* at p_3: {m_3,m_1,m_2}, {m_6,m_4,m_5}, {m_7}, {m_8}.
Differently, due to the scd-deliveries of the sets including m_2 and m_3, the following scd-deliveries by p_1 and p_2 do not satisfy the MS-Ordering property:
* at p_1: {m_1,m_2}, {m_3,m_4,m_5}, ...
* at p_2: {m_1,m_3}, {m_2}, ...

A containment property
Let ms_i^ℓ be the ℓ-th message set scd-delivered by p_i. Hence, at some time, p_i scd-delivered the sequence of message sets ms_i^1, ⋯, ms_i^x. Let MS_i^x = ms_i^1 ∪ ⋯ ∪ ms_i^x. The following property follows directly from the MS-Ordering and Termination-2 properties:
* Containment. ∀ i,j,x,y: (MS_i^x ⊆ MS_j^y) ∨ (MS_j^y ⊆ MS_i^x).

Remark 1: Weakening SCD-broadcast
If the messages in a message set are delivered one at a time, and the MS-Ordering property is suppressed, SCD-broadcast boils down to Reliable Broadcast.

Remark 2: On the partial order created by the message sets
The MS-Ordering and Integrity properties establish a partial order on the set of all the messages, defined as follows. Let ↦_i be the local message delivery order at a process p_i, defined as follows: m ↦_i m' if p_i scd-delivers the set containing m before the set containing m'. As no message is scd-delivered twice, it is easy to see that ↦_i is a partial order (locally known by p_i). The reader can check that there is a total order (which remains unknown to the processes) on the whole set of messages that complies with the partial order ∪_{1≤i≤n} ↦_i. This is where SCD-broadcast can be seen as a weakening of total order broadcast. Let ↦_msg =def ∪_{1≤i≤n} ↦_i. Due to the MS-Ordering property, this relation is a partial order on the set of all the messages that have been scd-broadcast.
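The mutual-consistency condition imposed by MS-Ordering is easy to test on finite delivery sequences. The following Python sketch (ours, purely illustrative; it is not part of the paper's algorithms) accepts the first example above and rejects the second.

```python
# A minimal sketch: checking whether two scd-delivery sequences are
# mutually consistent with respect to the MS-Ordering property.
def positions(delivery_sequence):
    """Map each message to the index of the set in which it was scd-delivered."""
    return {m: k for k, ms in enumerate(delivery_sequence) for m in ms}

def ms_ordering_ok(seq_i, seq_j):
    pos_i, pos_j = positions(seq_i), positions(seq_j)
    common = pos_i.keys() & pos_j.keys()
    # Violation: m strictly before m' at p_i, but m' strictly before m at p_j.
    return not any(pos_i[m] < pos_i[n] and pos_j[n] < pos_j[m]
                   for m in common for n in common)

p1 = [{'m1', 'm2'}, {'m3', 'm4', 'm5'}, {'m6'}, {'m7', 'm8'}]
p2 = [{'m1'}, {'m3', 'm2'}, {'m6', 'm4', 'm5'}, {'m7'}, {'m8'}]
assert ms_ordering_ok(p1, p2)                                  # first example: OK
assert not ms_ordering_ok([{'m1', 'm2'}, {'m3'}],              # second example:
                          [{'m1', 'm3'}, {'m2'}])              # m2/m3 inverted
```

Note that delivering m and m' in the same set never creates a violation, which matches the observation made above.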
The relation ↦_msg is global in the sense that it involves all the local relations ↦_i, and consequently cannot be fully known by each process taken individually.

§ FROM SCD-BROADCAST TO AN MWMR SNAPSHOT OBJECT
Let CAMP_n,t[SCD-broadcast] denote CAMP_n,t[∅] enriched with the SCD-broadcast abstraction. Hence, this abstraction is given for free. This section presents and proves correct a simple algorithm building an MWMR snapshot object on top of CAMP_n,t[SCD-broadcast]. The same algorithm, with very few simple modifications, can be used to build SWMR or MWMR atomic registers in CAMP_n,t[SCD-broadcast] (see Appendix <ref>).

§.§ Building an MWMR snapshot object on top of CAMP_n,t[SCD-broadcast]
Let REG[1..m] denote the MWMR snapshot object that is built.

Local representation of REG at a process p_i
At each process p_i, REG[1..m] is represented by three local variables: reg_i[1..m] (data part), plus tsa_i[1..m] and done_i (control part).
* done_i is a Boolean variable.
* reg_i[1..m] contains the current value of REG[1..m], as known by p_i.
* tsa_i[1..m] is an array of timestamps associated with the values stored in reg_i[1..m]. A timestamp is a pair made of a local clock value and a process identity. Its initial value is ⟨0,-⟩. The fields associated with tsa_i[r] are denoted ⟨tsa_i[r].date, tsa_i[r].proc⟩.

Timestamp-based order relation
We consider the classical lexicographical total order relation on timestamps, denoted <_ts. Let ts1 = ⟨h1,i1⟩ and ts2 = ⟨h2,i2⟩. We have ts1 <_ts ts2 =def (h1<h2) ∨ ((h1=h2) ∧ (i1<i2)).

Algorithm <ref>: snapshot operation (Lines <ref>-<ref>)
When p_i invokes REG.snapshot(), it first sets done_i to false, and invokes scd_broadcast SYNC(i). SYNC(i) is a synchronization message, whose aim is to entail the refreshment of the value of reg_i[1..m] (lines <ref>-<ref>), which occurs before the setting of done_i to true (line <ref>). When this happens, p_i returns the value of its local variable reg_i[1..m] and terminates its snapshot invocation.

Algorithm <ref>: write operation (Lines <ref>-<ref>)
When a process p_i wants to assign a value v to REG[r], it invokes REG.write(r,v). This operation is made up of two parts. First, p_i executes a re-synchronization (lines <ref>-<ref>, exactly as in the snapshot operation), whose side effect is here to provide p_i with an up-to-date value of tsa_i[r].date. In the second part, p_i associates the timestamp ⟨tsa_i[r].date+1, i⟩ with v, and invokes scd_broadcast WRITE(r, v, ⟨tsa_i[r].date+1, i⟩) (line <ref>). In addition to informing the other processes of its write of REG[r], this message WRITE() acts as a re-synchronization message, exactly as a message SYNC(i). When this synchronization terminates (i.e., when the Boolean done_i is set to true), p_i returns from the write operation (line <ref>).

Algorithm <ref>: scd-delivery of a set of messages
When p_i scd-delivers a message set, namely,
{ WRITE(r_{j_1}, v_{j_1}, ⟨date_{j_1}, j_1⟩), ⋯, WRITE(r_{j_x}, v_{j_x}, ⟨date_{j_x}, j_x⟩), SYNC(j_{x+1}), ⋯, SYNC(j_y) },
it first looks if there are messages WRITE(). If it is the case, for each register REG[r] for which there are messages WRITE(r,-,-) (line <ref>), p_i computes the maximal timestamp carried by these messages (line <ref>), and updates its local representation of REG[r] accordingly (lines <ref>-<ref>). Finally, if p_i is the sender of one of these messages (WRITE() or SYNC()), done_i is set to true, which terminates p_i's re-synchronization (line <ref>).

Message cost
An invocation of snapshot() involves one invocation of scd_broadcast(), and an invocation of write() involves two such invocations. It is shown in Appendix <ref> that, in a message-passing system, scd_broadcast() costs O(n^2) protocol messages.
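To make the preceding description concrete, here is a minimal executable sketch (ours; class and method names are illustrative) of the local state machine just described, written as if an SCD-broadcast layer were available. To keep it runnable, it is wired to a trivial stand-in that scd-delivers each scd-broadcast message immediately in a singleton set, so it exercises only the single-process logic, not the distributed behavior.

```python
# Sketch, under stated assumptions: the local logic of the snapshot algorithm
# on top of an assumed scd_broadcast layer. 'LocalScd' is a stand-in layer.
class LocalScd:
    def __init__(self, on_deliver):
        self.on_deliver = on_deliver
    def scd_broadcast(self, msg):
        self.on_deliver({msg})          # a real layer would involve all processes

class SnapshotObject:
    def __init__(self, i, m):
        self.i, self.done = i, False
        self.reg = [None] * m                       # reg_i[1..m]
        self.tsa = [(0, -1)] * m                    # tsa_i[1..m]: pairs <date, proc>
        self.scd = LocalScd(self._scd_deliver)

    def _resync(self):
        self.done = False
        self.scd.scd_broadcast(('SYNC', self.i))
        assert self.done                             # "wait done_i" in the real model

    def snapshot(self):
        self._resync()
        return list(self.reg)

    def write(self, r, v):
        self._resync()                               # refreshes tsa_i[r].date
        self.done = False
        self.scd.scd_broadcast(('WRITE', r, v, (self.tsa[r][0] + 1, self.i)))
        assert self.done

    def _scd_deliver(self, ms):
        writes = [msg for msg in ms if msg[0] == 'WRITE']
        for r in {w[1] for w in writes}:
            # Maximal timestamp among the WRITE(r,-,-) messages of the set.
            _, _, v, ts = max((w for w in writes if w[1] == r), key=lambda w: w[3])
            if self.tsa[r] < ts:                     # tuple order = lexicographic <_ts
                self.reg[r], self.tsa[r] = v, ts
        if any((msg[0] == 'SYNC' and msg[1] == self.i) or
               (msg[0] == 'WRITE' and msg[3][1] == self.i) for msg in ms):
            self.done = True                         # p_i is a sender: re-sync done

s = SnapshotObject(i=0, m=3)
s.write(1, 'x')
print(s.snapshot())   # ['None', 'x', 'None'] pattern: [None, 'x', None]
```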
Since snapshot() and write() use at most two scd_broadcast() invocations, it follows that, in such systems, the message cost of both operations of a snapshot object is O(n^2). (This remains true for SWMR snapshot objects, see Appendix <ref>.)

§.§ Proof of Algorithm <ref>
As they are implicitly used in the proofs that follow, let us recall the properties of the SCD-broadcast abstraction. The non-faulty processes scd-deliver the same messages (each exactly once), and each of them was scd-broadcast. As a faulty process behaves correctly until it crashes, it scd-delivers a subset of the messages scd-delivered by the non-faulty processes. Without loss of generality, we assume that there is an initial write operation issued by a non-faulty process. Moreover, if a process crashes in a snapshot operation, its snapshot is not considered; if a process crashes in a write operation, its write is considered only if the message WRITE() it sent at line <ref> is scd-delivered to at least one non-faulty process (and, by the Termination-2 property, then to all non-faulty processes). Let us notice that a message SYNC() scd-broadcast by a process p_i does not modify the local variables of the other processes.

§ PROOF OF LEMMAS FOR THEOREM <REF>
If a non-faulty process invokes an operation, it returns from its invocation.
Let p_i be a non-faulty process that invokes a read or write operation. By the Termination-1 property of SCD-broadcast, it eventually receives a message set containing the message SYNC() or WRITE() it sends at line <ref>, <ref> or <ref>. As all the statements associated with the scd-delivery of a message set (lines <ref>-<ref>) terminate, it follows that the synchronization Boolean done_i is eventually set to true. Consequently, p_i returns from the invocation of its operation.

Extension of the relation <_ts
The relation <_ts is extended to a partial order on arrays of timestamps, denoted ≤_tsa, defined as follows: tsa1[1..m] ≤_tsa tsa2[1..m] =def ∀ r: (tsa1[r] = tsa2[r]) ∨ (tsa1[r] <_ts tsa2[r]). Moreover, tsa1[1..m] <_tsa tsa2[1..m] =def (tsa1[1..m] ≤_tsa tsa2[1..m]) ∧ (tsa1[1..m] ≠ tsa2[1..m]).

Definition
Let TSA_i be the set of the array values taken by tsa_i[1..m] at line <ref> (end of the processing of a message set by process p_i). Let TSA = ∪_{1≤i≤n} TSA_i.

The order ≤_tsa is total on TSA.
Let us first observe that, for any i, all values in TSA_i are totally ordered (this comes from tsa_i[1..m], whose entries can only increase, lines <ref> and <ref>). Hence, let tsa1[1..m] be an array value of TSA_i, and tsa2[1..m] an array value of TSA_j, where i≠j. Let us assume, by contradiction, that ¬(tsa1 ≤_tsa tsa2) and ¬(tsa2 ≤_tsa tsa1). As ¬(tsa1 ≤_tsa tsa2), there is a register r such that tsa2[r] <_ts tsa1[r]. According to lines <ref> and <ref>, there is a message WRITE(r,-,tsa1[r]) received by p_i by the time tsa_i = tsa1 and not received by p_j by the time tsa_j = tsa2 (because tsa2[r] <_ts tsa1[r]). Similarly, there is a message WRITE(r',-,tsa2[r']) received by p_j by the time tsa_j = tsa2 and not received by p_i by the time tsa_i = tsa1. This situation contradicts the MS-Ordering property, from which we conclude that either tsa1 ≤_tsa tsa2 or tsa2 ≤_tsa tsa1.

Definitions
Let us associate a timestamp ts(write(r,v)) with each write operation as follows. Let p_i be the invoking process; ts(write(r,v)) is the timestamp of v as defined by p_i at line <ref>, i.e., ⟨tsa_i[r].date+1, i⟩.

Let op1 and op2 be any two operations. The relation ≺ on the whole set of operations is defined as follows: op1 ≺ op2 if op1 terminated before op2 started. It is easy to see that ≺ is a real-time-compliant partial order on all the operations.
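Before proceeding to the lemmas, a hand-made example of these orders (ours, with m = 2): taking tsa1 = [⟨2,1⟩, ⟨1,3⟩] and tsa2 = [⟨2,1⟩, ⟨2,2⟩], we have tsa1 <_tsa tsa2, since the first entries are equal and ⟨1,3⟩ <_ts ⟨2,2⟩. In contrast, tsa3 = [⟨3,1⟩, ⟨1,1⟩] and tsa2 are ≤_tsa-incomparable (⟨2,1⟩ <_ts ⟨3,1⟩ but ⟨1,1⟩ <_ts ⟨2,2⟩); the lemma above states that such incomparable pairs never both appear in TSA.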
No two distinct write operations on the same register, write_1(r,v) and write_2(r,w), have the same timestamp, and (write_1(r,v) ≺ write_2(r,w)) ⇒ (ts(write_1) <_ts ts(write_2)).
Let ⟨date1,i⟩ and ⟨date2,j⟩ be the timestamps of write_1(r,v) and write_2(r,w), respectively. If i≠j, write_1(r,v) and write_2(r,w) have been produced by different processes, and their timestamps differ at least in their process identity. So, let us consider that the operations have been issued by the same process p_i, with write_1(r,v) first. As write_1(r,v) precedes write_2(r,w), p_i first invoked scd_broadcast WRITE(r,v,⟨date1,i⟩) (line <ref>) and later scd_broadcast WRITE(r,w,⟨date2,i⟩). It follows that these scd-broadcast invocations are separated by a local reset of the Boolean done_i at line <ref>. Moreover, before the reset of done_i due to the scd-delivery of the message set {⋯, WRITE(r,v,⟨date1,i⟩), ⋯}, we have tsa_i[r].date ≥ date1 (lines <ref>-<ref>). Hence, we have tsa_i[r].date ≥ date1 before the reset of done_i (line <ref>). Then, due to the “+1” at line <ref>, WRITE(r,w,⟨date2,i⟩) is such that date2 > date1, which concludes the proof of the first part of the lemma.

Let us now consider that write_1(r,v) ≺ write_2(r,w). If write_1(r,v) and write_2(r,w) have been produced by the same process, we have date1 < date2 from the previous reasoning. So let us assume that they have been produced by different processes p_i and p_j. Before terminating write_1(r,v) (when the Boolean done_i is set at line <ref>), p_i received a message set ms1_i containing the message WRITE(r,v,⟨date1,i⟩). When p_j executes write_2(r,w), it first invokes scd_broadcast SYNC(j) at line <ref>. Because write_1(r,v) terminated before write_2(r,w) started, this message SYNC(j) cannot belong to ms1_i. Due to the Integrity and Termination-2 properties of SCD-broadcast, p_j eventually scd-delivers exactly one message set ms1_j containing WRITE(r,v,⟨date1,i⟩). Moreover, it also scd-delivers exactly one message set ms2_j containing its own message SYNC(j). On the other side, p_i scd-delivers exactly one message set ms2_i containing the message SYNC(j). It follows from the MS-Ordering property that, if ms2_j ≠ ms1_j, p_j cannot scd-deliver ms2_j before ms1_j. Then, whatever the case (ms1_j = ms2_j, or ms1_j is scd-delivered at p_j before ms2_j), it follows from the fact that the messages WRITE() are processed (lines <ref>-<ref>) before the messages SYNC(j) (line <ref>) that we have tsa_j[r] ≥_ts ⟨date1,i⟩ when done_j is set to true. It then follows from line <ref> that date2 > date1, which concludes the proof of the lemma.

Associating timestamp arrays with operations
Let us associate a timestamp array tsa(op)[1..m] with each operation op as follows.
* Case op = snapshot(). Let p_i be the invoking process; tsa(op) is the value of tsa_i[1..m] when p_i returns from the snapshot operation (line <ref>).
* Case op = write(r,v). Let min_tsa(A), where A is a set of array values, denote the smallest array value of A according to ≤_tsa. Let tsa(op) =def min_tsa({tsa[1..m] ∈ TSA : ts(op) ≤_ts tsa[r]}). Hence, tsa(op) is the first tsa[1..m] of TSA that reports the operation op = write(r,v).

Let op and op' be two distinct operations such that op ≺ op'. We have tsa(op) ≤_tsa tsa(op'). Moreover, if op' is a write operation, we have tsa(op) <_tsa tsa(op').
Let p_i and p_j be the processes that performed op and op', respectively. Let sync_j be the SYNC(j) message sent by p_j (at line <ref> or <ref>) during the execution of op'. Let term_op_i be the value of tsa_i[1..m] when op terminates (line <ref> or <ref>), and sync_op'_j the value of tsa_j[1..m] when done_j becomes true for the first time after p_j sent sync_j (line <ref> or <ref>).
Let us notice that term_op_i and sync_op'_j are elements of the set TSA. According to lines <ref> and <ref>, for all r, tsa_i[r] is the largest timestamp carried by a message WRITE(r,v,-) received by p_i in a message set before op terminates. Let m be a message such that there is a set sm scd-delivered by p_i before it terminated op. As p_j sent sync_j after p_i terminated op, p_i did not receive any set containing sync_j before it terminated op. By the Termination-2 and MS-Ordering properties, p_j received message m in the same set as sync_j or in a message set sm' received before the set containing sync_j. Therefore, we have term_op_i ≤_tsa sync_op'_j.

If op is a snapshot operation, then tsa(op) = term_op_i. Otherwise, op = write(r,v). As p_i has to wait until it processes a set of messages including its WRITE() message (and executes line <ref>), we have ts(op) ≤_ts term_op_i[r]. Finally, due to the fact that term_op_i ∈ TSA and Lemma <ref>, we have tsa(op) ≤_tsa term_op_i.

If op' is a snapshot operation, then sync_op'_j = tsa(op') (line <ref>). Otherwise, op' = write(r,v) and, thanks to the +1 in line <ref>, sync_op'_j[r] is strictly smaller than tsa(op')[r], which, due to Lemma <ref>, implies sync_op'_j <_tsa tsa(op').

It follows that, in all cases, we have tsa(op) ≤_tsa term_op_i ≤_tsa sync_op'_j ≤_tsa tsa(op'), and if op' is a write operation, we have tsa(op) ≤_tsa term_op_i ≤_tsa sync_op'_j <_tsa tsa(op'), which concludes the proof of the lemma.

The previous lemmas allow the operations to be linearized (i.e., totally ordered in an order compliant with both the sequential specification of a register and their real-time occurrence order) according to a total order extension of the reflexive and transitive closure of the →_lin relation defined hereafter.

Let op, op' be two operations. We define the →_lin relation by op →_lin op' if one of the following properties holds:
* op ≺ op',
* tsa(op) <_tsa tsa(op'),
* tsa(op) = tsa(op'), op is a write operation and op' is a snapshot operation,
* tsa(op) = tsa(op'), op and op' are two write operations on the same register and ts(op) <_ts ts(op').

The snapshot object built by Algorithm <ref> is linearizable.
We define the →_lin^⋆ relation as the reflexive and transitive closure of the →_lin relation defined above. Let us prove that →_lin^⋆ is a partial order on all operations. Transitivity and reflexivity hold by construction. Let us prove antisymmetry. Suppose there are op_0, op_1, ..., op_m such that op_0 = op_m and op_i →_lin op_{i+1} for all i<m. By Lemma <ref>, for all i<m we have tsa(op_i) ≤_tsa tsa(op_{i+1}), and tsa(op_m) = tsa(op_0), so the timestamp arrays of all these operations are the same. Moreover, if op_i is a snapshot operation, then op_i ≺ op_{(i+1) mod m} is the only possible case, and by Lemma <ref> again, op_{(i+1) mod m} is a snapshot operation. Therefore, only two cases are possible.
* All the op_i are snapshot operations and, for all i, op_i ≺ op_{(i+1) mod m}. As ≺ is a partial order relation, it is antisymmetric, so all the op_i are the same operation.
* Otherwise, all the op_i are write operations. By Lemma <ref>, op_i ⊀ op_{(i+1) mod m} for all i. The operations op_i and op_{(i+1) mod m} are therefore ordered by the fourth item, so they are write operations on the same register and ts(op_i) <_ts ts(op_{(i+1) mod m}). By antisymmetry of the <_ts relation, all the op_i have the same timestamp, so by Lemma <ref> they are the same operation, which proves antisymmetry.
Let ≤_lin be a total order extension of →_lin^⋆.
Relation ≤_lin is real-time compliant because →_lin^⋆ contains ≺. Let us consider a snapshot operation op and a register r such that tsa(op)[r] = ⟨date1, i⟩. According to line <ref>, it is associated with the value v that is returned by op for r, and comes from a WRITE(r,v,⟨date1,i⟩) message sent by a write operation op_r = write(r,v). By definition of tsa(op_r), we have tsa(op_r) ≤_tsa tsa(op) (Lemma <ref>), and therefore op_r ≤_lin op. Moreover, for any different write operation op'_r on r, by Lemma <ref>, ts(op'_r) ≠ ts(op_r). If ts(op'_r) <_ts ts(op_r), then op'_r ≤_lin op_r. Otherwise, tsa(op) <_tsa tsa(op'_r), and (due to the definition of →_lin) we have op ≤_lin op'_r. In both cases, the value written by op_r is the last value written on r before op, according to ≤_lin.

Algorithm <ref> builds an MWMR snapshot object in the system model CAMP_n,t[SCD-broadcast].
The proof follows from Lemmas <ref>-<ref>.

§ FROM SWMR SNAPSHOT TO SCD-BROADCAST
This section presents an algorithm which builds the SCD-broadcast abstraction in CARW_n,t[SNAPSHOT]. This algorithm completes the computational equivalence of snapshot and SCD-broadcast. (SWMR snapshot objects can be easily implemented in CAMP_n,t[SCD-broadcast] by instantiating Algorithm <ref> with m=n, and only allowing p_i to invoke REG.write(i,-).)

§.§ Algorithm <ref>
Shared objects
The shared memory is composed of two SWMR snapshot objects (as defined above). Let ϵ denote the empty sequence.
* SENT[1..n]: a snapshot object, initialized to [∅, ⋯, ∅], such that SENT[i] contains the messages scd-broadcast by p_i.
* SDEL[1..n]: a snapshot object, initialized to [ϵ, ⋯, ϵ], such that SDEL[i] contains the sequence of the sets of messages scd-delivered by p_i.
The notation ⊕ is used for the concatenation of a message set at the end of a sequence of message sets.

Local objects
Each process p_i manages the following local objects.
* sent_i: a local copy of the snapshot object SENT.
* sdel_i: a local copy of the snapshot object SDEL.
* to_deliver_i: an auxiliary variable whose aim is to contain the next message set that p_i has to scd-deliver.
The function members(set_seq) returns the set of all the messages contained in set_seq.

Description of Algorithm <ref>
When a process p_i invokes scd_broadcast(m), it adds m to sent_i[i] and SENT[i] to inform all the processes of the scd-broadcast of m. It then invokes the internal procedure progress(), from which it exits once it has scd-delivered a set containing m (line <ref>). A background task T ensures that all messages will be scd-delivered (line <ref>). This task repeatedly invokes the internal procedure progress(). As, locally, both the application process and the underlying task T can invoke progress(), which accesses the local variables of p_i, those variables are protected by a local fair mutual exclusion algorithm providing acquire and release operations (lines <ref> and <ref>).

The procedure progress() first invokes the internal procedure catch_up(), whose aim is to allow p_i to scd-deliver sets of messages which have been scd-broadcast and not yet locally scd-delivered. To this end, catch_up() works as follows (lines <ref>-<ref>). Process p_i first obtains a snapshot of SDEL, and saves it in sdel_i (line <ref>). This allows p_i to know which message sets have been scd-delivered by all the processes; p_i then enters a “while” loop to scd-deliver as many message sets as possible according to what was scd-delivered by the other processes. For each process p_j that has scd-delivered a message set set containing messages not yet scd-delivered by p_i (predicate of line <ref>), p_i builds a set to_deliver_i containing the messages in set that it has not yet scd-delivered (line <ref>), and locally scd-delivers it (line <ref>). An executable sketch of these two internal procedures is given below.
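The following Python sketch (ours; it condenses the whole of progress()/catch_up(), including the second part of progress() described just after the code) simulates the two snapshot objects with plain shared lists read via copies, and omits the mutual exclusion and the background task. All names mirror the reconstruction above and should be read as illustrative.

```python
# Sketch (single address space, no concurrency control): SCD-broadcast
# from two snapshot objects SENT[1..n] and SDEL[1..n].
n = 3
SENT = [set() for _ in range(n)]     # SENT[j]: messages scd-broadcast by p_j
SDEL = [[] for _ in range(n)]        # SDEL[j]: sequence of sets scd-delivered by p_j

class Process:
    def __init__(self, i):
        self.i = i
        self.delivered = set()       # union of the sets p_i already scd-delivered

    def _deliver(self, ms):          # local scd-delivery of the set ms
        print(f"p_{self.i} scd-delivers {sorted(ms)}")
        self.delivered |= ms
        SDEL[self.i].append(ms)      # global update of SDEL[i]

    def catch_up(self):
        sdel = [list(seq) for seq in SDEL]          # "snapshot" of SDEL
        for j in range(n):
            for ms in sdel[j]:
                to_deliver = ms - self.delivered    # messages of ms not yet delivered
                if to_deliver:
                    self._deliver(to_deliver)

    def progress(self):
        self.catch_up()
        sent = [set(s) for s in SENT]               # "snapshot" of SENT
        to_deliver = set().union(*sent) - self.delivered
        if to_deliver:
            self._deliver(to_deliver)

    def scd_broadcast(self, m):
        SENT[self.i].add(m)
        self.progress()              # returns once a set containing m is delivered

p0, p1 = Process(0), Process(1)
p0.scd_broadcast('m1')               # p_0 scd-delivers {'m1'}
p1.scd_broadcast('m2')               # p_1 catches up on {'m1'}, then delivers {'m2'}
p0.progress()                        # p_0 catches up: scd-delivers {'m2'}
```

Observe that the Containment property holds by construction: a process never delivers a message twice, and catch_up() replays the other processes' delivery sequences before any new delivery.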
Each such local scd-delivery updates accordingly both sdel_i[i] (local update) and SDEL[i] (global update). When it returns from catch_up(), p_i strives to scd-deliver messages not yet scd-delivered by the other processes. To this end, it first obtains a snapshot of SENT, which it stores in sent_i (line <ref>). If there are messages that can be scd-delivered (computation of to_deliver_i at line <ref>, and predicate at line <ref>), p_i scd-delivers them and updates sdel_i[i] and SDEL[i] (lines <ref>-<ref>) accordingly.

§.§ Proof of Algorithm <ref>
If a process scd-delivers a set containing a message m, some process invoked scd_broadcast(m).
The proof follows directly from the text of the algorithm, which copies messages from SENT to SDEL without creating new messages.

No process scd-delivers the same message twice.
Let us first observe that, due to lines <ref> and <ref>, all messages that are scd-delivered at a process p_i have been added to sdel_i[i]. The proof then follows directly from (a) this observation, (b) the fact that (due to the local mutual exclusion at each process) sdel_i[i] is updated consistently, and (c) lines <ref> and <ref>, which state that a message already scd-delivered (i.e., a message belonging to sdel_i[i]) cannot be added to to_deliver_i.

Any invocation of scd_broadcast() by a non-faulty process p_i terminates.
The proof consists in showing that the internal procedure progress() terminates. As the mutex algorithm is assumed to be fair, process p_i cannot block forever at line <ref>. Hence, p_i invokes the internal procedure catch_up(). It then first issues a snapshot invocation on SDEL and stores the value it obtains in sdel_i. There is consequently a finite number of message sets in sdel_i. Hence, the “while” loop of lines <ref>-<ref> can be executed only a finite number of times, and it follows that any invocation of catch_up() by a non-faulty process terminates. The same reasoning (replacing SDEL by SENT) shows that process p_i cannot block forever when it executes lines <ref>-<ref> of the procedure progress().

If a non-faulty process scd-broadcasts a message m, it scd-delivers a message set containing m.
Let p_i be a non-faulty process that scd-broadcasts a message m. As it is non-faulty, p_i adds m to SENT[i] and then invokes progress() (line <ref>). As m ∈ SENT, it is eventually added to to_deliver_i if not yet scd-delivered (line <ref>), and scd-delivered at line <ref>, which concludes the proof of the lemma.

If a non-faulty process scd-delivers a message m, every non-faulty process scd-delivers a message set containing m.
Let us assume that a process scd-delivers a message set containing a message m. It follows that the process that invoked scd_broadcast(m) added m to SENT (otherwise no process could scd-deliver m). Let p_i be a correct process. It invokes progress() infinitely often (line <ref>). Hence, there is a first execution of progress() such that sent_i contains m (line <ref>). It then follows from line <ref> that m will be added to to_deliver_i (if not yet scd-delivered). It follows that p_i will scd-deliver a set of messages containing m at line <ref>.

Let p_i be a process that scd-delivers a set ms_i containing a message m and later scd-delivers a set ms'_i containing a message m'. No process p_j scd-delivers first a set ms'_j containing m' and later a set ms_j containing m.
Let us consider two messages m and m'. Due to the total order property on the operations on the snapshot object SENT, it is possible to order the write operations of m and m' into SENT. Without loss of generality, let us assume that m is added to SENT before m'.
We show that no process scd-delivers m' before m.[Let us notice that it is possible that a process scd-delivers them in two different message sets, while another process scd-delivers them in the same set (which does not contradict the lemma).] Let us consider a process p_i that scd-delivers the message m'. There are two cases.
* p_i scd-delivers the message m' at line <ref>. Hence, p_i obtained m' from the snapshot object SENT (lines <ref>-<ref>). As m was written in SENT before m', we conclude that sent_i contains m. It then follows from line <ref> that, if p_i has not scd-delivered m before (i.e., m is not in sdel_i[i]), then p_i scd-delivers it in the same set as m'.
* p_i scd-delivers the message m' at line <ref>. Due to the predicate used at line <ref> to build a set of messages to scd-deliver, this means that there is a process p_j that has previously scd-delivered a set of messages containing m'. Moreover, let us observe that the first time the message m' is copied from SENT to some SDEL[x] occurs at line <ref>. As m was written in SENT before m', the corresponding process p_x cannot see m' and not m. It follows from the previous item that p_x has scd-delivered m in the same message set (as the one including m'), or in a previous message set. It then follows from the predicate of line <ref> that p_i cannot scd-deliver m' before m.
To summarize, the scd-deliveries of message sets in the procedure catch_up() cannot violate the MS-Ordering property, which is established at lines <ref>-<ref>.

Algorithm <ref> implements the SCD-broadcast abstraction in the system model CARW_n,t[SNAPSHOT], where t<n.
The proof follows from Lemma <ref> (Validity), Lemma <ref> (Integrity), Lemmas <ref> and <ref> (Termination-1), Lemma <ref> (Termination-2), and Lemma <ref> (MS-Ordering).

§ CONCLUSION
This paper has introduced a new communication abstraction (SCD-broadcast) providing processes with an abstraction level between reliable broadcast and total order broadcast (the latter capturing the necessary and sufficient constraint on message deliveries which allows consensus objects to be implemented in asynchronous crash-prone message-passing systems). More precisely, SCD-broadcast captures the abstraction level which is “necessary and sufficient” to implement read/write registers and snapshot objects on top of asynchronous message-passing systems prone to process failures. “Sufficient” means here that no other notion or object[The notion of intersecting quorums is neither provided by the abstraction level offered by SCD-broadcast, nor required –in addition to SCD-broadcast– to implement registers or snapshot objects. Actually, it is hidden, and majority quorums appear only in the implementation of SCD-broadcast.] is needed to build a register or a snapshot object at the abstraction level provided by SCD-broadcast, while “necessary” means that the objects that are built (registers and snapshot objects) are the weakest from a shared memory computational point of view.

As announced in the Introduction, an algorithm implementing SCD-broadcast in an asynchronous message-passing system where any minority of processes may crash is described in Appendix <ref>. This algorithm requires O(n^2) protocol messages per invocation of scd_broadcast(). It follows that the SCD-broadcast-based MWMR snapshot algorithm presented in the paper requires O(n^2) protocol messages per invocation of a snapshot() or write() operation. This is the best read/write snapshot algorithm we know of in the context of asynchronous message-passing systems.
§ ACKNOWLEDGMENTS
This work has been partially supported by the Franco-German DFG-ANR Project 40300781 DISCMAT (devoted to connections between mathematics and distributed computing), and the French ANR project DESCARTES (devoted to layered and modular structures in distributed computing). The authors want to thank Faith Ellen for fruitful exchanges on shared memory snapshots.

[AADGMS93] Afek Y., Attiya H., Dolev D., Gafni E., Merritt M., and Shavit N., Atomic snapshots of shared memory. Journal of the ACM, 40(4):873-890 (1993)
[ANBHK95] Ahamad M., Neiger G., Burns J.E., Hutto P.W., and Kohli P., Causal memory: definitions, implementation and programming. Distributed Computing, 9:37-49 (1995)
[A94] Anderson J., Multi-writer composite registers. Distributed Computing, 7(4):175-195 (1994)
[A00] Attiya H., Efficient and robust sharing of memory in message-passing systems. Journal of Algorithms, 34:109-127 (2000)
[ABD95] Attiya H., Bar-Noy A., and Dolev D., Sharing memory robustly in message passing systems. Journal of the ACM, 42(1):121-132 (1995)
[AR98] Attiya H. and Rachman O., Atomic snapshots in O(n log n) operations. SIAM Journal on Computing, 27(2):319-340 (1998)
[AW94] Attiya H. and Welch J.L., Sequential consistency versus linearizability. ACM Transactions on Computer Systems, 12(2):91-122 (1994)
[AW04] Attiya H. and Welch J.L., Distributed Computing: Fundamentals, Simulations and Advanced Topics (2nd Edition). Wiley-Interscience, 414 pages (2004)
[BJ87] Birman K. and Joseph T., Reliable communication in the presence of failures. ACM Transactions on Computer Systems, 5(1):47-76 (1987)
[CT96] Chandra T. and Toueg S., Unreliable failure detectors for reliable distributed systems. Journal of the ACM, 43(2):225-267 (1996)
[DFRR16] Delporte-Gallet C., Fauconnier H., Rajsbaum S., and Raynal M., Implementing snapshot objects on top of crash-prone asynchronous message-passing systems. Proc. 16th Int'l Conference on Algorithms and Architectures for Parallel Processing (ICA3PP'16), Springer LNCS 10048, pp. 341-355 (2016)
[DGLV10] Dutta P., Guerraoui R., Levy R., and Vukolic M., Fast access to distributed atomic memory. SIAM Journal on Computing, 39(8):3752-3783 (2010)
[E05] Ellen F., How hard is it to take a snapshot? Proc. 31st Conference on Current Trends in Theory and Practice of Computer Science (SOFSEM'05), Springer LNCS 3381, pp. 27-35 (2005)
[EFR07] Ellen F., Fatourou P., and Ruppert E., Time lower bounds for implementations of multi-writer snapshots. Journal of the ACM, 54(6), 30 pages (2007)
[FLP85] Fischer M.J., Lynch N.A., and Paterson M.S., Impossibility of distributed consensus with one faulty process. Journal of the ACM, 32(2):374-382 (1985)
[FM03] Fischer M.J. and Merritt M., Appraising two decades of distributed computing theory research. Distributed Computing, 16(2-3):239-247 (2003)
[HNS16] Hadjistasi Th., Nicolaou N., and Schwarzmann A.A., Oh-RAM! One and a half round read/write atomic memory (brief announcement). Proc. 35th ACM Symposium on Principles of Distributed Computing (PODC'16), ACM Press, pp. 353-355 (2016)
[HW90] Herlihy M.P. and Wing J.M., Linearizability: a correctness condition for concurrent objects. ACM Transactions on Programming Languages and Systems, 12(3):463-492 (1990)
[IR12] Imbs D. and Raynal M., Help when needed, but no more: efficient read/write partial snapshot. Journal of Parallel and Distributed Computing, 72(1):1-12 (2012)
[ICMT94] Inoue I., Chen W., Masuzawa T., and Tokura N., Linear time snapshots using multi-writer multi-reader registers. Proc. 8th Int'l Workshop on Distributed Algorithms (WDAG'94), Springer LNCS 857, pp. 130-140 (1994)
[J05] Jayanti P., An optimal multi-writer snapshot algorithm. Proc. 37th ACM Symposium on Theory of Computing (STOC'05), ACM Press, pp. 723-732 (2005)
[JFC08] Jiménez E., Fernández A., and Cholvi V., A parameterized algorithm that implements sequential, causal, and cache memory consistencies. Journal of Systems and Software, 81(1):120-131 (2008)
[K56] Kramer S.N., History Begins at Sumer: Thirty-Nine Firsts in Man's Recorded History. University of Pennsylvania Press, 416 pages, ISBN 978-0-8122-1276-1 (1956)
[L79] Lamport L., How to make a multiprocessor computer that correctly executes multiprocess programs. IEEE Transactions on Computers, C-28(9):690-691 (1979)
[L86] Lamport L., On interprocess communication, Part I: basic formalism. Distributed Computing, 1(2):77-85 (1986)
[L96] Lynch N.A., Distributed Algorithms. Morgan Kaufmann, San Francisco (CA), 872 pages, ISBN 1-55860-384-4 (1996)
[M86] Misra J., Axioms for memory access in asynchronous hardware systems. ACM Transactions on Programming Languages and Systems, 8(1):142-153 (1986)
[MPRJ17] Mostéfaoui A., Pétrolia M., Raynal M., and Jard Cl., Atomic read/write memory in signature-free Byzantine asynchronous message-passing systems. Theory of Computing Systems (2017), DOI: 10.1007/s00224-016-9699-8
[MR16-podc] Mostéfaoui A. and Raynal M., Two-bit messages are sufficient to implement atomic read/write registers in crash-prone systems. Proc. 35th ACM Symposium on Principles of Distributed Computing (PODC'16), ACM Press, pp. 381-390 (2016)
[PMMJ16] Perrin M., Mostéfaoui A., Pétrolia M., and Jard Cl., On composition and implementation of sequential consistency. Proc. 30th Int'l Symposium on Distributed Computing (DISC'16), Springer LNCS 9888, pp. 284-297 (2016)
[R02] Raynal M., Sequential consistency as lazy linearizability (brief announcement). Proc. 14th ACM Symposium on Parallel Algorithms and Architectures (SPAA'02), ACM Press, pp. 151-152 (2002)
[R10] Raynal M., Communication and Agreement Abstractions for Fault-tolerant Asynchronous Distributed Systems. Morgan & Claypool Publishers, 251 pages, ISBN 978-1-60845-293-4 (2010)
[R13] Raynal M., Distributed Algorithms for Message-passing Systems. Springer, 510 pages, ISBN 978-3-642-38122-5 (2013)
[R13-1] Raynal M., Concurrent Programming: Algorithms, Principles and Foundations. Springer, 515 pages, ISBN 978-3-642-32026-2 (2013)
[RST91] Raynal M., Schiper A., and Toueg S., The causal ordering abstraction and a simple way to implement it. Information Processing Letters, 39:343-351 (1991)
[R08] Ruppert E., Implementing shared registers in asynchronous message-passing systems. Springer Encyclopedia of Algorithms, pp. 400-403 (2008)
[T36] Turing A.M., On computable numbers, with an application to the Entscheidungsproblem. Proc. of the London Mathematical Society, 42:230-265 (1936)
[V12] Vukolic M., Quorum Systems, with Applications to Storage and Consensus. Morgan & Claypool Publishers, 132 pages, ISBN 978-1-60845-683-3 (2012)

§ AN IMPLEMENTATION OF SCD-BROADCAST IN MESSAGE-PASSING SYSTEMS
This section shows that the SCD-broadcast communication abstraction is not an oracle-like object which allows us to extend our understanding of computing but cannot be implemented. It describes an implementation of SCD-broadcast in CAMP_n,t[t<n/2], which is the weakest assumption on process failures that allows a read/write register to be built on top of an asynchronous message-passing system <cit.> (see footnote <ref>). To simplify the presentation, and without loss of generality, we consider that the communication channels are FIFO.
The associated communication operations are denoted fifo_broadcast() and fifo_receive().

§.§ Algorithm <ref>
Local variables at a process p_i
Each process p_i manages the following local variables.
* buffer_i: a buffer where the messages not yet scd-delivered in a message set are stored.
* to_deliver_i: the next set of messages to be scd-delivered.
* sn_i: a local sequence number (initialized to 0), which measures the local progress of p_i.
* clock_i[1..n]: an array of sequence numbers; clock_i[j] is the greatest sequence number x such that the application message identified by ⟨x,j⟩ was in a message set scd-delivered by p_i.

Operation scd_broadcast()
When p_i invokes scd_broadcast(m), where m is an application message, it sends the message FORWARD(m, i, sn_i, i, sn_i) to itself (this simplifies the writing of the algorithm), and waits until it has no more message from itself pending in buffer_i, which means it has scd-delivered a set containing m.

A protocol message FORWARD() (line <ref>) is made up of five fields: the associated application message m, and two pairs, each made up of a sequence number and a process identity. The first pair (sd, sn) is the identity of the application message, while the second one (f, snf) is the local progress (sn_f) of the forwarder process p_f when it forwards this protocol message.

Reception of FORWARD(m, sd, sn, f, snf)
When a process p_i receives such a protocol message, it first invokes forward(m, sd, sn, f, snf) to participate in the reliable broadcast of this message (line <ref>), and then invokes try_deliver() to see if a message set can be scd-delivered (line <ref>).

Procedure forward()
This procedure can be seen as an enrichment (with the fields f and snf) of the reliable broadcast implemented by the messages FORWARD(m, sd, sn, -, -). Considering such a message FORWARD(m, sd, sn, f, snf), m was scd-broadcast by p_sd at its local time sn, and relayed by the forwarding process p_f at its local time snf. If sn ≤ clock_i[sd], p_i has already scd-delivered a message set containing m (see lines <ref> and <ref>). If sn > clock_i[sd], there are two cases.
* The message m is not in buffer_i. In this case, p_i creates a quadruplet msg and adds it to buffer_i (lines <ref>-<ref>). This quadruplet ⟨msg.m, msg.sd, msg.sn, msg.cl⟩ is such that
* the field msg.m contains the application message m,
* the field msg.sd contains the id of the sender of this application message,
* the field msg.sn contains the local date associated with m by its sender,
* the field msg.cl is an array of size n, such that msg.cl[x] = the sequence number (initially +∞) associated with m by p_x when it broadcast FORWARD(msg.m, -, -, -, -). This last field is crucial in the scd-delivery of a message set containing m.
After the quadruplet msg has been built, p_i first adds it to buffer_i (line <ref>), and invokes (line <ref>) fifo_broadcast FORWARD(m, sd, sn, i, sn_i) to implement the reliable broadcast of m identified by ⟨sd, sn⟩. Finally, p_i records its progress by increasing sn_i (line <ref>).
* There is a quadruplet msg in buffer_i associated with m, i.e., msg = ⟨m, sd, -, -⟩ ∈ buffer_i (predicate of line <ref>).
In this case, p_i assigns snf to msg.cl[f] (line <ref>), thereby indicating that m was known and forwarded by p_f at its local time snf.

Procedure try_deliver()
When it executes try_deliver(), p_i first computes the set to_deliver_i of the quadruplets msg containing application messages m which have been seen by a majority of processes (line <ref>). From p_i's point of view, a message has been seen by a process p_f if msg.cl[f] has been set to a finite value (line <ref>). If a majority of processes received first a message FORWARD(m',-,-,-,-) and later another message FORWARD(m,-,-,-,-), it might be that some process p_j scd-delivered a set containing m' before scd-delivering a set containing m. Therefore, p_i must avoid scd-delivering a set containing m before scd-delivering a set containing m'. This is done at line <ref>, where p_i withdraws the quadruplet msg corresponding to m if it does not have enough information to deliver m' (i.e., the corresponding msg' is not in buffer_i) or it does not have the proof that the situation cannot happen, i.e., no majority of processes saw the message corresponding to msg before the message corresponding to msg'. If to_deliver_i is not empty after it has been purged (lines <ref>-<ref>), p_i computes a message set to scd-deliver. This set ms contains all the application messages in the quadruplets of to_deliver_i (line <ref>). These quadruplets are withdrawn from buffer_i (line <ref>). Moreover, before this scd-delivery, p_i needs to update clock_i[x] for all the entries such that x = msg.sd, where msg ∈ to_deliver_i (line <ref>). This update is needed to ensure that the future uses of the predicate of line <ref> are correct.

§.§ Proof of Algorithm <ref>
If a process scd-delivers a set containing m, some process invoked scd_broadcast(m).
If process p_i scd-delivers a set containing a message m, it has previously added into buffer_i a quadruplet msg such that msg.m = m (line <ref>), for which it has fifo-received at least n/2 messages FORWARD(m,-,-,-,-). The first of these messages ever sent was sent after a process invoked scd_broadcast(m).

No process scd-delivers the same message twice.
After a message m scd-broadcast by p_sd with a sequence number sn is scd-delivered by p_i, we have clock_i[sd] ≥ sn thanks to line <ref>, and there is no msg ∈ buffer_i with msg.sd = sd and msg.sn = sn, as it was removed on line <ref>. Thanks to line <ref>, no such msg' will be added again to buffer_i. As to_deliver_i is defined as a subset of buffer_i on line <ref>, m will never be scd-delivered by p_i again.

If a message FORWARD(m, sd, sn, i, sn_i) is broadcast by a non-faulty process p_i, then each non-faulty process p_j broadcasts a single message FORWARD(m, sd, sn, j, sn_j).
First, we prove that p_j broadcasts a message FORWARD(m, sd, sn, j, sn_j). As p_i is non-faulty, p_j will eventually receive the message sent by p_i. At that time, if sn > clock_j[sd], after the condition on line <ref> and whatever its result, buffer_j contains a quadruplet msg with msg.sd = sd and msg.sn = sn. That msg was inserted at line <ref> (possibly after the reception of a different message), just before p_j sent a message FORWARD(m, sd, sn, j, sn_j) at line <ref>. Otherwise, clock_j[sd] was incremented on line <ref>, when validating some msg' added to buffer_j after p_j received a (first) message FORWARD(msg'.m, sd, clock_j[sd], f, -) from p_f. Because the messages FORWARD() are fifo-broadcast (hence they are delivered in their sending order), p_sd sent the message FORWARD(msg.m, sd, sn, sd, sn) before FORWARD(msg'.m, sd, clock_j[sd], sd, clock_j[sd]), and all other processes only forward messages, so p_j received a message FORWARD(msg.m, sd, sn, -, -) from p_f before the message FORWARD(msg'.m, sd, clock_j[sd], -, -).
At that time sn > clock_j[sd], so the previous case applies. After p_j broadcasts its message forward(m, sd, sn, j, sn_j) at line <ref>, there is a quadruplet msg ∈ buffer_j with ts(msg) = ⟨sd, sn⟩ until it is removed at line <ref>, after which clock_j[sd] ≥ sn. Therefore, one of the conditions at lines <ref> and <ref> will stay false for the stamp ts(msg), and p_j will never execute line <ref> with the same stamp ⟨sd, sn⟩ later.

Lemma (MS-Ordering). Let p_i be a process that scd-delivers a set ms_i containing a message m and later scd-delivers a set ms′_i containing a message m′. No process p_j scd-delivers first a set ms′_j containing m′ and later a set ms_j containing m.

Proof. Let us suppose, for contradiction, that there are two messages m and m′ and two processes p_i and p_j such that p_i scd-delivers a set ms_i containing m and later scd-delivers a set ms′_i containing m′, while p_j scd-delivers a set ms′_j containing m′ and later scd-delivers a set ms_j containing m. When m is delivered by p_i, there is an element msg ∈ to_deliver_i such that msg.m = m and, because of line <ref>, p_i has received a message forward(m, −, −, −, −) from more than n/2 processes.
* If there is no element msg′ ∈ buffer_i such that msg′.m = m′: since m′ has not been delivered by p_i yet, p_i has not received a message forward(m′, −, −, −, −) from any process (lines <ref> and <ref>). Therefore, because the communication channels are FIFO, more than n/2 processes have sent a message forward(m, −, −, −, −) before sending a message forward(m′, −, −, −, −).
* Otherwise, msg′ ∉ to_deliver_i after line <ref>. As the communication channels are FIFO, again more than n/2 processes have sent a message forward(m, −, −, −, −) before a message forward(m′, −, −, −, −).
Using the same reasoning, it follows that, when m′ is delivered by p_j, more than n/2 processes have sent a message forward(m′, −, −, −, −) before sending a message forward(m, −, −, −, −). There then exists a process p_k in the intersection of the two majorities that has both sent a message forward(m, −, −, −, −) before sending forward(m′, −, −, −, −) and sent a message forward(m′, −, −, −, −) before sending forward(m, −, −, −, −). However, by Lemma <ref>, p_k can only send one message forward(m′, −, −, −, −) and one message forward(m, −, −, −, −), which leads to a contradiction.

Lemma (Local termination). If a message forward(m, sd, sn, i, sn_i) is fifo-broadcast by a non-faulty process p_i, this process scd-delivers a set containing m.

Proof. Let p_i be a non-faulty process. For any pair of quadruplets msg and msg′ ever inserted in buffer_i, let ts = ts(msg) and ts′ = ts(msg′). Let →_i be the dependency relation defined as follows: ts →_i ts′ holds, by definition, when |{j : msg′.cl[j] < msg.cl[j]}| ≤ n/2 (i.e. the dependency does not exist if p_i knows that a majority of processes have seen the first update (due to msg′) before the second (due to msg)). Let →_i^⋆ denote the transitive closure of →_i.

Let us suppose (by contradiction) that the timestamp ⟨sd, sn⟩ associated with the message m (carried by the protocol message forward(m, sd, sn, i, sn_i) fifo-broadcast by p_i) has an infinity of predecessors according to →_i^⋆. As the number of processes is finite, an infinity of these predecessors have been generated by the same process, say p_f. Let ⟨f, sn_f(k)⟩, k ∈ ℕ, be the infinite sequence of the timestamps associated with the invocations of scd_broadcast() issued by p_f. The situation is depicted in Figure <ref>. As p_i is non-faulty, p_f eventually receives the message forward(m, sd, sn, i, sn_i), which means that p_f broadcast an infinity of messages forward(m(k), f, sn_f(k), f, sn_f(k)) after forward(m, sd, sn, f, sn_f).
Let ⟨f, sn_f(k1)⟩ and ⟨f, sn_f(k2)⟩ be the timestamps associated with the next two messages sent by p_f, with sn_f(k1) < sn_f(k2). By hypothesis, we have ⟨f, sn_f(k2)⟩ →_i^⋆ ⟨sd, sn⟩. Moreover, all processes received their first message forward(m, sd, sn, −, −) before their first message forward(m(k), f, sn_f(k), −, −), so ⟨sd, sn⟩ →_i^⋆ ⟨f, sn_f(k1)⟩. Let us express the path ⟨f, sn_f(k2)⟩ →_i^⋆ ⟨f, sn_f(k1)⟩ as ⟨f, sn_f(k2)⟩ = ⟨sd′(1), sn′(1)⟩ →_i ⟨sd′(2), sn′(2)⟩ →_i … →_i ⟨sd′(m), sn′(m)⟩ = ⟨f, sn_f(k1)⟩. In the time interval starting when p_f sent the message forward(m(k1), f, sn_f(k1), f, sn_f(k1)) and finishing when it sent the message forward(m(k2), f, sn_f(k2), f, sn_f(k2)), the waiting condition of line <ref> became true, so p_f scd-delivered a set containing the message m(k1) and, according to Lemma <ref>, no set containing the message m(k2). Therefore, there is an index l such that p_f delivered sets containing messages associated with the timestamps ⟨sd′(l′), sn′(l′)⟩ for all l′ > l, but not for l′ = l. Because the channels are FIFO, and thanks to lines <ref> and <ref>, this means that a majority of processes have sent a message forward(−, sd′(l+1), sn′(l+1), −, −) before a message forward(−, sd′(l), sn′(l), −, −), which contradicts the fact that ⟨sd′(l), sn′(l)⟩ →_i ⟨sd′(l+1), sn′(l+1)⟩.

Let us now suppose that a non-faulty process p_i has fifo-broadcast a message forward(m, sd, sn, i, sn_i) (line <ref>). It inserted a quadruplet msg with timestamp ⟨sd, sn⟩ at line <ref> and, by what precedes, ⟨sd, sn⟩ has a finite number of predecessors ⟨sd_1, sn_1⟩, …, ⟨sd_l, sn_l⟩ according to →_i^⋆. As p_i is non-faulty, according to Lemma <ref>, it eventually receives a message forward(−, sd_k, sn_k, −, −), for all 1 ≤ k ≤ l, from all non-faulty processes, which are in a majority. Let pred be the set of all quadruplets msg′ such that ⟨msg′.sd, msg′.sn⟩ →_i^⋆ ⟨sd, sn⟩. Let us consider the moment when p_i receives the last message forward(−, sd_k, sn_k, f, sn_f) sent by a correct process p_f. For all msg′ ∈ pred, either msg′.m has already been delivered or msg′ is inserted into to_deliver_i at line <ref>. Moreover, no msg′ ∈ pred will be removed from to_deliver_i at line <ref>, as the removal condition is the same as the definition of →_i. In particular, for msg′ = msg, either m has already been scd-delivered, or m is present in to_deliver_i at line <ref> and will be scd-delivered at line <ref>.

Lemma (Termination-1). If a non-faulty process scd-broadcasts a message m, it scd-delivers a message set containing m.

Proof. If a non-faulty process scd-broadcasts a message m, it sends a message forward(m, i, sn_i, i, sn_i) at line <ref>, so it scd-delivers a message set containing m by the previous lemma.

Lemma (Termination-2). If a non-faulty process scd-delivers a message m, every non-faulty process scd-delivers a message set containing m.

Proof. Suppose a non-faulty process p_i scd-delivers a message m. At line <ref>, there is msg ∈ to_deliver_i such that msg.m = m. At line <ref>, msg ∈ buffer_i, and msg was inserted in buffer_i at line <ref>, just before p_i sent the message forward(m, sd, sn, i, sn_i). By Lemma <ref>, every non-faulty process p_j sends a message forward(m, sd, sn, j, sn_j), so, by Lemma <ref>, p_j scd-delivers a message set containing m.

Theorem. Algorithm <ref> implements the SCD-broadcast communication abstraction in CAMP_{n,t}[t < n/2]. Moreover, it requires O(n²) protocol messages per invocation of scd_broadcast().

Proof. The proof follows from Lemma <ref> (Validity), Lemma <ref> (Integrity), Lemma <ref> (MS-Ordering), Lemma <ref> (Termination-1), and Lemma <ref> (Termination-2). The O(n²) message complexity comes from the fact that, due to the predicates of lines <ref> and <ref>, each application message m is forwarded at most once by each process (line <ref>).
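To make the quadruplet bookkeeping above concrete, the following minimal Python sketch mirrors, for a single process, the data structures and the delivery predicate of try_deliver(). It is only an illustration of the local logic, not the algorithm's line-by-line pseudocode: message transport, the FIFO channels and the failure model are abstracted behind a send callback, and all identifiers are ours. An invocation of scd_broadcast(m) by p_i reduces, as described above, to calling receive_forward(m, i, sn_i, i, sn_i) on itself.

```python
import math
from dataclasses import dataclass

@dataclass
class Quad:
    m: object   # the application message
    sd: int     # id of the original sender; <sd, sn> identifies the message
    sn: int     # sender's sequence number
    cl: list    # cl[x] = seq. number at which p_x forwarded m (+inf if unseen)

class ScdProcess:
    def __init__(self, i, n, send):
        self.i, self.n = i, n
        self.send = send          # callback: fifo-broadcast a forward() message
        self.sn = 0               # local progress counter sn_i
        self.clock = [0] * n      # clock_i[sd] = largest sn scd-delivered from p_sd
        self.buffer = []          # quadruplets not yet scd-delivered

    def receive_forward(self, m, sd, sn, f, snf):
        if sn <= self.clock[sd]:  # m already scd-delivered: ignore
            return
        quad = next((q for q in self.buffer if (q.sd, q.sn) == (sd, sn)), None)
        if quad is None:          # first time m is seen: create, record, relay
            quad = Quad(m, sd, sn, [math.inf] * self.n)
            quad.cl[f] = snf
            quad.cl[self.i] = self.sn
            self.buffer.append(quad)
            self.send(m, sd, sn, self.i, self.sn)   # forward exactly once
            self.sn += 1
        else:                     # already known: just record p_f's progress
            quad.cl[f] = snf

    def try_deliver(self):
        maj = self.n // 2 + 1     # "more than n/2" processes
        seen = lambda q: sum(c < math.inf for c in q.cl) >= maj
        before = lambda q, r: sum(a < b for a, b in zip(q.cl, r.cl)) >= maj
        ready = [q for q in self.buffer if seen(q)]
        changed = True            # purge until a fixpoint is reached
        while changed:
            changed = False
            for q in list(ready):
                # q may be delivered only if, for every still-blocked q',
                # a majority of processes forwarded q before q'
                if any(not before(q, r) for r in self.buffer if r not in ready):
                    ready.remove(q)
                    changed = True
        if not ready:
            return None
        for q in ready:
            self.clock[q.sd] = max(self.clock[q.sd], q.sn)
            self.buffer.remove(q)
        return [q.m for q in ready]   # the scd-delivered message set
```

Running n such objects over FIFO queues and calling try_deliver() after each reception reproduces, on failure-free executions, the set-by-set delivery pattern discussed above; the fixpoint loop plays the role of the purge of lines <ref>-<ref>.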
The next corollary follows from (i) Theorems <ref> and <ref>, and (ii) the fact that the constraint t < n/2 is an upper bound on the number of faulty processes tolerable when building a read/write register (or snapshot object) <cit.>.

Corollary. Algorithm <ref> is resiliency optimal.

§ BUILDING AN MWMR ATOMIC REGISTER ON TOP OF CAMP_{n,t}[SCD-BROADCAST]

This appendix shows the genericity dimension of Algorithm <ref>. It presents trivial simplifications of it, which build MWMR atomic registers and MWMR sequentially consistent registers.

§.§ The algorithm

Let REG denote the MWMR atomic read/write register that is built. The algorithm that builds it is a trivial simplification of the snapshot Algorithm <ref>, namely its projection onto a single MWMR atomic register. REG is now locally represented by a local variable reg_i and the associated timestamp ts_i, initialized to ⟨0, −⟩. The message sent at line <ref> is now write(v, ⟨ts_i.date+1, i⟩), and the predicate of line <ref> simplifies to "there are write() messages".

§.§ Proof of the algorithm

The proof is a simplified version of the proof of Theorem <ref>. For self-completeness, we give here its full proof, even if some parts of it are "cut-and-paste" of parts of proofs given in Section <ref>. As in that section, let us associate a timestamp ts(op) with each operation op as follows (this is the place where the proof is simplified with respect to a snapshot object).
* Case op = write(v). Let p_i be the invoking process; ts(op) is the timestamp of v as defined by p_i at line <ref>, i.e., ⟨ts_i.date+1, i⟩.
* Case op = read(). Let w be the value returned by the read; ts(op) is then the timestamp associated with w at line <ref> by its writer.
Let op1 and op2 be any two operations. The relation ≺ on the whole set of operations is defined as follows: op1 ≺ op2 if op1 terminated before op2 started. It is easy to see that ≺ is a real-time-compliant partial order on all the operations. The reader can easily check that the statement and the proof of Lemma <ref> (applied to the termination of read and write operations) and of Lemma <ref> (applied to the total order on the write operations, compliant with both the sequential specification of a register and their real-time occurrence order) remain valid for the algorithm suited to an MWMR atomic read/write register. The next lemma addresses the read operations (which are simpler to manage than snapshot operations).

Lemma. The read/write register REG is linearizable.

Proof. Let us now insert each read operation in the previous (real-time-compliant) total order as follows. Let read1() be a read operation whose timestamp is ⟨date1, i⟩. This operation is inserted just after the write operation write1() that has the same timestamp (this write wrote the value read by read1()). Let us remark that, as read1() obtained the value timestamped ⟨date1, i⟩, it did not terminate before write1() started. It follows that the insertion of read1() into the total order cannot violate the real-time order between read1() and write1(). Let us consider the operation write2() that follows write1() in the write total order. If read1() ≺ write2(), the placement of read1() in the total order is real-time-compliant. If ¬(read1() ≺ write2()), then, due to the timestamp obtained by read1(), we cannot have write2() ≺ read1(). It follows that in this case also the placement of read1() in the total order is real-time-compliant. Finally, let us consider two read operations read1() and read2() which have the same timestamp ⟨date, i⟩ (hence, they read from the same write operation, say write1()). Both are inserted after write1() in the order of their invocations (if read1() and read2() started simultaneously, they are inserted according to the order on the identities of the processes that invoked them).
Hence, the read and write operations are linearizable, which concludes the proof of the lemma.

Theorem. The read/write register REG is an MWMR atomic read/write register.

Proof. The proof follows from Lemma <ref>, Lemma <ref>, and Lemma <ref>.

§.§ The case of an SWMR atomic register

When the register REG can be written by a single process (say p_k), the algorithm simplifies. The timestamps disappear at all processes and, as only the writer p_k can invoke REG.write(), it manages a simple date date_k (which is actually a sequence number). The modifications are:
* Line <ref> becomes: date_k ← date_k + 1; scd_broadcast write(v, date_k).
* Lines <ref>-<ref> become: if (there are write() messages) then let date be the maximal date in the write() messages received; reg_i ← the value associated with date end if.
Let us remark that, due to its local Boolean flag, the writer p_k scd-delivers message sets containing at most one write() message.

§.§ On sequential consistency

The case of an MWMR sequentially consistent register. As indicated in the Introduction, sequential consistency was introduced in <cit.>. It is atomicity minus the requirement stating that "if an operation op1 terminates before an operation op2 starts, then op1 must appear before op2 in the sequence of the read and write operations". As noticed in <cit.>, sequential consistency can be seen as a weakened form of atomicity, namely lazy linearizability. The composition of sequentially consistent registers is investigated in <cit.>. The algorithm for sequential consistency presented in <cit.> and Algorithm <ref> are based on similar principles. The constraint t < n/2 is also a necessary and sufficient condition to implement a sequentially consistent read/write register in CAMP_{n,t}[∅]. The reader can check that an algorithm building a sequentially consistent MWMR read/write register can easily be obtained from Algorithm <ref> as simplified in Section <ref>. One only needs to suppress the synchronization messages sync(), which ensure the compliance with respect to real time. The concerned lines are lines <ref>-<ref> (read synchronization) and lines <ref>-<ref> (write synchronization). In a simple way, this shows the versatility dimension of Algorithm <ref>.

From sequential consistency to atomicity. Given a sequentially consistent snapshot object, Algorithm <ref> builds the SCD-broadcast communication abstraction. (As the reader can check, this follows from the fact that, when looking at its proof, this algorithm relies only on the fact that the operations on the snapshot object can be totally ordered.) Hence, using on top of it the SCD-broadcast-based Algorithm <ref>, we obtain an atomic snapshot object. It follows that, thanks to SCD-broadcast, the algorithms presented in the paper allow a sequentially consistent snapshot object to be transformed into an atomic snapshot object (and it is known that, differently from sequentially consistent objects, atomic objects are composable for free <cit.>).
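To illustrate the genericity just discussed, here is a minimal Python sketch of the register construction of this appendix in its sequentially consistent variant, i.e. with the synchronization messages already suppressed as described above. The SCD-broadcast layer is assumed to be provided (for instance by an implementation in the style of the earlier sketch); the class and callback names are ours, not the paper's.

```python
class SeqConsistentRegister:
    """MWMR register built on SCD-broadcast: sketch of the construction above,
    with the sync() messages removed, hence the sequentially consistent
    (lazily linearizable) variant rather than the atomic one."""

    def __init__(self, i, scd_broadcast):
        self.i = i
        self.scd_broadcast = scd_broadcast   # hand a message to the SCD layer
        self.reg = None                      # local copy reg_i of REG
        self.ts = (0, -1)                    # timestamp ts_i = <date, writer id>

    def write(self, v):
        date = self.ts[0] + 1                # strictly increasing date
        self.scd_broadcast(("WRITE", v, (date, self.i)))

    def read(self):
        return self.reg                      # no sync message: returns at once

    def on_scd_deliver(self, message_set):
        # All write() messages of one scd-delivered set are applied together;
        # equal dates are broken by the writer identity (tuple comparison).
        for kind, v, ts in message_set:
            if kind == "WRITE" and ts > self.ts:
                self.reg, self.ts = v, ts
```

Restoring atomicity amounts to reinstating the read and write synchronization messages and waiting for their scd-delivery before returning, exactly the lines whose suppression is discussed above.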
Projected Hartree-Fock as a Polynomial of Particle-Hole Excitations and Its Combination With Variational Coupled Cluster Theory
Gustavo E. Scuseria
December 30, 2023
===============================================================================================================================

In many statistical problems, several estimators are usually available for interval estimation of a parameter of interest, and hence the selection of an appropriate estimator is important. The criterion for a good estimator is to have a coverage probability close to the nominal level and a short interval length. However, these two concepts are in opposition to each other: high and low coverages are associated with longer and shorter interval lengths respectively. Some methods, such as bootstrap calibration, modify the nominal level to improve the coverage and thereby allow the selection of intervals based on interval lengths only. Nonetheless, these methods are computationally expensive. In this paper, we propose an index which offers an easy-to-compute approach to comparing confidence interval estimators, based on a compromise between the coverage probability and the confidence interval length. We illustrate that the confidence interval index has a range of values within the neighbourhood of the range of the coverage probability, [0,1]. In addition, a good confidence interval estimator has an index value approaching 1, and a bad confidence interval has an index value approaching 0. A simulation study was conducted to assess the finite sample performance of the index. The proposed index is illustrated with a practical example from the literature.

MSC: 62F99, 62G99

Keywords: Confidence interval; Empirical coverage probability; Confidence interval length; Bootstrap calibration

§ INTRODUCTION

Most statistical problems involve the estimation of some unknown parameter, θ, of a population from an observed sample using an estimator, θ̂ <cit.>. In order to provide a complete description of the information in the sample about θ, a confidence interval is usually constructed. The key concepts associated with confidence intervals are the coverage probability and the interval length. The former is the proportion of times the confidence interval encloses θ under many replications; the latter refers to the difference between the upper and the lower confidence limits. These two key concepts are related: longer confidence intervals have higher coverage probabilities approaching the nominal level, and shorter confidence intervals have lower coverage probabilities. In statistics, one is often faced with a number of confidence intervals for a parameter arising from different estimators or methods of estimation, and a decision has to be made on the "best" method of estimation. Since these two key concepts are in opposition to each other, that is, better coverage probability goes with greater interval length and vice versa, it is useful to have some practical way of combining these measures. In this paper, such an easy-to-compute measure is proposed and applied to two well-known problems as well as a practical example from the literature.

Suppose we have a sample, x = {x_1, …, x_n}, drawn from an unknown distribution function, F. Let ℓ(α;x) = (ℓ_L(α;x), ℓ_U(α;x)) be the α-level confidence interval for the unknown parameter θ. Also, denote by L(α) the average length of the confidence interval, that is, the average of the difference between the upper confidence limit ℓ_U(α;x) and the lower confidence limit ℓ_L(α;x).
Furthermore, let the coverage probability be given by η(α) = P(θ ∈ ℓ(α;x)). The estimation of η(α) is an important issue for statisticians, and the goal is to obtain a confidence interval estimator with estimated coverage probability (usually referred to as the empirical coverage probability), η̂(α), equal to the nominal coverage, 1−α <cit.>. However, it is often the case that η̂(α) is not exactly equal to 1−α. A requirement for a good confidence interval estimator is to have a short interval length and a coverage probability equal, or approximately equal, to the nominal coverage. As a result, confidence interval estimator selection can be done by a comparison of the intervals' coverage probabilities and lengths. However, this can be subjective, especially if several interval estimators are involved and a compromise is sought between coverage probability and interval length.

A handful of methods to overcome the difficulties in comparing confidence interval estimators rely on an adjustment of the interval lengths such that each interval gives a coverage probability close or equal to the nominal level, 1−α. In that case, the comparison of the confidence interval estimators can be done using the confidence interval lengths only. Examples of these methods in the literature include bootstrap calibration <cit.> and prepivoting <cit.>. The basic idea underlying bootstrap calibration is to obtain β (β < α) such that the resultant interval's coverage probability equals 1−α, i.e. η(β) = 1−α. Prepivoting, in turn, involves the transformation of the lower (and/or upper) confidence limit(s) by means of its estimated bootstrap distribution function. This has an important application in reducing the coverage error of bootstrap confidence intervals. In addition, prepivoting can be iterated, and this automatically moves the empirical coverage closer to the desired level, 1−α. However, in practice, these bootstrap-based procedures generally require computationally costly nested bootstraps (i.e. bootstrapping from the bootstrapped data). For example, in the case of a double bootstrap, the first bootstrap level needs B_1 resamples from the data, followed by resampling B_2 times from each of the single bootstrap samples. Thus, the computational cost involves B_1 × B_2 samples in addition to the confidence interval calculations. Also, in <cit.>, the authors were constrained in the number of estimators for impulse responses in large Vector Autoregressive Models due to the prohibitive computational cost. Even for the limited set of confidence interval estimators considered, in cases like the bias-corrected and accelerated bootstrap method, the computing time required for the evaluation of the estimators was over one year. Furthermore, <cit.> shows that the coverage precision increases with increasing levels of resampling. Thus, the level of resampling can be increased up to a point where the coverage is approximately equal to 1−α, at which stage the comparison of interval estimators can be done on the interval lengths only. However, in applications, this is limited by the huge computing power and time needed for such levels of bootstrap. As a result, some work has been done to reduce the computational burden involved in the use of these bootstrap-based procedures. Among these, <cit.> and <cit.> proposed a linear and a nonlinear interpolation, respectively, to reduce the level of bootstrap replications in calibration.
Also, <cit.> provides an algorithm for the double bootstrap, illustrated above, that reduces the B_1 × B_2 total resamples to a manageable level. These algorithms have had varying degrees of success in implementation. Nevertheless, in any practical application of these methods, the benefits of higher levels of resampling have to be weighed against the computational cost.

In this paper, we propose an index which offers a straightforward approach to comparing confidence interval estimators without additional Monte Carlo simulation or analytical derivations. The index is based on a compromise between the coverage probability and the confidence interval length. The need for such an index arose from a recent very large simulation study comparing different estimators of the tail index in extreme value theory. Running the simulation to obtain a variety of confidence intervals based on the different estimators was already computationally very intensive; applying a further computationally intensive calibration or double bootstrap would have been too costly in terms of computing time and resources. Hence, the proposed index was developed as a computationally inexpensive compromise between coverage probability and confidence interval length.

The rest of the paper is organised as follows. In Section <ref>, the bootstrap calibration method is presented. The proposed confidence interval index is presented in Section <ref>. In Section 4, we conduct a simulation study to assess the finite sample performance of the proposed index on four popular confidence interval estimators of the mean from a symmetric and a skewed distribution. In addition, several confidence interval estimators of the binomial proportion are examined using the index. Section <ref> deals with an application of the index to a study of the performance of several confidence interval estimators of the coefficient of variation from <cit.>. Finally, we present some concluding remarks in Section <ref>.

§ BOOTSTRAP CALIBRATION

<cit.> catalogues some procedures for generating confidence intervals with improved coverage probabilities. These include Edgeworth expansion (analytical) and bootstrapping (simulation). The author states, with references, that, provided the Edgeworth expansion and bootstrap procedures are valid, both produce results that have the same asymptotic error rates. In particular, the bootstrap procedure implements the Edgeworth correction through simulation in an automatic fashion. In view of this, we consider the bootstrap calibration method of <cit.> only. Let x_1, …, x_n be a random sample of size n from the distribution function F. We consider the estimation of the 100(1−α)% two-sided normal-theory confidence interval of the mean, θ = θ(F), given by

[θ_L, θ_U] = [θ̂ + z_{α/2} σ̂/√n,  θ̂ − z_{α/2} σ̂/√n].

Here, σ̂ is the square root of the unbiased variance estimate, and z_{α/2} = Φ^{−1}(α/2) is the corresponding quantile of the standard normal distribution Φ. If F is normally distributed and n is large, the estimated coverage probability, η̂(α), will be close to 1−α. However, for smaller n and non-normal distributions, η̂(α) may differ substantially from 1−α. The idea of calibration introduced by <cit.> is to replace α with β (β < α) such that η̂(β) ≈ 1−α. This in principle implies a repeated, possibly unending, search for β satisfying (<ref>), with each step of the search accompanied by bootstrapping samples to obtain η̂(β).
Thus, stated in this way, the method seems impractical. However, <cit.> and <cit.> proposed, respectively, a linear and a smooth nonlinear interpolation, which replace the seemingly endless search for β by a single level of bootstrap resampling. The calibration is then obtained by generating B bootstrap replications. Let x* = {x_1*, …, x_n*} be a bootstrap sample from x = {x_1, x_2, …, x_n}. Also, let θ̂* = θ̂(x*), σ̂* = σ̂(x*), and let t*_j = √n (θ̂*_j − θ̂)/σ̂* be the t statistic computed from the jth bootstrap sample. <cit.> defines λ̂_j = 1 − Φ(|t*_j|), and β is taken as the α-quantile of (λ̂_1, …, λ̂_B). In addition, the author argues that the calibration method above is equivalent to the bootstrap root method of <cit.>. The implementation of the calibration method leads to intervals with error rates comparable to the bootstrap t and the accelerated bias-corrected percentile methods. However, it is known that these confidence interval estimation methods have limitations with respect to coverage probability and interval lengths <cit.>.

§ THE INDEX

We introduce an index which offers a straightforward way of comparing confidence interval estimators while avoiding the computational burden of the bootstrap-based methods. In addition, the index abstracts the information provided by the confidence interval length and the coverage probability, thereby making it a standalone value for comparative purposes. The idea behind the proposed index was to obtain a value that is simple, easy to interpret, and takes into account both the confidence interval length and the coverage probability. In addition, the index is expected to have a range within the neighbourhood of the desired coverage probability and, hence, can easily be reported together with it (e.g. graphically) for comparative purposes. Consider R confidence interval estimators, and let η = {η_1, …, η_R}′ and L = {L_1, …, L_R}′ denote the vectors of realised coverage probabilities and average interval lengths respectively. The confidence interval index, I, is defined as

I(L_j, η_j; α) = k_α ( 1 − (1/2) · (1 + H(η_j; α)) / (1 + η_j/(1+L_j)) ),  L_j ≥ 0,  0 ≤ η_j ≤ 1,  j = 1, …, R,

where k_α is a constant depending on the significance level, α. Here, H is a loss function which describes the penalty incurred by the deviation of the empirical coverage probability from 1−α. In this study, we choose H as a simple absolute loss function defined by

H(η_j; α) = |1 − α − η_j|,  0 ≤ η_j ≤ 1,  j = 1, …, R.

Consequently, using (<ref>), the scaling parameter is taken as

k_α = (4 − 2α)/(3 − 2α),

to obtain a range of values of I(L_j, η_j; α) within the neighbourhood of the desired coverage probability. To derive the range of values of the index, I(L_j, η_j; α), we examine its limit in four extreme cases:
I. L_j → 0, η_j → 0  ⟹  I(L_j, η_j; α) → k_α α/2.
II. L_j → ∞, η_j → 0  ⟹  I(L_j, η_j; α) → k_α α/2.
III. L_j → ∞, η_j → 1−α  ⟹  I(L_j, η_j; α) → k_α/2.
IV. L_j → 0, η_j → 1−α  ⟹  I(L_j, η_j; α) → 1.
Thus, I(L_j, η_j; α) has range [k_α α/2, 1]. A bad confidence interval estimator (i.e. an interval with low coverage probability and large interval length) corresponds to cases I and II, with I(L_j, η_j; α) → k_α α/2. On the other hand, a good confidence interval estimator (i.e. case IV) has I(L_j, η_j; α) → 1. We note that the range of I(L_j, η_j; α) can be transformed to the desirable range of the coverage probability, [0,1], via the affine function f(x) = 2x/(2 − k_α α) − k_α α/(2 − k_α α), for increased interpretability. From the aforementioned limits, we conclude that, generally, a higher value of the index means a better confidence interval estimator of the parameter θ.
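Because I(L_j, η_j; α) is an explicit closed form in the pair (L_j, η_j), it can be evaluated at essentially no cost, in contrast with calibration. The following short Python sketch (function and variable names are ours) implements the index with either loss function and reproduces the limiting cases above for α = 0.05.

```python
def ci_index(L, eta, alpha=0.05, loss="absolute"):
    """Confidence interval index I(L, eta; alpha) with scaling
    k_alpha = (4 - 2*alpha) / (3 - 2*alpha) and either the absolute loss
    H = |1 - alpha - eta| or the square loss H = (1 - alpha - eta)**2."""
    if loss == "absolute":
        H = abs(1.0 - alpha - eta)
    else:
        H = (1.0 - alpha - eta) ** 2
    k = (4.0 - 2.0 * alpha) / (3.0 - 2.0 * alpha)
    return k * (1.0 - 0.5 * (1.0 + H) / (1.0 + eta / (1.0 + L)))

# Limiting cases for alpha = 0.05 (nominal coverage 0.95):
print(round(ci_index(0.0, 0.0), 3))     # cases I/II: k_alpha*alpha/2 = 0.034
print(round(ci_index(1e12, 0.95), 3))   # case III:   k_alpha/2      = 0.672
print(round(ci_index(0.0, 0.95), 3))    # case IV:    exactly 1.0
```

In particular, for α = 0.05 the attainable range is [0.034, 1.000], which matches the interval quoted below.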
A higher index value thus indicates an estimator whose coverage probability is close to the nominal value and whose interval lengths are short. In addition, as the index penalises deviation from the nominal level as well as larger interval lengths, estimators with small coverage probabilities and/or large interval lengths generally have a smaller confidence interval index. Therefore, when using this index for comparative purposes, the estimator with the largest index value will be chosen ahead of those with smaller values. In the subsequent sections, we take α = 0.05 and, thus, I(L_j, η_j) ∈ [0.034, 1.000], j = 1, …, R. We note that other loss functions can be chosen for this purpose, for example a quadratic loss, a Huber function <cit.>, among others, with appropriate values of k_α determined analytically. For example, consider the case of the square loss function, H(η_j; α) = (1 − α − η_j)², 0 ≤ η_j ≤ 1, j = 1, …, R. The value of k_α can be taken as in (<ref>); however, the limits of I(L_j, η_j; α) corresponding to cases I, II, III and IV are then, respectively, k_α α(2−α)/2, k_α α(2−α)/2, k_α/2 and 1. Thus, the range of the resulting I(L_j, η_j; α) is [α(2−α)²/(3−2α), 1]. For both loss functions considered, the rationale behind the choice of k_α is to obtain a range of values of I(L_j, η_j; α) in the neighbourhood of the range of the coverage probability, for ease of interpretation. Lastly, the effect of the choice of loss function is reflected in the range of values of I(L_j, η_j; α).

§ SIMULATION STUDY

In this section, we study the performance of the confidence interval index, I, through a simulation study. In this regard, we assess the performance of several confidence interval estimators of the mean from a symmetric and from a skewed (or asymmetric) distribution. In addition, several confidence interval estimators of the binomial proportion are examined using the index.

§.§ Confidence Interval Index for the Mean

We present a simulation study on the estimation of the mean from a symmetric and a skewed distribution in the two subsections that follow. In the former case, samples were generated from a normal distribution, and in the latter from a lognormal distribution. To study the behaviour of the estimators, samples of size n (n = 10, 50, 100, 200, 500, 1000) were generated from a normal or a lognormal distribution with mean μ and variance σ². The parameter of interest is the population mean, μ, which is estimated by the sample mean, x̄. The 95% two-sided confidence interval of μ was constructed using four different methods, namely the normal theory interval, the Johnson t interval <cit.>, and the bootstrap-based intervals: the bootstrap percentile and the bias-corrected and accelerated (BCa) intervals <cit.>. The Johnson t interval, unlike the normal theory interval, adjusts for positive and negative skewness in a data set by shifting the endpoints to the right and left respectively; it is given by ( x̄ + (κ̂_3/(6√n))(1 + 2t_α²) ) ± t_α s/√n, where κ̂_3 is the estimate of the population skewness E(X−μ)³/σ³, t_α is the α-quantile of the t distribution with n−1 degrees of freedom, and s is the sample standard deviation. The following procedure was used to compute the index and its summary statistics:
A1. Generate N (N = 1000) samples, each of size n, from N(μ, σ²).
A2. Draw B (B = 1000) bootstrap samples from each sample in A1 and use these to compute the bootstrap confidence intervals (i.e. bootstrap percentile and BCa) of the mean. Compute the average of the N interval lengths, L, and the empirical coverage probability, η̂, for both bootstrap interval types separately.
A3.
Compute the confidence intervals for the mean using the normal theory interval and the Johnson t interval for each of the N samples in A1. Calculate the average of the N interval lengths, L, and the empirical coverage probability, η̂, for the two interval types.
A4. Repeat A1-A3 a large number of times, R (R = 5000), to obtain the pairs {(η̂_1, L_1), …, (η̂_R, L_R)} and, hence, the confidence interval indexes I^(i,j), i = 1, …, 4, j = 1, …, R.
A5. Compute summary statistics for the indexes I^(i,·), i = 1, …, 4 (a minimal code sketch of steps A1 and A3 is given later in this subsection).

§.§.§ Mean of a Symmetric Distribution

Table <ref> shows the summary statistics of the index for the four interval types computed for observations from N(2,1). It can be seen that, as the sample size increases, I tends to 1: the confidence interval estimators improve with increasing sample size. This is expected, in line with the weak law of large numbers: x̄ approaches μ as n → ∞. In addition, for smaller sample sizes (i.e. n ≤ 50), the Johnson t interval has the largest I values in most cases, followed by the normal interval. Generally, these two estimators provide better confidence intervals, as the summary statistics of their indexes show larger measures of location, smaller variability, larger negative skewness and greater peakedness. In the case of large sample sizes (i.e. n > 50), there is not much difference between the performance of the normal theory and the Johnson t interval estimators of the mean. The bootstrap percentile interval is the next best confidence interval estimator of the mean, followed by the BCa interval estimator, based on the summary statistics. Since the sample mean is an unbiased estimator of the population mean, the percentile interval is expected, as shown in the simulation study, to give better intervals in terms of coverage and interval lengths. We remark that the simulation was also carried out for larger sample variances, and the results show wider interval lengths, leading to smaller values of the index. Due to space considerations, these results are not included, but they can be obtained from the authors upon request.

Furthermore, we consider the performance of I in relation to the bootstrap calibration of <cit.> and <cit.>. Again, the estimation of the mean of a normal distribution is considered. Here, we considered smaller sample sizes, where the empirical coverage probability tends to be smaller than the nominal level, 1−α. In that case, calibration can be used to increase the empirical coverage probability to approximately 1−α. Our aim here is to assess the conclusions reached for calibrated intervals in relation to the index. The results of the simulation study for observations from N(2,1) are presented in Table <ref>. For smaller sample sizes (n ≤ 20), the Johnson t interval has empirical coverage probabilities close to the nominal level of 0.95. Calibration of such an interval leads to overestimation of the coverage probability; we therefore do not calibrate interval estimators whose empirical coverage probability is already close to 0.95. The good performance of the Johnson t confidence interval estimator is expected, as it adjusts for the skewness in the data (in particular for small sample sizes, where skewness is prevalent). However, this interval consistently has the largest interval length compared with the normal theory, bootstrap percentile and BCa intervals.
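Here is the code sketch promised at step A5 above. It illustrates steps A1 and A3 for the normal theory interval only (the bootstrap steps of A2 are omitted for brevity), assumes NumPy and SciPy, and reuses the ci_index function from the earlier sketch; the remaining names are ours.

```python
import numpy as np
from scipy import stats

def normal_theory_ci(x, alpha=0.05):
    # two-sided normal-theory interval for the mean (step A3)
    xbar, s, n = x.mean(), x.std(ddof=1), len(x)
    z = stats.norm.ppf(1 - alpha / 2)
    return xbar - z * s / np.sqrt(n), xbar + z * s / np.sqrt(n)

def coverage_and_length(mu, sigma, n, N=1000, alpha=0.05, rng=None):
    # steps A1 and A3: N samples of size n from N(mu, sigma^2), then the
    # empirical coverage eta-hat and the average interval length L
    rng = rng or np.random.default_rng(0)
    hits, lengths = 0, 0.0
    for _ in range(N):
        lo, hi = normal_theory_ci(rng.normal(mu, sigma, n), alpha)
        hits += (lo <= mu <= hi)
        lengths += hi - lo
    return hits / N, lengths / N

eta_hat, L = coverage_and_length(mu=2.0, sigma=1.0, n=10)
print(eta_hat, L, ci_index(L, eta_hat))   # ci_index from the sketch above
```

Returning to the calibration results in Table <ref>: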
The index values for the Johnson t interval are the largest, and thus it can be considered the most appropriate confidence interval estimator of the mean. As the sample size increases, the normal theory and the Johnson t interval estimators outperform the other intervals in terms of coverage probability. Also, the normal theory interval estimator has interval lengths fairly competitive with those of the bootstrap-based intervals and outperforms the Johnson t interval; this can be seen from the index of the normal theory interval having the largest values. In general, we note that calibration, as demonstrated in Table <ref>, does not necessarily bring the empirical coverage up to the desired level of 1−α. Thus, calibration, although expensive, is not always attractive. We find that the conclusions of the confidence interval index for the non-calibrated intervals agree mostly with those for the calibrated intervals. Again, if the coverage probability is close to the nominal value, calibration leads to overestimation of the coverage probability. However, the index penalises such intervals for the deviation from the nominal level and, thus, discriminates good intervals from bad ones.

§.§.§ Mean of a Skewed Distribution

In this section, we consider the performance of the confidence interval index for the estimation of the mean of a skewed distribution. Observations were generated from a lognormal distribution with (log-scale) mean 0. Since the skewness of the lognormal distribution depends only on the variance, we took variances of 2, 1 and 0.2, corresponding to skewness of 23.732, 6.185 and 1.516 respectively. The results for these three values are shown in Tables <ref>, <ref> and <ref> respectively. Firstly, for a heavily skewed distribution, it is evident that the normal theory interval has the smallest average confidence interval indexes but relatively larger standard deviations. The normal intervals are symmetric and, hence, face a challenge when used to provide a confidence interval for the mean of a heavily skewed distribution. On the other hand, the BCa interval records the best performance, as it has the largest average confidence interval indexes. This results from the fact that, for a heavily skewed distribution, large bias is expected, but the BCa interval corrects for bias and skewness and, hence, provides better intervals that enclose the actual parameter being estimated. Secondly, in the case of a moderately skewed distribution, the Johnson t interval estimator is by far the best estimator of the mean of the lognormal distribution, with larger index values. This is followed by the BCa interval estimator, especially for smaller sample sizes, where skewness is high. However, as the sample size increases, the normal interval estimator improves and surpasses the BCa, with larger confidence interval indexes. Thirdly, for the case of low skewness, i.e. σ² = 0.2, the performance follows a relatively similar pattern, but with some notable differences. The Johnson t remains the best estimator, but the performance of the normal theory interval improves significantly for large sample sizes, giving large values of the index. At n = 1000, there is not much difference between the two estimators. However, the performance of the BCa interval declines, as its index values are smaller compared with the other estimators. This may be attributed to the low skewness of the distribution, which makes it close to a symmetric distribution, similar to the normal case presented in Section <ref>.
Lastly, the index values increase with decreasing variance of the lognormal distribution (i.e. decreasing skewness), signifying better performance than in the case of larger variances (i.e. increasing skewness). This is to be expected, as the confidence interval estimators give intervals with better coverage and smaller interval lengths when the skewness is small. This is in conformity with earlier studies that compared estimators of the mean of a skewed distribution based on interval length and coverage probability <cit.>. Therefore, in general, the confidence interval index works well for selecting confidence interval estimators of the mean of a skewed distribution.

§.§ The Confidence Interval Index for the Binomial Proportion

In this section, we consider the estimation of the binomial proportion by sampling from a binomial distribution. This issue arises frequently in applied statistics, e.g. incidence rates (in medical science) and proportions of defective items (in manufacturing), among others. In addition, unlike the confidence interval for the mean, that of the proportion enables us to measure the performance of the index on non-symmetric intervals. Assume X is binomially distributed with parameters n and p, written X ∼ Bin(n,p). Here, the estimator p̂ is the maximum likelihood estimator, given by p̂ = X/n. This estimator is consistent and, since the expected value of X is equal to np, it is also unbiased. The most basic form of an interval estimate for the proportion p is the Wald interval, p̂ ± z_{α/2} √(p̂(1−p̂)/n) <cit.>. The properties of this interval estimator have been studied extensively in the literature. Its performance is known to be erratic with respect to coverage probability. In addition, recommendations concerning the values of n and p for which this interval is appropriate are conflicting <cit.>. Several attempts have been made to obtain better confidence interval estimators of the binomial proportion. For example, <cit.> proposed an amendment of the interval (<ref>) obtained by redefining p̂ as (X+2)/(n+4), i.e. by adding two successes and two failures. Some further modifications, and other intervals that are not based on the normality assumption, are presented in Table <ref>. In the present study, we generated samples of size n with proportion p from a binomial distribution. Each estimator in Table <ref> was used to obtain a confidence interval for p. We repeat the process R (R = 1000) times and obtain the average confidence interval length and the coverage probability. We then compute diagnostic checks on these intervals using the index I in (<ref>). Tables <ref>-<ref> show the results of the simulation for combinations of n and p.

We find that the confidence interval lengths improve with increasing sample size. In addition, most of the empirical coverage probabilities of the estimators become much closer to the nominal level of 0.95 as the sample size increases. In particular, the Pois estimator for estimating p = 0.1 improves drastically from η̂ = 0.556 to η̂ = 0.942 for sample sizes 10 and 100 respectively. Together with the corresponding confidence interval lengths, the values of the index for the Pois estimator increase from 0.6940 (compared with the best estimator's index of 0.9395) to 0.9715 (joint best with Arc.CC) for sample sizes 10 and 100 respectively. However, for other values of p (more generally p > 0.1), the Pois estimator overestimates the coverage probability and has larger confidence intervals relative to the other estimators.
Therefore, the Pois estimator has smaller index values compared with the other estimators and, hence, is not appropriate for the estimation of p. Furthermore, the Exact estimator overestimates the coverage probability in all cases. In addition, it has large confidence interval lengths, and both features show up in its index values. This is consistent with results reported in <cit.>. In most cases, estimators such as Wilson, AgreC, Ag.add4 and midP have relatively good coverage properties and interval lengths, and this is reflected in their I values, which usually approach 1. In general, for the estimation of a proportion, the index is able to distinguish the estimators that are appropriate from those that are not, based on their interval lengths and coverage probabilities.

§ APPLICATION

To illustrate the application of our index, we consider the paper by <cit.>. The authors compared several confidence interval estimators for the coefficient of variation (CV). The coefficient of variation is defined as the variability of a random variable relative to its mean; it is usually expressed as a percentage. The confidence interval estimators of the CV were compared based on their interval lengths and empirical coverage probabilities. The authors used separate plots for the coverage probabilities and the interval lengths across different sample sizes, CV values and distributions. We take a different approach in this paper by constructing plots showing simultaneously the coverage probabilities and the confidence interval index, I.

The various confidence intervals considered and their abbreviations are presented in Table <ref>. We compute the confidence interval indexes for the estimators in Table 8 using the values in Table 4 of <cit.>. In addition, we assess the conclusions reached in that paper against those suggested by the computed confidence interval indexes. The confidence interval indexes for each combination of n and CV are shown in Tables <ref>-<ref> in Appendix B. In addition, the plots of the coverage probabilities and the corresponding confidence interval index values are presented in Figure <ref>. We can now make inferences from the graphs and compare these with the conclusions reached in <cit.>.

Firstly, it can be seen that the estimator that most visibly performs badly is the S.K estimator. It has mostly low coverage probability, and this is reflected in its smaller values of the index. It must be noted that some of the corresponding interval lengths of the S.K estimator were 2 to 8 times shorter than the other interval lengths. However, a shorter interval length combined with low coverage probability is not practically desirable. As the sample size increases, there is a remarkable improvement in the performance of the S.K estimator, especially for CV = 0.5. Therefore, we can conclude that, in this case, the index discriminates the bad estimator from the good ones, even though shorter interval lengths were recorded. Secondly, <cit.> concludes that "By n = 100, almost all intervals are performing at a similar level (Figure 1). All C.P intervals (C.P, Med C.P, and BS C.P) over exceeded the expected coverage probability of 95% and reached 100% and are clear outliers". From Figure <ref>, it can easily be seen from the bottom panel (i.e.
for n = 100) that the index values for these estimators are smaller compared with those of the other estimators: this indicates that the C.P-based estimators are inappropriate for the estimation of the CV relative to the other estimators. Thirdly, we can plot the index against the sample size and the CV values as an alternative to the four cases: CP against sample size; CP against CV; interval length against n; and interval length against CV. These graphs, not shown here, lead to the same conclusions obtained in <cit.>. In general, the index values are consistent with the conclusions from the CP and the interval lengths. Therefore, the index provides a useful, but computationally inexpensive, method for measuring the relative performance of the estimators of confidence intervals for the CV.

§ CONCLUSION

In this paper, an index for measuring the performance of confidence interval estimators was proposed. The index is based on the traditional trade-off between confidence interval length and empirical coverage probability. Unlike the confidence interval length, which has range ℝ⁺, the index has a range of values within that of the coverage probability. We showed that index values close to 1 indicate a good confidence interval estimator, whereas values far removed from 1 indicate a bad confidence interval estimator. Thus, it can easily be superimposed on a plot of coverage probabilities to aid in the selection of estimators with good coverage probabilities and interval lengths. The index can be used alone or to complement the coverage probability in measuring the performance of confidence interval estimators. In all the simulations and in the practical application, we assessed the performance of estimators through the sizes of the values of the index. However, an issue of practical importance is the statistical significance of differences between indexes. In practice, we propose that a hypothesis test of equality can be performed on any observed differences between indexes. Since the sampling distribution of the index remains an open problem, a non-parametric test, or standard errors estimated by resampling methods, can be used for this purpose.

§ REFERENCES

Agresti, A., Caffo, B. (2000). Simple and Effective Confidence Intervals for Proportions and Differences of Proportions Result from Adding Two Successes and Two Failures. The American Statistician 54(4), 280–288.
Agresti, A., Coull, B.A. (1998). Approximate is Better than "Exact" for Interval Estimation of Binomial Proportions. The American Statistician 52(2), 119–126.
Banik, S., Kibria, B.M.G. (2010). Comparison of Some Parametric and Nonparametric Type One Sample Confidence Intervals for Estimating the Mean of a Positively Skewed Distribution. Communications in Statistics - Simulation and Computation 39, 361–389.
Beran, R. (1987). Prepivoting to Reduce Level Error of Confidence Sets. Biometrika 74(3), 457–468.
Brown, L.D., Cai, T.T., DasGupta, A. (2001). Interval Estimation for a Binomial Proportion. Statistical Science 16(2), 101–117.
Brown, L.D., Cai, T.T., DasGupta, A. (2002). Confidence Intervals for a Binomial Proportion and Asymptotic Expansions. The Annals of Statistics 30(1), 160–201.
Curto, J.D., Pinto, J.C. (2009). The Coefficient of Variation Asymptotic Distribution in the Case of Non-iid Random Variables. Journal of Applied Statistics 36(1), 21–32.
Efron, B., Tibshirani, R.J. (1993). An Introduction to the Bootstrap.
Chapman and Hall, London.
Gulhar, M., Golam Kibria, B.M., Albatineh, A.N., Ahmed, N.U. (2012). A Comparison of Some Confidence Intervals for Estimating the Population Coefficient of Variation: A Simulation Study. SORT 36(1), 45–68.
Huber, P.J. (1992). Robust Estimation of a Location Parameter. In: Breakthroughs in Statistics, pp. 492–518. Springer.
Johnson, N.J. (1978). Modified t Tests and Confidence Intervals for Asymmetrical Populations. Journal of the American Statistical Association 73(363), 536–544.
Kilian, L., Chang, P.L. (2000). How Accurate are Confidence Intervals for Impulse Responses in Large VAR Models? Economics Letters 69(3), 299–307.
Lee, S.M.S., Young, G.A. (2003). Prepivoting by Weighted Bootstrap Iteration. Biometrika 90, 393–410.
Leemis, L., Trivedi, K. (1996). A Comparison of Approximate Interval Estimators for the Bernoulli Parameter. The American Statistician 50(1), 1–20.
Loh, W.Y. (1987). Calibrating Confidence Coefficients. Journal of the American Statistical Association 82(397), 155–162.
Loh, W.Y. (1988). Discussion: Theoretical Comparison of Bootstrap Confidence Intervals. The Annals of Statistics 16(3), 972–976.
Loh, W.Y. (1991). Bootstrap Calibration for Confidence Interval Construction and Selection. Statistica Sinica 1(2), 477–491.
Martin, M. (1990). On the Double Bootstrap. Technical Report No. 347, Department of Statistics, Stanford University, California.
McKay, A.T. (1932). Distribution of the Coefficient of Variation and the Extended t Distribution. Journal of the Royal Statistical Society 95(4), 695–698.
Miller, G.E. (1991). Asymptotic Test Statistics for Coefficients of Variation. Communications in Statistics - Theory and Methods 20(10), 3351–3363.
Nankervis, J.C. (2005). Computational Algorithms for Double Bootstrap Confidence Intervals. Computational Statistics & Data Analysis 49, 461–475.
Panichkitkosolkul, W. (2009). Improved Confidence Intervals for a Coefficient of Variation of a Normal Distribution. Thailand Statistician 7(2), 193–199.
Pires, A.M., Amado, C. (2008). Interval Estimators for a Binomial Proportion: Comparison of Twenty Methods. REVSTAT 6(2), 165–197.
Sharma, K., Krishna, H. (1994). Asymptotic Sampling Distribution of Inverse Coefficient-of-Variation and its Applications. IEEE Transactions on Reliability 43(4), 630–633.
Vangel, M.G. (1996). Confidence Intervals for a Normal Coefficient of Variation. The American Statistician 50(1), 21–26.
Wilson, E.B. (1927). Probable Inference, the Law of Succession, and Statistical Inference. Journal of the American Statistical Association 22(158), 209–212.
Zaane, B.V., Vergouwe, Y., Donders, A.R.T., Moons, K.G.M. (2012). Comparison of Approaches to Estimate Confidence Intervals of Post-test Probabilities of Diagnostic Test Results in a Nested Case-Control Study. BMC Medical Research Methodology 12(166), 1–9.

§ APPENDIX A

§ APPENDIX B
Reconstruction and stability in Gel'fand's inverse interior spectral problem
Roberta Bosi, Yaroslav Kurylev, and Matti Lassas
December 10, 2019
=============================================================================

Assume that M is a compact Riemannian manifold of bounded geometry, given by restrictions on its diameter, Ricci curvature and injectivity radius. Assume we are given, with some error, the first eigenvalues of the Laplacian Δ_g on M, as well as the corresponding eigenfunctions restricted to an open set in M. We then construct a stable approximation to the manifold (M,g). Namely, we construct a metric space and a Riemannian manifold which differ, in a proper sense, just a little from M when the above data are given with a small error. We give an explicit loglog-type stability estimate on how the constructed manifold and the metric on it depend on the errors in the given data. Moreover, a similar stability estimate is derived for Gel'fand's inverse problem. The proof is based on methods from geometric convergence, a quantitative stability estimate for the unique continuation, and a new version of the geometric Boundary Control method.

§ INTRODUCTION

§.§ Inverse interior spectral data and classes of manifolds

Let (M,g,p) be a pointed compact Riemannian manifold, that is, (M,g) is a compact Riemannian manifold without boundary and p ∈ M is a point on M. Let Δ_g be the Laplace operator on (M,g), with 0 = λ_0 < λ_1 ≤ λ_2 ≤ … being its eigenvalues and φ_j, j = 0,1,2,…, being the complete sequence of L²(M)-orthonormal eigenfunctions satisfying −Δ_g φ_j = λ_j φ_j on M.

Let (M, g, p) be an n-dimensional compact pointed manifold with n ≥ 2, and let r_0 > 0. Then
(i) The pair, consisting of the ball (B(p, r_0), g|_B(p,r_0)) on the Riemannian manifold M and the sequence {(λ_j, φ_j|_B(p,r_0)); j = 0, 1, 2, …} of eigenvalues and eigenfunctions, is called the interior spectral data (ISD) of (M, g, p).
(ii) The pair, consisting of the ball (B(p, r_0), g|_B(p, r_0)) and a finite collection {(λ_j, φ_j|_B(p, r_0)), j = 0, 1, 2, …, J} of the J+1 first eigenvalues and eigenfunctions, is called the finite interior spectral data (FISD) of (M, g, p).

The interior Gel'fand inverse spectral problem is that of the reconstruction of (M, g) from its ISD. It was solved in <cit.>, <cit.>. In this paper we consider the problem of an approximate reconstruction of (M,g) when we know only its FISD, namely, the first eigenvalues λ_j < δ^{−1}, with some small δ ∈ (0,1), and the corresponding eigenfunctions φ_j|_B(p, r_0). Furthermore, we assume that we know all these objects with some error. However, due to the well-known ill-posedness of inverse problems, to achieve this
In this paper we consider the problemof an approximate reconstruction of (,g) when we know only its FISD, namely,the first eigenvalues, _j < δ^-1 with some small δ∈ (0,1) and the corresponding eigenfunctions of φ_j|_B(p, r_0).Furthermore, we assume that we know all these objects with some error. However, due to the well-known ill-posedness of inverse problems, to achieve thisgoal one needs to assume that the manifold to be approximately reconstructed should lie in a properlychosen class of manifolds.In this paper we concentrate on an appropriate Gromov's class ofpointed manifolds. Next we define a class of manifolds satisfying geometric bounds, in terms of the constantsR, D, i_0, and n, and the radius r_0. Those constants have to be consider as global parameters in all calculations.(Riemannian manifolds of bounded geometry). For any n∈_+ and R>0,D>0, i_0>0, _ n:=_ n(R, D, i_0 ) consists of n-dimensional pointedcompact Riemannian manifolds (,g,p) such thati) ∑_j=0^3 ∇^j Ric(,g)_L^∞(M,g)≤ R,ii)(,g) ≤ D, iii) (,g) ≥ i_0.Here Ric(, g)=Ric^M_jk stands for the Ricci curvature of M,(M, g) for the diameter of M, and (,g)for the injectivity radius of (M,g). At last,∇ stands for the covariant derivative on (M,g).The norm of ∇^j Ric(,g)is computed using the metric g, e.g. ∇Ric^M=(g^ii'g^jj'g^kk'(∇_i Ric^M_jk)(∇_i'Ric^M_j'k'))^1/2. We recall thata pointedcompact Riemannian manifold (,g,p) consists of a manifold M, its Riemannian metric g, and an arbitrary point p∈ M. This definition is used as we specify the point p nearwhich the values of the eigenfunctions are measured. In the future, without loss of generality, we assumer_0 < min(i_0/2, π/2 √(K),1). Here K is the bound for the sectional curvature on _ n.The bound K depends only on R, D, i_0, and n, see (<ref>). This makes it possible to use in B(p, r_0) the Riemannian normal coordinates which allows us to compare interior spectral data of different manifolds in _ n. To formalise the above, let B(r_0) ⊂^n be an Euclidian ball of radius r_0 and h besome Riemanniancoordinates in B(r_0) making it a ball of radius r_0 with respectto h. Letbe a collection of elements(Data Sequences) DS=( (B(r_0),h) , {(μ_j, ψ_j|_B(r_0))}_j=0^∞ )where 0=μ_0 < μ_1 ≤μ_2 ≤…,μ_j →∞, and ψ_j∈L^2(B(r_0), h). (Interior spectral topology.) Let δ>0. For i=1,2, consider the collections DS^i∈.We say that DS^1 and DS^2 are δ-close if the following is valid:There areP∈_+and disjoint intervalsI_p = (a_p,b_p) ⊂ ( -δ, δ^-1+δ),p=0, 1,…, P,such that i) b_p-a_p <δ. ii) For any μ_j^i,i=1,2 with|μ_j^i| < δ^-1 there is p such that μ_j^i ∈I_p.iii)For p=0, n_0^i=1. For any p ≥ 1, the total number n_p^i of elements in sets 𝒥^i_p={j∈_+; μ_j^i∈ I_p} coincide, i.e. n_p^1 = n_p^2), and satisfies n_p^1 = n_p^2≥ 1. iv)There is an orthogonal matrix O ∈ O(n), such that themetrics O_* h_1 and h_2are Lipschitz δ-close on B(r_0), i.e., for any x∈ B(r_0) andξ=(ξ^1, …, ξ^n) ∈^n, ξ≠ 0,we have (1+δ)^-1≤(O_* h_1)_jk(x) ξ^j ξ^k/(h_2)_jk(x) ξ^j ξ^k≤ 1+δ,v) For any p there is a unitary matrixA_p =[a^(p)_jk]_j,k∈𝒥_p∈ U(n_p),such that A_p (O_*Ψ_p^1)- Ψ_p^2_(L^2(B(r_0),h_2))^n_p≤δ, A_p^-1( (O^-1)_* Ψ_p^2)- Ψ_p^1_(L^2(B(r_0),h_1 ))^n_p≤δ.Here,Ψ_p^i is the vector-function {ψ_j}_j ∈𝒥^i_p.Note that above the number P indicateshow many groups of eigenvalues are clustered to satisfy conditions i-iv. Moreover, for two sequences DS^1 and DS^2, the above conditions i-iv may be valid with severaldifferent values of P and intervalsI_p,p=1,2,…, P. 
Condition v) can be interpreted as closeness of the Riesz projectors, corresponding to Δ_{g_i}, associated with the intervals I_p. We note that in the more restricted context of Gel'fand's inverse problem for a Schrödinger operator with simple spectrum in a domain in ℝ^n a similar topology was introduced in <cit.>.

§.§ The main results

To formulate our result on an approximate reconstruction, we use the Gromov-Hausdorff distance.

(GH-topology, see e.g. <cit.>, <cit.>). Let (X^i,d^i,p^i), i=1,2, be pointed compact metric spaces. Then the pointed Gromov-Hausdorff distance d_GH(X^1,X^2) is the infimum of all ε>0 such that there is a metric space (Z,d_Z) and isometric embeddings i_1:X^1→Z and i_2:X^2→Z which satisfy d_H(i_1(X^1), i_2(X^2)) < ε, d_Z(i_1(p^1), i_2(p^2)) < ε. Here d_H denotes the Hausdorff distance in Z, see <cit.>.

The main result of the paper is:

Let n≥2, and let R, D, i_0 and r_0 satisfying (<ref>) be given. Then there exist C>1 and c>0, depending only on n, R, D, i_0 and r_0, such that the following is true: Let (M^(1),g^(1),p^(1)), (M^(2),g^(2),p^(2)) ∈ M_n. Assume that the interior spectral data of the operators -Δ_{g^(i)} on M^(i) in the balls B^(i)=B_{M^(i)}(p^(i),r_0)⊂M^(i), that is, the collections ((B^(i),g^(i)), {(λ_j^(i), φ^(i)_j|_{B^(i)}); j=0,1,2,…}), are δ-close, in the sense of Definition <ref>, with 0<δ≤exp(-e). Then d_GH((M^(1),p^(1)),(M^(2),p^(2))) ≤ C (ln(ln(1/δ)))^{-c}.

The above stability estimate is of log-log type. It is not known if this type of result is optimal, but the counterexamples of Mandache <cit.> for an equivalent inverse problem show that the stability cannot be better than logarithmic.

The proof of Theorem <ref> is constructive and is based on the following result on the reconstruction of the manifold from the data. Below, when we state that a manifold (M^*,g^*) can be constructed from the data, we mean that there is a sequence of steps, where we solve a finite number of quadratic minimization problems in finite-dimensional spaces, choose elements from finite sets, or compute certain explicit functions. Indeed, we do the following steps. First, we solve quadratic minimization problems in finite-dimensional vector spaces (these are equivalent to solving linear equations) to find finite sequences (d^a_j(α,i))_j of real numbers, where (α,i) run over a finite index set, see Theorem <ref>. Second, we use these sequences to compute approximate volumes vol^a(M^*_(i)(β)) of subsets of M, where (i,β) runs over a finite index set, see Lemma <ref>. Third, we choose the set of admissible indexes β for which the approximate volumes are larger than a certain threshold value, see Definition <ref>. The admissible indexes are used in Section <ref> and Lemma <ref> to define a finite set of piecewise constant functions, R^*_M, that approximate the collection of the interior distance functions. Using the finite set R^*_M and a modified version of the construction given in <cit.>, we construct a finite metric space (M^*,d^*) that approximates the Riemannian manifold (M,d_g) in the Gromov-Hausdorff sense.
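It is instructive to tabulate how slowly the modulus (ln ln(1/δ))^{-c} in the main theorem decays. The sketch below is a plain numerical illustration; the values C1 = 1.0 and c2 = 0.1 are placeholders for the unknown uniform constants C and c.

```python
import math

# How slowly the log-log modulus decays: the theorem gives
# d_GH <= C * (ln ln(1/delta))^(-c); C1, c2 below are illustrative placeholders.
C1, c2 = 1.0, 0.1
for k in [3, 10, 100, 1000, 10 ** 6]:          # delta = 10^(-k)
    loglog = math.log(k * math.log(10.0))
    print(f"delta = 1e-{k}:  GH bound ~ {C1 * loglog ** (-c2):.3f}")
```

Even for δ = 10^-1000000 the bound has only decreased by roughly a quarter, which is why the admissible data error in the quantitative results below must be taken extremely small.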
Let n≥2, and let R, D, i_0 and r_0 satisfying (<ref>) be given. Then there exist a constant δ^* = δ^*(n,R,D,i_0,r_0) and positive constants c<1 and C>1, depending only on n, R, D, i_0 and r_0, such that, for all δ with 0<δ≤δ^*, the following is true: Assume that (M,g,p)∈M_n and we are given a collection ((B(r_0),g^a), {(μ_j, φ^a_j|_B(r_0)); j=0,1,2,…,J}) that is δ-close, in the sense of Definition <ref>, to the interior spectral data of the operator -Δ_g on M. Using the data (<ref>) we can construct a pointed metric space (M^*,d^*,p^*) such that d_GH(M,M^*) ≤ ε, where ε = C (ln(ln(1/δ)))^{-c}.

We note that in Proposition <ref> the value of J is not fixed; it just has to be so large that every j, for which the eigenvalue λ_j satisfies λ_j < δ^-1+δ, fulfils the inequality j≤J, see (<ref>). The relation between J, i.e., the number of eigenvalues, and the accuracy parameter δ is discussed in Remark <ref> below.

A note on the constants used. In the main part of the paper, we will make frequent use of constants c, C, C_1, C_2, etc. These constants depend only on the geometric bounds n, R, D, i_0, r_0, see Definition <ref>, but may change in their value from line to line. The constants that depend only on the geometric bounds n, R, D, i_0, r_0 will be called 'uniform constants'. When we define a constant for the first time, we specify whether it is uniform or not and write its further dependencies in parenthesis. For example, the constant C(s,m) (or C_{s,m}) depends also on s and m. Before the Appendix we have collected a table of the locations where the constants C_k and c_k are defined. Conventions for constants in the Appendix are explained in each subsection.

Also, M̄_n is the closure of M_n in the GH topology. The parameter K above and in Corollary <ref> below is the bound for the sectional curvature, which is uniform, see (<ref>), on M_n. As shown in Section 2.1, the class M̄_n is compact. Thus, when checking condition v) of Definition <ref>, it is sufficient to use the standard L²-norm on B(r_0).

Recall that for pointed C¹-diffeomorphic manifolds (M_1,p_1) and (M_2,p_2) the Lipschitz distance is d_L((M_1,p_1),(M_2,p_2)) = inf_{F:M_1→M_2} ( ln(Lip(F)) + ln(Lip(F^{-1})) + d_{M_2}(p_2,F(p_1)) + d_{M_1}(p_1,F^{-1}(p_2)) ), where the infimum is taken over bi-Lipschitz maps F:M_1→M_2 and Lip(F) is the Lipschitz constant of the map F, see <cit.>. Inequality (<ref>), combined with the sectional curvature bound (<ref>) and the solution of the geometric Whitney problem <cit.>, implies the following stable construction result for the manifold M in the Lipschitz topology.

Let (M,g,p)∈M_n, δ>0, and let the metric space M^* be as in Proposition <ref>. Using M^* one can construct a smooth pointed Riemannian manifold (N,g_N,p_N) such that |Sec(N)| ≤ CK, inj(N) ≥ min{(CK)^{-1/2}, (1 - C K^{1/3} σ_0^{2/3}) i_0}, and M and N are diffeomorphic. Moreover, d_L((M,p),(N,p_N)) ≤ C K^{1/3} σ_0^{2/3}, σ_0 = C (ln(ln(1/δ)))^{-c}. Here Sec stands for the sectional curvature and C, c are uniform constants.

Instead of eigenvalues and eigenfunctions one can deal with the heat kernels H_M(x,y,t) of ∂_t - Δ_g, cf. <cit.>, <cit.>, <cit.>. Definition <ref> can be reformulated e.g. as ‖H_{M^(1)} - H_{M^(2)}‖_{C(B(p,r_0)²×(δ,∞))} < δ. An analog of Theorem <ref> can be obtained. However, we do not dwell on this issue in the paper.

To complete this section we recall that stability in the corresponding direct spectral problem is well known, see e.g. <cit.>. In particular, let M be a compact manifold equipped with metrics g_ℓ, ℓ=1,2,…, and g_0. Let a,b∉σ(-Δ_{g_0}).
Denote by P_ℓ, P_0 the spectral orthoprojectors in L²(M,g_ℓ), L²(M,g_0) onto the interval [a,b]. Then it follows from Theorems IV.3.16 and VI.5.12 of <cit.> that if ‖g_ℓ - g_0‖_{L^∞(M)} → 0 as ℓ→∞, then ‖P_ℓ - P_0‖_{L²(M,g_0)→L²(M,g_0)} → 0. This implies that the ISD of (M,g_ℓ) converges to the ISD of (M,g_0).

§.§ Earlier results and outline of the paper

The Gel'fand inverse problem, formulated by I. M. Gel'fand in the 1950s <cit.>, is the problem of determining the coefficients of a second order elliptic differential operator in a domain Ω⊂ℝ^n from the boundary spectral data, that is, the eigenvalues and the boundary values of the eigenfunctions of the operator. In the geometric Gel'fand inverse problem, a Riemannian manifold with boundary and a metric tensor on it need to be constructed from similar data. For the Neumann boundary value problem for the operator -Δ_g on a manifold M, the boundary spectral data consist of the boundary ∂M, the eigenvalues λ_j and the boundary values of the eigenfunctions, φ_j|_∂M, j=1,2,… The uniqueness of the solution of the Gel'fand inverse problem has been considered in <cit.>.

To consider the formulation of the stability of the inverse problem, let us first consider the Gel'fand inverse problem on a bounded domain Ω⊂ℝ² with smooth boundary ∂Ω and a conformally Euclidean metric g_{jk}(x) = ρ(x)^{-2} δ_{jk}. Here, ρ(x)>0 is a smooth real valued function. Then the problem has the form -∑_{k=1}^{2} ρ(x) (∂/∂x^k)² φ_j(x) - λ_j φ_j(x) = 0 in Ω, ∂_ν φ_j|_∂Ω = 0. The problem of determining ρ(x) from the boundary spectral data is ill-posed in the sense of Hadamard: the map from the boundary data to the coefficient ρ(x) is not continuous, so that a small change in the data can lead to huge errors in the reconstructed function ρ(x). One way out of this fundamental difficulty is to assume a priori higher regularity of the coefficients, which is a widely used approach in inverse problems for isotropic equations like (<ref>). Results of this type are called conditional stability results (see e.g. <cit.>).

For inverse problems for a general metric this approach bears significant difficulties. The reason is that the usual C^k norm bounds of the coefficients are not invariant, and thus this condition does not respect the invariance of the problem with respect to diffeomorphisms. Moreover, if the structure of the manifold M is not known a priori, the traditional approach cannot be used. The way to overcome these difficulties is to impose a priori constraints in an invariant form and to consider a class of manifolds that satisfy invariant a priori bounds, for instance on the curvature, the second fundamental form, the injectivity radius, etc. Under such conditions, invariant stability results for various inverse problems have been proven in <cit.>. In particular, for the Gel'fand inverse problem for manifolds with non-trivial topology, an abstract, i.e., non-quantitative, stability result was proven in <cit.>. There, it was shown that the convergence of the boundary spectral data implies the convergence of the manifolds with respect to the Gromov-Hausdorff convergence. However, this result was based on compactness arguments and it did not provide any estimates. In this paper our aim is to improve this result and to give explicit estimates for an analogous inverse problem.

In this paper we consider a Gel'fand inverse problem for manifolds without boundary. Then, as explained above, instead of assuming that the boundary and the boundary values of the eigenfunctions are known, we assume that we are given a small open ball B⊂M and that the eigenfunctions φ_j are known on this set.
Similar formulations of the problem, with measurements on open sets, have been considered in <cit.>. We show that the Interior Spectral Data (ISD), that is, an open set B⊂M, the eigenvalues λ_j and the restrictions of the eigenfunctions φ_j|_B, determine the whole manifold (M,g) in a stable way. Also, we quantify this stability by giving explicit inequalities under a priori assumptions on the geometry of M. We emphasise that we assume that the eigenfunctions are known only on an open subset B of M that may be chosen to be arbitrarily small; nevertheless, e.g. the topology of M is determined in a stable way. We note that this paper is a slightly extended and polished version of our preprint in arXiv, published on Feb. 25, 2017. We note that in spectral geometry one has studied similar stability problems where the heat kernel is known on the whole manifold, <cit.>. These data are equivalent to knowing the eigenvalues and the eigenfunctions on the whole manifold.

Outline of the paper: Ch. 2 introduces the geometric set-up. Ch. 3 formulates the stability of the unique continuation for the solution of the wave equation, together with Corollary <ref> for its spatial projection v. Ch. 4 presents Theorems <ref> and <ref>, proving the construction of the approximate Fourier coefficients of χ_Ω v in the cases of exact and approximate FISD, respectively. Ch. 5 constructs the related approximate interior distance functions. Ch. 6 collects all the previous inequalities to prove Theorem <ref> and Proposition <ref>.

§ GEOMETRIC PRELIMINARIES

§.§ Properties of the manifolds of bounded geometry

Here we list some results on the class M_n(R,D,i_0). These results can be found in, or immediately follow from, <cit.>, with further improvements in <cit.>. Namely, the class M_n is precompact in the GH-topology. Its closure M̄_n consists of pointed Riemannian manifolds (M,g,p) with g∈C^5_*(M) which satisfy (<ref>). Here and later the subscript * indicates the Zygmund spaces. We define the norm of the space C^k(M) invariantly by ‖f‖_{C^k(M)} := ∑_{j=0}^{k} max_{x∈M} ‖∇^j f(x)‖_g, where the norm is computed using the metric g. Next, for k∈ℤ_+, β∈(0,1], we use the Zygmund spaces C^{k+β}_*(M) = [C^{k_1}(M), C^{k_2}(M)]_θ, k+β = θ k_1 + (1-θ) k_2 ∈ ℝ_+, θ∈(0,1). Here [·,·]_θ stands for the interpolation, see e.g. <cit.>. Note that, for β∈(0,1), the Hölder spaces fulfill C^{k,β}(M) = C^{k+β}_*(M).

To achieve the C^k_*-smoothness of g, one needs some special coordinates, e.g. harmonic coordinates. For any number Q>1, which we below choose to be Q=2, there is a constant r^(har), depending only on n, R, D, i_0, r_0 and Q, such that, for any (M,g,p)∈M_n and q∈M, there are Q-harmonic coordinates Y: B(q,r^(har)) → ℝ^n in B(q,r^(har)), with image U_q = Y(B(q,r^(har))), whose points we denote by y. For Q=2, in these coordinates the metric tensor g^(har)_{jk} = ((Y^{-1})^* g)_{jk} satisfies 2^{-1} I ≤ (g^(har)_{jk}(y))_{j,k=1}^{n} ≤ 2 I, for y∈U_q = Y(B(q,r^(har))), and ‖g^(har)_{jk}‖_{C^5_*(U_q)} ≤ C^(har), with some uniform constant C^(har), see <cit.> and <cit.>. We note that the existence of the harmonic radius r^(har) and of the constant C^(har), for which (<ref>) holds for all (M,g,p)∈M_n, q∈M, is based on compactness results, and therefore the dependency of r^(har) and C^(har) on n, R, D, i_0, r_0 is not explicit. Sometimes, with a slight abuse of notation, we identify y∈U_q with the corresponding point Y^{-1}(y)∈B(q,r^(har)). The inequality (<ref>) immediately implies that the sectional curvature Sec and the Riemann curvature tensor R_M satisfy |Sec(M)| ≤ K, ‖R_M‖ ≤ K, ‖∇R_M‖ ≤ K, where K is a uniform constant.
For the sake of simplicity, we will work with Hölder rather than Zygmund spaces. It follows from <cit.>, with the terminology described in <cit.>, that when (M_k,g_k,p_k) → (M,g,p) in the GH topology on M_n, then, for all β∈(0,1), there are C^{5,β}-smooth diffeomorphisms F_k: M_k → M such that F_*(g_k) → g in C^{4,β}(M) as k→∞. Thus, for any ε>0, β<1, there is σ = σ(ε,β) such that we have the following: For all M_1, M_2 ∈ M_n such that d_GH(M^1,M^2) < σ, there is a diffeomorphism F: M^1 → M^2 with ‖g^h_1 - F_*(g^h_2)‖_{C^{4,β}(M^i)} < ε, i=1,2, cf. <cit.>. Returning to (<ref>), for large k, M_k and M are diffeomorphic, so that it is possible to use results from <cit.>, see the end of sec. 1.2. This implies stability of the direct problem in the GH topology on M_n.

Note that we can solve the ordinary differential equations that define the geodesics in the harmonic coordinates. Then it follows from (<ref>) that there is a uniform constant C>1 such that, for any ball B(x,r)⊂M, where (M,g,p)∈M_n, we have C^{-1} r^n ≤ vol(B(x,r)) ≤ C r^n, 0 ≤ r ≤ D. Thus, the volume of balls having radius i_0/2 is bounded below by a uniform constant v_0. Furthermore, by <cit.>, the class of Riemannian manifolds (M,g) that satisfy (<ref>) and the conditions diam(M,g) ≤ D and vol(M,g) ≥ v_0 is pre-compact with respect to the Lipschitz distance d_L((M_1,p_1),(M_2,p_2)), see (<ref>), and the closure of this class consists of C^∞-smooth manifolds with C^{1,α}-metrics. This implies that there is a uniform constant C^(Lip) such that for all (M_1,g_1,p_1), (M_2,g_2,p_2) ∈ M_n we have d_L((M_1,p_1),(M_2,p_2)) ≤ C^(Lip). Moreover, by <cit.>, we have that for any ε>0 there is ζ(ε)>0 such that for (M_1,g_1,p_1), (M_2,g_2,p_2) ∈ M_n the following holds: if d_GH((M_1,p_1),(M_2,p_2)) < ζ(ε) then d_L((M_1,p_1),(M_2,p_2)) < ε.

We turn now to the spectral properties of (M,g,p)∈M_n. By <cit.>, the inequality (<ref>) implies that the j-th eigenvalue λ_j(M_i,g_i) of the Laplacian on the manifold (M_i,g_i) satisfies e^{-(n+2)C^(Lip)/2} λ_j(M_1,g_1) ≤ λ_j(M_2,g_2) ≤ e^{(n+2)C^(Lip)/2} λ_j(M_1,g_1) for all (M_1,g_1,p_1), (M_2,g_2,p_2) ∈ M_n. Since the eigenvalues of the manifold (M_1,g_1) satisfy the Weyl asymptotics λ_j(M_1) = c_{M_1} j^{2/n}(1+o(1)) as j→∞, there exists a uniform constant C_W>1 such that C_W^{-1} j^{2/n} ≤ λ_j(M) ≤ C_W j^{2/n}, j∈ℤ_+, for all (M,g,p)∈M_n(R,D,i_0). Note that (<ref>) is valid under the weaker assumption that Ric(M) is bounded from below, see <cit.>.

Assume that the collection of g^a|_{B_e(r_0)} and ((λ_j^a, φ_j^a|_{B_e(r_0)}))_{j=0}^{J} is δ-close to the FISD g|_{B_e(r_0)} and (λ_j, φ_j|_{B_e(r_0)})_{j=0}^{J} of the manifold (M,g,p)∈M_n. Then all intervals I_p=(a_p,b_p), p=0,1,…,P in (<ref>) satisfy b_p ≤ δ^-1+δ, and thus the index j of any eigenvalue λ_j that lies in some of these intervals satisfies, by (<ref>), the inequality C_W^{-1} j^{2/n} ≤ δ^-1+δ ≤ 2δ^-1. On the other hand, if j < (C_W^{-1} δ^-1)^{n/2}, then λ_j < δ^-1. Thus, without loss of generality, we can always assume that the value of J in Proposition <ref> satisfies (C_W^{-1} δ^-1)^{n/2} ≤ J ≤ (2 C_W δ^-1)^{n/2}.

Below we will assume that δ < (3C_W)^{-1}. Then for j≥1 we have λ_j ≥ C_W^{-1} and λ_j > 3δ. Next, assume that λ_j and λ_k with k>j≥1 belong to the same interval I_p=(a_p,b_p) with b_p - a_p < δ. Since λ_j ≥ C_W^{-1} > 3δ, we have a_p > 2δ, so that b_p < 2a_p. Then by (<ref>) we have C_W^{-1} k^{2/n} ≤ λ_k ≤ b_p ≤ 2a_p ≤ 2λ_j ≤ 2 C_W j^{2/n}, implying j < k ≤ (2^{1/2} C_W)^n j.

Next, instead of harmonic coordinates, we can use coordinates made of the eigenfunctions φ_j. It turns out, cf.
<cit.>, that in a neighbourhood of any x∈M there are eigenfunctions φ_{j(1;x)}, …, φ_{j(n;x)} which form C^6_*-smooth coordinates. Moreover, by compactness arguments, there are uniform constants r and C so that these coordinates are well defined in any ball B(x,r)⊂M, where (M,g,p)∈M_n, and the metric tensor g in these coordinates satisfies (<ref>). There is also a uniform number N∈ℤ_+ such that we can take j(ℓ;x) ≤ N, ℓ=1,…,n. Next, using ((λ_j, φ_j))_{j=0}^{∞}, we introduce the Sobolev spaces H^s(M), s∈ℝ: f(x) = ∑_{j=0}^{∞} f_j φ_j(x) ∈ H^s(M) iff ‖f‖²_{H^s} := ∑_{j=0}^{∞} ⟨λ_j⟩^s |f_j|² < ∞, where ⟨λ⟩ = (1+λ²)^{1/2}.

§.§ Distance coordinates

Recall that there are harmonic coordinates in the ball B(x,r^(har)) near any x∈M, M∈M_n, see (<ref>). In the Proposition below we use such coordinates as background coordinates near x. Below, we say that a subset Y⊂X is a τ-net in the metric space X if the union of the balls B_X(y,τ), y∈Y, contains the whole space X. Also, we say that Z⊂X is τ-separated if for all z_1,z_2∈Z, z_1≠z_2, we have d_X(z_1,z_2) ≥ τ. Observe that if Z⊂X is a maximal τ-separated subset of X (maximal in the sense that any other τ-separated subset of X that contains Z has to be equal to Z), then it is a τ-net in X.

There are uniform constants τ_0, ρ_0 < min{r^(har)/4, r_0/128} and uniform constants L∈ℤ_+ and C>0, depending only on n, R, D, i_0 and r_0, such that, for any (M,g,p)∈M_n(R,D,i_0) the following holds true: There is a τ_0-net in B(p,r_0/4) with at most L-1 points. Let {z_1,…,z_{L-1}} ⊂ B(p,r_0/4) be an arbitrary collection of points that is a τ_0-net in B(p,r_0/4). Then,

(i) For all x∈M, there are n points z_{j(i)}∈Z, j(i)=j(i;x), i=1,2,…,n, such that the map X: B(x,ρ_0)→ℝ^n, X: y=(y^1,…,y^n) ↦ (d_M(y,z_{j(1)}), d_M(y,z_{j(2)}), …, d_M(y,z_{j(n)})), defines coordinates, i.e. X: B(x,ρ_0) → X(B(x,ρ_0)) is a Lipschitz-smooth diffeomorphism and ‖DX‖_{L^∞(B(x,ρ_0))} + ‖DX^{-1}‖_{L^∞(X(B(x,ρ_0)))} ≤ C, where the norms are computed using the metric g on M and the Euclidean norm in ℝ^n. Moreover, z_{j(i)} can be chosen so that d(x,z_{j(i)}) > r_0/16, and the metric tensor (g_{ij})_{i,j=1}^{n} = X_* g in these coordinates satisfies C^{-1} I ≤ (g_{ij}(z))_{i,j=1}^{n} ≤ C I, z∈X(B(x,ρ_0)).

(ii) The map H^L: M → ℝ^{L-1}, defined by H^L(x) = (d_M(x,z_j))_{j=1}^{L-1}, satisfies 1/C ≤ |H^L(x) - H^L(y)| / d(x,y) ≤ C for all x,y∈M, x≠y. Here as the norm in (<ref>) we can take e.g. the Euclidean norm in ℝ^{L-1}.

Proof. Let us first consider one pointed manifold (M,g,p)∈M_n. Let us consider the extended exponential map F: TM → M×M, F(x,ξ) = (x, exp_x(ξ)). Inequalities (<ref>) and (<ref>) imply that in the set S = {(x,ξ)∈TM: ‖ξ‖ ≤ 2D} the map F is C²-smooth and its norm in C²(S) is bounded by a uniform constant. The proof of this is analogous to that of Lemma 2 in <cit.>.

Let x_0∈M and let γ_{x_0,ξ_0}([0,s_0]), ξ_0∈S_{x_0}M, be a shortest geodesic from x_0 to p, where s_0 = d(x_0,p). When s_0 ≥ r_0/2, choose s_1 = s_0 - r_0/5, and when s_0 < r_0/2, choose s_1 = s_0 + r_0/5. Then the point p_1 = γ_{x_0,ξ_0}(s_1) satisfies p_1 ∈ B(p,r_0/5) and d(x_0,p_1) ≥ r_0/5. As r_0 < i_0, we see that the geodesic γ_{x_0,ξ_0}([0, s_1 + r_0/5]) is a length minimising geodesic between its endpoints. In particular, this implies that γ_{x_0,ξ_0}([0,s_1]) continues behind p_1 as a shortest curve between its points. As in <cit.> (see also <cit.>, where related results are proven with lower regularity assumptions), we see that (<ref>) and (<ref>) imply that there is a uniform constant r_* ∈ (0, r_0/100) such that the following is true. Let 𝔹 = B_{TM}((x_0, s_1ξ_0), r_*) be the ball of radius r_* and center (x_0, s_1ξ_0) defined in the tangent bundle (TM,g) using the Sasaki metric.
Then for vectors (x,sξ)∈𝔹, whereξ∈ S_xM, s>0, the geodesics γ_x,ξ([0,s])are length minimizing curves between their end points. Moreover,the exponential map in 𝔹, that is, F:𝔹→ F(𝔹) is a diffeomorphism and satisfies dF≤ C in 𝔹, and(dF)^-1≤ C inF(𝔹),and F(𝔹) ⊂ M× B(p_1, r_0/100).Here, M× B(p_1, r_0/100)⊂ M× B(p, r_0/4).In particular, d(x,z)=|F^-1(x,z)| for (x,z)∈ F(𝔹). Let ξ_j∈ S_x_0M and t_j>0, j=1,2,…,n be such that ξ_j-ξ_0<r_*/s_1,ξ_j-ξ_k>r_*/(8s_1) for j≠k, and|t_j-s_1|<r_*/8.Then s_1ξ_j∈𝔹.Let z_j=exp_x_0(t_jξ_j)∈ B(p, r_0/4).We see that∇ d_M( ,z_j)|_x_0=-ξ_jand(dF|_x_0)^-1≤ C. Inverse function theorem, see e.g. <cit.>, and the facts that (dF|_x_0)^-1≤ C and that Fhas a uniformly bounded C^2-norm in S,imply for the map H^M_z_1,…,z_n(x)=(d_M(x,z_j))_j=1^n that there are uniform constants ρ_*>0 and c_*>0 such that we have|H^M_z_1,…,z_n(x)-H^M_z_1,…,z_n(x')|≥ c_*d_M(x,x'),for allx,x'∈ B_M(x_0,ρ_*).Let us now choose ξ^0_j∈ S_x_0M, j=1,…,n such that s_1ξ_j^0∈𝔹 satisfyξ^0_j-ξ_0<c_*/(2s_1),ξ^0_j-ξ^0_k>c_*/(4s_1)for j≠k. Letz_j^0=exp_x_0(s_1ξ^0_j). AsdF^-1≤ C in F(𝔹), there is a uniform constantτ_*∈ (0,r_0/100)such that if z̃_j∈ B(p, r_0/4), j=1,…,n satisfy d(z̃_j,z_j^0)<τ_* then there are ξ̃_j∈ S_x_0M and t̃_j>0 such that z̃_j=exp_x_0(t̃_jξ̃_j) and |ξ̃_j-ξ_j|<r_*/(8s_1)and |t̃_j-s_1|<r_*/8.Then ξ̃_j and t̃_jsatisfy (<ref>).Thus,(<ref>) implies that the map H^M_z̃_1,…,z̃_n(x)satisfies (<ref>).This implies that if {ẑ_i∈ B_M(p, r_0/4), i=1,2,…,i_M} is any τ_*-net in B_M(p, r_0/4) then for all j=1,2,…,n there are i_j∈{1,2,…,i_M} such that d_M(ẑ_i_j,z^0_j)≤τ_*. Then the above implies that for H^M_ẑ_i_1,…,ẑ_i_n(x)satisfies (<ref>).Observe that above (x,sξ)∈𝔹, so thatd_M(x,x_0)<r_*<r_0/100. Moreover, d(x_0, p_1) ≥ r_0/5,z_j^0∈ B(p_1, r_0/100), d_M(ẑ_i_j,z^0_j)≤τ_* <r_0/100 yield d_M(x,ẑ_i_j)≥r_0/5-3r_0/100>r_0/8. Note that above c_*, τ_* and ρ_* are uniformconstants and the estimate (<ref>) is valid for some pointsẑ_i_j in any τ_*-net ẑ_i in B(p, r_0/4), that satisfy d_M(x,ẑ_i_j) >r_0/8, and any(M,g,p)∈_ n.This proves (<ref>) and (<ref>) in claim (i).Next we consider the claim (ii). Let us show that there areh_1>0 and τ_1>0 such that for any(M,g,p)∈_ nand any maximal τ_1-separeted set {z_1,…,z_L-1}⊂ B(g,r_0/4) we havesup_x,y∈ M, x≠y(sup_j_1,…,j_n|(d_M(x,z_j_i))_j=1^n-(d_M(y,z_j_i))_j=1^n|_^n/d_M(x,y))≥ h_1, where the supremum is taken over all 1≤ j_1<j_2<…<j_n≤ L-1.Assume the opposite. Then for all k∈_+ there are h_k>0, (M_k,g_k,p_k)∈_ n and 1/k-nets {z_j^k: j=1,2,…,L_k}⊂ B(p_k,r_0/4) and points x_k,y_k∈ M_k, x_k ≠ y_k so that h_k→ 0 and sup_x,y∈ M_k, x≠y(sup_j_1,…,j_n|(d_M_k(x_k,z^k_j_i))_i=1^n-(d_M_k(y_k,z^k_j_i))_i=1^n|_^n/d_M_k(x_k,y_k))<h_k.Using compactness arguments for _ n and choosinga suitable subsequence of the manifolds (M_k,g_k,p_k) we can assume that(M_k,g_k,p_k) → (M,g,p) in theLipschitz-topology. Thenthere are diffemorphismsF_k:M_k → M such that F_k(p_k)→ p and Lip(F_k)→ 1 and Lip(F_k^-1)→ 1. Moreover,we can assume that F_k(x_k)→ x and F_k(y_k) → yin M and, after using the Cantor diagonalization procedure, we can assume that there are limits lim_k→∞ F_k(z_j^k)=z_j in M, for allj=1,2,….Next,using (<ref>), we see that d_M_k(x_k,y_k) → d_M(x, y), d_M_k(x_k, z_j^k) → d_M(x, z_j) and d_M_k(y_k, z_j^k) → d_M(y, z_j). Also {z_j}_j=1^∞ is dense in B_M(p, r_0/4). Therefore, d_M(x, z)=d_m(y, z) for all z ∈ B_M(p, r_0/4). Then <cit.> (see also <cit.>),implies that x=y. 
Let k be so large that 1/k < τ_*/2, d_M(F_k(p_k),p) < τ_*/2, Lip(F_k) ≤ 2, Lip(F_k^{-1}) ≤ 2, d_M(F_k(x_k),x) < ρ_*/4, d_M(F_k(y_k),y) < ρ_*/4, and h_k < c_*. As x=y, these imply d_M(F_k(x_k), F_k(y_k)) < ρ_*/2 and hence d_{M_k}(x_k,y_k) < ρ_*. As 1/k < τ_*/2, the points z^k_j, j=1,…,L-1, form a τ_*-net in B_{M_k}(p_k, r_0/4). Then the inequality (<ref>) for x_k and y_k, with h_k < c_*, is in contradiction with the fact that there is a subset of n of the points in the τ_*-net z^k_j, j=1,…,L_k, for which (<ref>) holds. This proves (<ref>) with some uniform constants τ_1 and h_1.

We observe that a maximal τ_1-separated subset in the ball B(p,r_0/4) has at most C_* = vol(B(p,r_0/4)) / vol_{n,R}(B(x,τ_1)) points, where vol_{n,R}(B(x,τ_1)) is the volume of the ball of radius τ_1 on the n-dimensional sphere having constant curvature R. Hence we see that the number of points of a maximal τ_1-separated subset in B(p,r_0/4) is bounded by a uniform constant C_*. Thus we can choose L to be the integer part of C_* and τ_0 = min(τ_*, τ_1), which makes L and τ_0 uniform constants. As the number L-1 of points in the τ_1-nets we consider is bounded by a uniform constant, we see that (<ref>) is valid with C = h_1^{-1} + L. This proves the claims (i) and (ii).

The above considerations bring about the following result.

There exist a uniform constant C_N>0 and a uniform integer N_F∈ℤ_+ (that is, C_N and the integer N_F depend only on n, R, D, i_0, r_0) such that

(i) Let σ∈(0,τ_0]. Then any maximal σ-separated set x_1,…,x_{N(σ)} in M is such that the number of its elements fulfills the bound N(σ) ≤ Ñ(σ) = C_N σ^{-n}. Moreover, the balls B(x_k, 4σ) satisfy the finite intersection property with at most N_F intersections, that is, any point x∈M belongs to at most N_F balls B(x_k, 4σ).

(ii) Let σ∈(0,τ_0]. Then any maximal σ-separated set z_1,…,z_{N_1(σ)} in B(p,r_0/4) is such that the number of its elements fulfills the bound N_1(σ) ≤ Ñ(σ), and the balls B(z_k, 4σ) satisfy the finite intersection property with at most N_F intersections.

Proof. It remains to prove the finite intersection property. It follows from (<ref>) if we take into account that B(x_k, 4σ) ∩ B(x_j, 4σ) = ∅ if d(x_k,x_j) ≥ 9σ, and that B(x_k, σ/2) ∩ B(x_j, σ/2) = ∅.

§ WAVE EQUATION: STABILITY FOR THE UNIQUE CONTINUATION

Consider the initial-value problem for the wave equation ∂_t² w - Δ_g w = 0 in M×ℝ, w|_{t=0} = v, w_t|_{t=0} = 0, on (M,g,p)∈M_n(R,D,i_0), and denote its solution by w = W(v). Our main interest lies in the case when v ∈ H^s_Λ(M), Λ>0, H^s_Λ(M) = {v∈H^s(M): ‖v‖_{H^s(M)} ≤ Λ}, and we assume in the following that 3/2 < s < 2 and denote H^0_Λ(M) := {v∈L²(M): ‖v‖_{L²(M)} ≤ Λ}. Using the Fourier decomposition we show that, if v∈H^s(M), then ‖w‖_{H^s(M×[-T,T])} ≤ 6√T ‖v‖_{H^s(M)} ≤ C ‖v‖_{H^s(M)}, T < 2D, where C = 6√(1+D²).

Associated to the wave operator are the double cones of influence. To define these, let V⊂M be open, T∈ℝ_+. Denote by Γ(V,T) := V×(-T,T). Then the double cone of influence is given by Σ(V,T) := {(x,t); d(x,V)+|t| < T}. By Tataru's uniqueness theorem <cit.>, <cit.>, if u is a solution to (<ref>) in M×(-T,T) which satisfies u=0 in Γ(V,T), then u=0 in Σ(V,T).
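On the spectral side, W(v) has the explicit eigenfunction expansion w(x,t) = ∑_j v_j cos(√λ_j t) φ_j(x), which is also the form used for the wave-type functions in Section 4. The following minimal sketch implements this expansion, together with the Fourier definition of the H^s norm, on the toy manifold M = S¹ (our own illustrative choice; the sine modes are omitted because the chosen initial value is even about x = π, so their coefficients vanish):

```python
import numpy as np

# Spectral form of the solution operator W: in the eigenbasis,
# w(x, t) = sum_j v_j cos(sqrt(lambda_j) t) phi_j(x) solves the wave
# initial-value problem above.  Toy manifold: the circle with lambda_k = k^2.
x = np.linspace(0.0, 2 * np.pi, 256, endpoint=False)
K = 32
lam = np.arange(K, dtype=float) ** 2
phi = np.vstack([np.ones_like(x) / np.sqrt(2 * np.pi)] +
                [np.cos(k * x) / np.sqrt(np.pi) for k in range(1, K)])

v = np.exp(-8.0 * (x - np.pi) ** 2)            # initial value w|_{t=0}
vj = phi @ v * (x[1] - x[0])                   # Fourier coefficients <v, phi_j>

def wave(t):
    """w(., t) = W(v) evaluated at time t."""
    return (vj * np.cos(np.sqrt(lam) * t)) @ phi

def Hs_norm(coeffs, s):
    """||f||_{H^s}^2 = sum_j <lambda_j>^s |f_j|^2, with <lambda> = (1+lambda^2)^(1/2)."""
    return np.sqrt(np.sum((1.0 + lam ** 2) ** (s / 2) * np.abs(coeffs) ** 2))

print(Hs_norm(vj, 1.75), np.abs(wave(0.0) - v).max())
```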
However, for our purposes we need an explicit estimate which follows from Theorem 3.3 in <cit.>.To formulate the results we introduce, for0< ≤r_0/32, r_0/8≤ T <2D, with r_0 fulfilling (<ref>) and z∈ M, thedomainsΓ=Γ(z, T)=B(z,r_0/16) × (-T+r_0/16, T-r_0/16), = (z,γ,T)={(x,t):(T-d(x,z))^2-t^2 ≥γ^2,|t| < T-r_0/16},Ω(T)= M × (-T+r_0/16, T-r_0/16).Also, let for b∈, Σ(z,bγ,T) = {(x,t)∈ M×: |t|≤ T-r_0/16 , |t| ≤T-bγ - d_g(x,z)}be the “domain of influence” corresponding to the cylinder Γ(z, T).Observe that Σ(z,γ,T) ⊂(z,γ,T)⊂Σ(z,0,T). In the following we formulate the stability results for the unique continuation in <cit.>. We notethat similar results have been obtained by Luc Robbiano in <cit.> with θ=1, but with alossin the domain of dependence and later byC. Laurent and M. Leautardin <cit.> with θ=1, but without an explicit calculationof the constants in thedomain of dependence. Let (M,g,p) ∈_ n(R, D, i_0). Let P=P(x,D)=^2_t-Δ_gbe the wave operator associated withM. Assume that w(x,t)=0 for all (x,t)∈Γ. Then, for any 0 <θ<1, there is c_206(,θ)≥ 1,depending only on n, R, D, i_0,r_0,θ and γsuch that the following stability estimate holds true:w_L^2((z, , T))≤c_206(,θ)w_H^1(Ω(T))/(ln(1 + w_H^1(Ω(T))/Pw_L^2(Ω(T))))^θ ,where c_206(,θ) is such that c_206(,θ) = c_205 ( θ)exp( ^-c_200), c_200=58(n+1)+1,andc_205( θ)≥ 1 depends on θ, n, R, D, i_0, r_0. Moreover, for any 0 ≤ m ≤ 1,w_H^1-m((z, ,T))≤c_206(,θ)^mw_H^1((Ω(T))/(ln(1 + w_H^1((Ω(T))/Pw_L^2((Ω(T))))^θ m .Proof. Theorem <ref> follows from Theorem 3.3in <cit.> with ℓ=r_0/16 and (z,γ,T) = S(z, r_0/16, T, ). Using that w=0 in Γ, the domain Λ in the final equation of Theorem 3.3 can be changed into (z, ,T). Moreover, for θ <1, the function f_θ(a, b), a, b>0, f_θ(a,b)= a (ln(1+a/b))^-θ,increases when either a or b increases. Thus, we can change w_H^1(Ω_1) and P w_L^2(Ω_1) in Theorem 3.3 to w_H^1(Ω(T)) and P w_L^2(Ω(T)). Note that, although the results in <cit.> are formulated for M ⊂^n,they can be easily reformulated for an arbitrary compact Riemannian manifold which possess C^5-smooth covering by coordinate systems with C^4-smooth metric tensors. To consider parameters(<ref>)(see the Appendix for details), we will fix the value of θto beθ = 1/2, for simplicity. In the general case, we write c_205 ( θ) as θ-dependent.We recall that the constants in <cit.> (see (3.1)) explicitly depend on parameter >1 such that ^-1|ξ|^2 ≤ g^jk(x)ξ_jξ_k≤|ξ|^2, g^jk(x)_C^4(M)≤. Using harmonic coordinates in balls of radius r^(har), this condition is fulfilleddue to (<ref>), which also implies d_g(x,z) ∈ C^3. Our main interest will be an estimate for v(·)=w(0, ·) in (<ref>) in the domain B(z, T-2).Assume (<ref>) and letθ∈ [1/2,1). Also, letΛ_1>0 and_2 ∈ (0,Λ_1] and v ∈^̋s_Λ_1(M). Denote by w=W(v)the solution to initial-value problem (<ref>) and assume that, w_L^2(B(z, r_0/16+) × (-T +r_0/16,T-r_0/16))≤_2.Then, calling β=θ^2/2 and defining _1 :=ℰ_1(_2; θ, γ,Λ_1), we get v_L^2(B(z,T-2)) ≤ _1where, for c_202=c_202(θ, ),and (θ) depending only on θand n,R,D,i_0,r_0ℰ_1(_2; θ, γ,Λ_1)= c_202 Λ_1/^(2-θ/2)(ln[1+ Λ_1^(s-1)/s_2^-(s-1)/s])^β,c_202 = exp(^-(c_200 θ/2)), = (θ) ≥ 1.Proof.Let the cut-off function η(x)∈ C^2_0(B(z, r_0/16+/2)) be equal to one in B(z,r_0/16) and η_C^i(M)≤ Cγ^-i,i=0, 1,2. 
Then w_η(x,t)=(1-η(x)) w(x,t) vanishes in Γ and we have (_t^2- Δ) w_η(x,t)=F, whereF(x,t) = (Δ_g η(x) ) w(x,t)+ 2g(∇η(x),∇_x w(x,t)) = (Δ_g η(x) ) (η̃(x) w(x,t))+ 2g(∇η(x),∇_x ( η̃(x) w(x,t))) := F_1+F_2Here η̃(x)∈ C^2_0(B(z, r_0/16+)) is equal to one in B(z, r_0/16+ γ/2) and η̃_C^i(M)≤ Cγ^-i,i=0, 1,2. Clearly, by hypothesis F_1_L^2(M × (-T +r_0/16,T-r_0/16))≤C ^-2_2. To estimate F_2, observe that,η̃w_H^s(M × (-T+r_0/16, T-r_0/16))≤ C ^-sΛ_1, where we have also used (<ref>). Sinceη̃w_L^2(M × (-T+r_0/16, T-r_0/16))≤_2,by interpolation arguments, we getη̃w_H^1(M × (-T+r_0/16, T-r_0/16))≤ C ^-1Λ_1^1/s_2^1-1/s Since supp(∇η) ∩supp(∇η̃) =∅, this impliesF_2_L^2(M × (-T +r_0/16,T-r_0/16))≤C ^-1Λ_1^1/s_2^1-1/s, F_L^2(M × (-T+r_0/16, T-r_0/16))≤ C ^-2Λ_1^1/s_2^1-1/s,where we used _2 ≤Λ_1. As s>1, we have w_η_H^1(M × (-T+r_0/16, T-r_0/16))≤ C ^-1Λ_1. Using growth properties of the function f_θ of form (<ref>), it follows from Theorem <ref> thatw_η_H^1-θ/2()≤ Cc_206(,θ)^θ/2^-1Λ_1/(ln[1+ Λ_1^(s-1)/s_2^-(s-1)/s])^β.Now observe that by the trace-theorem, for any >1/2 there exists =() such that, for r ≥ r_0/16,z ∈ M: w( ,0)_L^2(B(z, r))≤ γ^-w_H^( B(z, r)× (-,)), w( ,0)_L^2(B(z, T-2 ))≤ γ^-w_H^ ((z,γ,T)).It follows from(<ref>) with =1-θ/2 and (<ref>) that, w_η(,0)_L^2(B(z,T-2γ))≤ Cc_206(,θ)^θ/2Λ_1/^2-θ/2(ln[1+ Λ_1^(s-1)/s_2^-(s-1)/s])^β. Next define =(1-β) s +β >1/2. Then byinterpolation, η w_H^( B(z, r)× (-,))≤ c_201 η w^/s_H^s( B(z, r)× (-,)) η w^(s-)/s_L^2(B(z, r)× (-,)).Using the fact that supp(η) ⊂ B(z, r_0/16 +), we can apply (<ref>) with r=r_0/16 +, the previous inequality and (<ref>), to obtain η(·) w(·, 0)_L^2(B(z,T-2γ))≤^- c_201(Λ_1)^/sϵ_2^β(s-1)/s ≤ ^β- c_201^/sΛ_1(ln[1+ Λ_1^(s-1)/s_2^-(s-1)/s])^-β.Here at the last step weuse the fact thatX ≥ln(1+X) for X >0, with X = γΛ_1^(s-1)/s_2^-(s-1)/s. Recallthat v(x)=w_η(x,0)+η(x) w(x,0). Comparing (<ref>) and (<ref>), we obtain equation (<ref>). Thecoefficient c_202 defined in (<ref>)fulfills the inequalityc_202≥ Cc_206(,θ)^θ/2^θ/2-2 +c_201 ^(1-β)+β/s^(β-1)s,by using (<ref>) and a proper multiplicative coefficientindependent on γ. § COMPUTATION OF THE PROJECTION§.§ Domains of influenceLet (M,g,p)∈_ n(R, D, i_0). By Proposition <ref>, we can choose L-1 points z_j, j=1,2,…,L-1 that form a τ_0-net in B_M(p,r_0/4). Here, L is bounded by a uniform constant. InLemma <ref> we showed that for any σ there areN_1(σ) points, that we enumerate asz_L, …, z_ L-1+N_1(σ), which form a maximal σ-separated net in B(p, r_0/4)and the ballsB(z_k, 4 σ), k=L,…, L-1+N_1(σ), satisfy thefinite intersection property with at mostN_F intersections. In this section we consider arbitrary σ, which value will be specified later, and pointsz_ℓ,ℓ=L, …, L-1+ N_1(σ) that satisfy the conditions of Lemma <ref>. Also, below is 3/2<s<2.Ournext goal is to approximately construct the values of the distance functions from a variable point x ∈ M to all points z_ℓ∈ B(p,r_0/4), ℓ=1,2, …, L-1+N_1(σ), defined in Lemma <ref>.The main step is to approximately compute the Fourier coefficients of thefunctions of form χ_Ω (x)v(x), where χ_Ω(x) are the characteristic functions of some special subdomains Ω⊂ M and v(x) hasa finite Fourier expansion. These subdomains Ω are defined using distances to L points {z_1, …, z_L-1, z_i }, where i ∈{ L, …, L-1+N_1(σ)} is arbitrary. 
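As a concrete (and entirely toy) picture of these objects, the sketch below builds the indicator of the union of balls M(α,b) = ∪_ℓ B(z_ℓ, α_ℓ+b) on a point cloud approximating the circle with its arc-length distance; in the paper the distances d(x,z_ℓ) are of course not given but are exactly what the slicing procedure recovers:

```python
import numpy as np

# Toy picture of the sets defined below: given reference points z_ell with
# radii alpha_ell, M(alpha, b) is the union of metric balls B(z_ell, alpha_ell + b).
# Here M is replaced by a point cloud on the unit circle (our own setup).
theta = np.linspace(0.0, 2 * np.pi, 400, endpoint=False)

def arc_dist(a, b):
    d = np.abs(a - b) % (2 * np.pi)
    return np.minimum(d, 2 * np.pi - d)

z = np.array([0.3, 1.2, 2.5])        # reference points z_ell (as angles)
alpha = np.array([0.7, 0.4, 0.9])    # radii alpha_ell

def domain_mask(b):
    """Indicator of M(alpha, b) on the point cloud."""
    D = arc_dist(theta[:, None], z[None, :])       # distances d(x, z_ell)
    return (D <= alpha[None, :] + b).any(axis=1)

print("vol(M(alpha)) ~", domain_mask(0.0).mean() * 2 * np.pi)
```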
Fori ∈{ L,L+1, …, L-1+N_1(σ)}, let K_i={1,2,…,L-1}∪{i} and define 𝒜^(i) to be the set of those =(_ℓ)_ℓ=1^ L-1+N_1(σ)∈^ L-1+N_1(σ), such thatr_0/8≤_ℓ≤ 2D, if ℓ∈ K_i, _ℓ=0, if ℓ∉K_i.Below, we will assume that γ≤σ.We denoteΓ̃(z, T)=B(z,r_0/16+γ) × (-T+r_0/16, T-r_0/16). Next we fix for a while the index i ∈{ L, …, L-1+N_1(σ)}. To construct subdomains Ω, we start withobservation setsΓ̃(), ∈𝒜^(i), Γ̃()= ⋃_ℓ∈ K_iΓ̃(z_ℓ,_ℓ).At last, for b ∈, we defineM(, b )=⋃_ℓ∈ K_iB_M(z_ℓ, _ℓ+b).Then the corresponding domains of stable unique continuation are()= ⋃_ℓ∈ K_i(z_ℓ,, _ℓ),(,bγ)= ⋃_ℓ∈ K_i(z_ℓ,, _ℓ+bγ),and the corresponding double cones of influences are given byΣ()= ⋃_ℓ∈ K_iΣ(z_ℓ,, _ℓ),Σ(,bγ)= ⋃_ℓ∈ K_iΣ(z_ℓ,, _ℓ+bγ), We have the following volume estimate.a) Let ∈𝒜^(i),i= L,…,L-1+N_1(σ) andA=A(, ) = {x∈ M: d(x, M(, 3))≤ 5}.Then, there is a uniformconstant >0, depending only on n, R, D, i_0 and r_0, such that(A)≤ L . b) Consequently, by defining b(s), for 3/2 <s <2,asb(s) = 1/2,n=2, 3 and b(s) = s/n, n ≥ 4, we see that there is a uniform constant (s), depending only ons, n, R, D, i_0 and r_0, such that χ_ B(z_ℓ, _ℓ+ 8) ∖ B(z_ℓ, _ℓ- 2 ) v_L^2(M)≤(s)^b(s)v_H^s(M).Proof.a) Let x ∈ A. Then, for someℓ∈ K_i,x ∈ B(z_ℓ, _ℓ+ 8) ∖ B(z_ℓ, _ℓ-2 ).Since d exp_z_ℓ|_ v is uniformly bounded on _ n(R, D, i_0) for v∈ T_z_ℓM, | v| ≤ 2D, ( B(z_ℓ, _ℓ+ 8) ∖ B(z_ℓ, _ℓ-2 ) ) ≤ C, for all ℓ∈ K_i. b)Similar to part a), we have (B(z_ℓ, _ℓ+ 8) ∖ B(z_ℓ, _ℓ- 2) )≤ c .Together with the Hölder inequality andthe Sobolev embedding H^s(M) → L^q(M), 1/q =1/2 -s/n,(or C^0(M) for n= 2, 3), this implies (<ref>). Note that (s) is a uniform constant as the embedding can be done in harmonic coordinates defined in balls with uniform radius. §.§.§ Cut-off estimates and finite dimensional projectionsLet us applyLemma <ref>, withinstead of σ, to obtain points x_ℓ∈ M, ℓ=1,2, …, N() such that the balls B(x_ℓ, 2 ), ℓ=1,2, …, N() are a covering of M.Let ψ_ℓ:M→_+,ψ_ℓ∈ C^6_*(M) bein harmonic coordinates a partition of unity for the covering B(x_ℓ, 2 ) that satisfy ψ_ℓ_C^k,β(M) ≤c_k,β^-(k+β),k=0, 1, 2, 0≤β <1; (ψ_ℓ) ⊂ B(x_ℓ, 2),∑_ℓ=1^N()ψ_ℓ(x)=1. Below, we use Λ_s≥ 1. For 3/2<s<2 there is(s)≥1, in (<ref>) such that, for any u∈^̋s_Λ_s(M),i ∈{L, …,L-1+N_1(σ)} and ∈𝒜^(i), the following holds true: Thereu_∈^̋s_1/4(s;) Λ_s(M) ∩^̋0_Λ_s(M), u_(x)=0,if x∈ M(, ), u_(x)=u(x),if x∈ M ∖ M(, 7),where (s, )=(s) ^-s.Proof. Defineu_(x)= Ψ(x)u(x),Ψ(x)= ∑_supp(ψ_ℓ)∩ M(,3)=∅ψ_ℓ(x).For a general w∈ H^2(M) we have the following estimate in Sobolev spaces with 3/2<s<2Ψ w_H^s(M)≤ CΨ_C^s(M)w_H^s(M)≤ C mc_2,0γ^sw_H^s(M),where m is the number of elements in the set {ℓ: supp(ψ_ℓ)∩ M(,3)=∅} satisfyingm≤ N(). Thus the existence of(s) such that the claim holds follows then from the finite intersection property of B(x_ℓ, 2 ), see Lemma <ref>, and estimates(<ref>).§.§ Unique continuation for approximate projectionsCorollary <ref>implies the following result.Note that the notations _2 and ℰ_2 are introduced in order to distinguish _2 from its upper bound ℰ_2, written as an expression dependent on _1. Later, in formula (<ref>) we set_1to have a specific value and substitute it in the expression ℰ_2( _1/4L; θ, ,Λ_s)of formula (<ref>) to obtain a specific value for _2.Assume that v satisfies v_H^s(M)≤ (s, ) Λ_sandv_L^2(M)≤Λ_s,with (s, ) defined in (<ref>), and assume (<ref>). Let _1 <Λ_s and _2 ≤_2(_1/4L;θ, , Λ_s)whereℰ_2( _1/4L; θ, ,Λ_s)= Λ_s ^s/(s-1)/(exp[( Λ_s 4L _1^-1γ^-(2-θ/2) exp( γ^-c_200) )^1/β])^s/(s-1) Let w= W(v) satisfyw_L^2(Γ̃(z_ℓ,_ℓ))≤_2on the domain (<ref>). 
Then, for ℓ∈ K_i, w(0, ·)_L^2(B(z_ℓ, _ℓ- 2))≤_1/4L,w(0, ·)_L^2(M(,- 2))≤1/4_1.Proof.From a small modification of the proof of Corollary<ref> we still can obtain the estimate (<ref>) in the following way.The main point is to replace the initial condition v_H^s(M)≤Λ_1 with (<ref>). We then deduce the corresponding estimate for the solution w=W(v) of the wave equation, with T=_ℓ and z=z_ℓ, w_H^s(M×[-T,T])≤ C(s, ) Λ_sandw_L^2(M×[-T,T])≤CΛ_s, Let η andη be the smooth localizers defined in the proof of Corollary<ref>.Calling again w_η = (1-η(x))w and using the definition ofin (<ref>) we get,η w_H^s(M× (-T+r_0/16,T-r_0/16))≤ C γ^-sΛ_s,ηw_H^s(M× (-T+r_0/16,T-r_0/16))≤ C γ^-sΛ_s, w_η_H^s(M× (-T+r_0/16,T-r_0/16))≤ C γ^-sΛ_s, and the intermediate H^m norms followby interpolation.Here the constant C isdependent ofc_3(s) and independent of . Consequently, η̃w_H^1(M × (-T+r_0/16, T-r_0/16))≤ C ^-1Λ_s^1/s_2^1-1/s,F_L^2(M × (-T+r_0/16, T-r_0/16))≤ C ^-2Λ_s^1/s_2^1-1/s.Using growth properties of the function f_θ we get (<ref>). Also (<ref>) still holds. Therefore we obtain (<ref>), where the new constantin (<ref>) now depends on c_3(s).Next we observe that formula (<ref>) implies that when_1=4Lℰ_1(_2; θ, γ, Λ_s), we have _2 = Λ_s γ^s/(s-1)/( exp[ (Λ_s 4L_1^-1γ^-(2-θ/2)exp(^-(c_200 θ/2)))^1/β] -1 )^s/(s-1),and ℰ_2 is definedby removing -1 from the denominator of the expression above, and by replacing exp(^-(c_200 θ/2)) with exp(^-c_200). This is done to simplify the calculations of the paper. The relation (<ref>) follows by imposing on _2 the ℰ_2-bound. Under the conditions of the Corollary and from the growth properties of ℰ_2(_1) it follows that _2 ≤ℰ_2( _1/4L; θ, ,Λ_s) ≤_1/4L, _1 ∈ (0, Λ_s].§.§ Approximate projectionsLet _0,_1,_2 satisfy _0 ≤Λ_s/10,_1=_0^2/10 Λ_s,_2 =_2(_1/4L, θ, , Λ_s ). §.§.§ Finite data with and without errorsBelow we will use several parameters, and for the sake of clarity of presentation, we have gathered these parameters in this subsection and tell how those will be used. Below,we will use ∈_+ satisfying≥ (_2/8;, Λ_s), where ( _*;, Λ_s)= ^-n(Λ_s/_*)^n/sand(s)= (s)^n/s^n/2(+1)^n/s.We also use∈_+ satisfying≤≤2^n/2()^n Moreover, we useδ ≤ δ_0(_2, , , Λ_s)=1/ _2/Λ_s,where =min(^-1 , (1+ 2 )^-1/2/100(1+D)^3/2L), and J satisfying( ^-1δ^-1)^n/2≤ J≤ (2δ^-1)^n/2,cf. Remark <ref>. Note that (<ref>) implies that λ_J≥δ^-1, see Def. <ref> (ii) and (<ref>).The use of the above parameters are the following. We will assume that we are giventhe ball (B_e(r_0), g^a) and the pairs {(λ_j^a, φ_j^a|_B_e(r_0)) ;j=0,1,2,…,J}.We assume that these data are δ-close to FISD of some manifold (M,g,p)∈_ n, that is, the ball (B_e(r_0), g) and{(λ_j, φ_j|_B_e(r_0)) ;j=0,1,2,…,J}, where the error size parameter δ satisfies (<ref>). We are going to formulate a minimizaton algorithm that will be used to computevolumes of the sets (<ref>).We consider this minimizaton algorithm in the two cases, in the case when we have FISD without errorsand the case when we have it witherrors. As we have finite data, we need to consider the projection of the solution of the wave equation to finitely many eigenvectors, and we chooseso that it is enough to useeigenvectors. This requires that we have the data(λ_j, φ_j|_B_e(r_0)) with j=0,1,2,…,. 
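The chain ε_0 → ε_1 → ε_2 → δ_0 makes explicit how small the data error must be. A sketch of this bookkeeping is given below, assembled from (<ref>) and the formula for ℰ_2 (with β = θ²/2). The uniform constant and the exponent c_200 are unknown here, so C = 1 and a toy exponent c_200 = 2 are used as placeholders (the paper has c_200 = 58(n+1)+1), and the function returns log_10(δ_0), since δ_0 itself underflows floating point:

```python
import math

# Sketch of the smallness chain eps_0 -> eps_1 -> eps_2 -> delta_0; the
# constants are placeholders, and delta_0 ~ eps_2/Lambda_s up to a uniform factor.
def log10_delta0(eps0, Lam_s=1.0, L=8, s=1.8, theta=0.5, gamma=0.5, C=1.0, c200=2.0):
    beta = theta ** 2 / 2
    eps1 = eps0 ** 2 / (10 * Lam_s)
    ln_inner = (math.log(C * Lam_s * 4 * L / eps1)     # log of the bracket in E_2
                + (2 - theta / 2) * math.log(1 / gamma)
                + gamma ** (-c200))
    ln_eps2 = (math.log(Lam_s) + (s / (s - 1)) * math.log(gamma)
               - (s / (s - 1)) * math.exp(ln_inner / beta))
    return (ln_eps2 - math.log(Lam_s)) / math.log(10)

print(log10_delta0(0.1))   # ~ -1e54: a modest target accuracy forces a tiny delta
```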
However, to consider minimization algorithms both for FISD with and without errors, we need toincrease the amount of data and we will consider(λ_j, φ_j|_B_e(r_0)) with j=0,1,2,…,, whereis chosen as follows: In Definition <ref>, there are intervals I_p⊂,p=0,1, …, P covering the spectrum of M in[0, δ^-1+ δ] eachI_p containing a cluster ofn_p eigenvalues λ_j and approximate eigenvalues λ_j^a.To consider these clusters of eigenvalues, let P_0 be the smallest integer P_0≤ Psuch that{λ_0,λ_1,…, λ_}⊂⋃_p=0^P_0 I_pand then choosesuch that≤≤ J and j≤λ_j∈⋃_p=0^P_0 I_p, j> λ_j∉⋃_p=0^P_0 I_p.We note that this happens with some satisfying (<ref>).We also observe that asδsatisfies (<ref>)and J satisfies (<ref>), and as Λ_s≥ 1, _2<1 and n≥ 2, we haveJ≥ J_0(δ) =(^-1δ^-1)^n/2≥. §.§.§ Minimisation with FISD without errorsLet _0,_1, _2 satisfy (<ref>).There is _0(_0;s, Λ_s) depending only on _0,s,Λ_s, n, R, D, i_0 and r_0,with the following properties: Let ≤_0(_0;s, Λ_s).Assume thatsatisfies (<ref>) andsatisfies (<ref>), and u(x)=∑_j=0^ a_jφ_j(x) ∈^̋s_Λ_s(M), Let i ∈{L, …, L-1+N_1(σ)} and ∈𝒜^(i).Moreover,assume that we are given (B(p, r_0),g|_B(p, r_0)),((λ_j, φ_j|_B(p, r_0)))_j=0^and(a_j)_j=0^. The data (<ref>) determine the set ^*_ m and the function Ł_ a:C^*_ m→, defined in (<ref>) and (<ref>), for which the minimizer of L_ a in ^*_ m isa sequence(d_j)_j=0^=(d_j(α,i))_j=0^∈^ such that v(x)=∑_j=0^d_j^φ_j(x) satisfies v ∈^̋s_ (2 (s, ) Λ_s)(M) ∩^̋0_2 Λ_s(M), v-χ_M(, -2)u_L^2(M)< _0,Theabove bound_0(_0; s, Λ_s) for γ is defined in (<ref>). Note that the sequence (d_j)_j=0^ is not unique and that the theorem states the existence of sequences satisfying (<ref>). The next subsections are devoted to the proof of Theorem <ref>. In sec. <ref>, <ref> and <ref> we keep the index i ∈{L, …, L-1+N_1(σ)} fixed not referring to this. §.§.§ Finite dimensional projectionsNext we introduce some special sets of the finite-dimensional functions.Let b=(b_j)_j=0^∈^(+1) and^*( b) be its Fourier coimage ^*( b)=∑_j=0^ b_jφ_j ∈ L^2(M).For a_1,a_2>0the class of Fourier coefficients _,s(a_1, a_2) is defined as _,s(a_1,a_2):={ b∈^(+1); ∑_j=0^(1+λ_j^2)^s|b_j|^2≤ a_1^2,∑_j=0^ |b_j|^2≤ a_2^2}.For w=W(v) being the solution to theproblem (<ref>) and b∈^(+1),we denote 𝒲( b)=W(^*( b)) ∈ C(;L^2(M))and, for any _* >0, ∈𝒜^i, we denote _, s( _*;a_1,a_2, α) = { b∈_ ,s(a_1, a_2) : W(^*( b))_L^2(Γ̃(z_ℓ, _ℓ ))≤_*,∀ℓ∈ K_i}.(i) Let v∈^̋s_ (s, ) Λ_s(M)and let P_j' be the orthoprojection P_j'v=∑_j=0^j'⟨v|,φ_j_L^2(M) φ_j. Then for any ∈𝒜^(i), _2>0, P_j'v-v_L^2(M)≤1/ 8(+1) _2, if j' ≥≥(_2/8;, Λ_s),see (<ref>) forand (<ref>) for( _2/8;, Λ_s). (ii) Let u ∈^̋s_Λ_s(M) and u_ begiven by (<ref>). Letsatisfy (<ref>)-(<ref>). Then, v_=P_ u_α∈^*(_, s( 1/8_2; 1/4(s;) Λ_s,Λ_s,α)).Proof. (i) For v = ∑_j=0^∞ b_jφ_j, we haveP_j'v-v^2_L^2(M)= ∑_j >j' |b_j|^2 ≤ |_j'|^-s(s; )^2Λ_s^2. Here, (s; ) is defined in (<ref>) and (<ref>) with s=0, and theseimpy the estimate(<ref>). (ii) Thefinite propagation speed of waves implies, due to u_|_ M(,)=0, that W( u_)|_Γ̃(z_ℓ, _ℓ)=0. By Lemma <ref> and (<ref>) W( v_)_L^2(Γ̃(z_ℓ, _ℓ))≤ W( v_-u_)_L^2(Γ̃(z_ℓ, _ℓ))≤v_ - u__L^2(M)≤1/8_2.Since P__H^s(M)=1for any s, the claim (i) of the lemma withj'=, (<ref>) together with(<ref>) prove (<ref>).The condition W(^*( b))_L^2(Γ̃(z_ℓ, _ℓ ))≤_*, see (<ref>), is equivalent to (∑_j=0^b_jcos(√(λ_jt)) φ_j(x))_L^2(Γ̃(z_ℓ, _ℓ))≤_*, ℓ∈ K_i. 
whichcan be directly verified if we know{(_j, φ_j|_B(p, r_0))}_j=0^.§.§.§ Minimisation algorithmAssume that we aregiven a=(a_j)_j=1^∈^(+1) and denoteu= ^*( a) ∈^̋s_Λ_s(M). Our next goal is to useFISD to find a vectorb ∈_, s((s, )Λ_s,Λ_s) such that ^*( b) is closeto χ_M()^*( a). To achieve this goalwe will use a minimisation method. Let _0,_1, _2 satisfy (<ref>).Let m∈{1,2,4} be a parameter we will use below, and _ m:=^*(^*), where^*_ m=_, s(1/2m_2; 1/m(s, )Λ_s, Λ_s, α).(i) A function v ∈_ m is called an _1-minimizer of the minimization problem min_h∈_ mŁ_u(h),where Ł_u(h)= h-u^2_L^2(M),if v satisfies v-u_L^2(M)≤ J_min(m)+ 5 Λ_s_1, J_min(m):=inf_h∈_ mh-u_L^2(M)^2.(ii)Equivalently, a vector b=(b_j)_j=0^∈^*_ mis an _1-minimizer of the minimization problem min_ c∈^*_ mŁ_ a( c),where Ł_ a( c)=c- a^2_^(+1),ifb- a_^(+1)^2≤ J_min(m)+ 5 Λ_s _1, J_min(m):=inf_ c∈^*_ m c- a_^(+1)^2. Observe that for c ∈_,s(1/m(s, )Λ_s, Λ_s) we can check, using Remark <ref> with _*= _2 /2m, that c∈^*_ m and thus find b which satisfies (<ref>). Next we assume that, in addition to_2 satisfying (<ref>),satisfies ≤γ_0=_0(_0; s, Λ_s)=(_1/Λ_s)^1/(b(s)),with (s)= 1/( 2 L (s) )^1/(2b(s)),_1 = _0^2/10Λ_s,where b(s) and (s) are defined in Lemma <ref>, b).Let u∈^̋s_Λ_s(M), and let_0,_1,_2,, satisfy (<ref>), (<ref>)-(<ref>) and(<ref>).(i) For m∈{1,2,4} and all h∈_ m, we have Ł_u(h) ≥u_L^2(M(, -2))^2-2 Λ_s _1 + h-u_L^2(M∖ M(, -2))^2.(ii) The function v_ defined by (<ref>), (<ref>)satisfiesv_∈_ m with m=4 and and Ł_u( v_) ≤u^2_L^2(M(, -2))+2 Λ_s_1 +4 _1^2. Note that here v_∈_ 4⊂_ 2⊂_ 1. (iii) For all m∈{1,2,4}, the function v_∈_ m is an _1-minimiser, Ł_u(v_)≤ J_min(m)+ 5Λ_s _1 .(iv) For all m∈{1,2,4}, we have |J_min(m)- u^2_L^2(M(, -2))|≤ 2 Λ_s_1 +4 _1^2.Proof. (i) We have, for h∈_ m, h-u_L^2(M)^2= h-u_L^2(M(, -2))^2 + h-u_L^2(M∖ M(, -2))^2 ≥ (u_L^2(M(, -2))-h_L^2(M(, -2)))^2 + h-u_L^2(M∖ M(, -2))^2.Since h ∈_ m,(<ref>), (<ref>) and (<ref>) imply that h_L^2(M(, -2))≤_1. Thus, h-u_L^2(M)^2≥ u_L^2(M(,-2))^2-2Λ_s _1 + _1^2 + h-u_L^2(M∖ M(,-2))^2. (ii) With u_,v_ defined by (<ref>) and(<ref>), v_∈_ 4,u- v_^2_L^2(M) = u- v_^2_L^2(M(, -2))+u- v_^2_L^2(M∖ M(, -2 )) ≤u_L^2(M(, -2))^2 +2Λ_s _2+_2^2 +2u- u_^2_L^2(M∖ M(, -2 )) + 2u_- v_^2_L^2(M∖ M(, -2 )),where we use that u- v_=(u- u_) +( u_ - v_) and v__L^2(M(, -2 ))^2≤_2^2, see Lemma <ref>. Observe, that by (<ref>), (<ref>) and (<ref>), u-u_^2_L^2(M ∖ M(, -2))= u^2_L^2(M(,7) ∖ M(, -2))≤(s) Λ_s^2 L^2^2b(s)≤1/2_1^2.where (s) is defined in Lemma <ref>, b). Using (<ref>) and (<ref>), we see thatu_-v_^2_L^2(M∖ M(-2 ))≤_2^2. Thus, inequality(<ref>)yields thatŁ_u(v_) ≤u^2_L^2(M(, -2))+2 Λ_s_2+ 3_2^2 +_1^2. As_2 ≤_1, see (<ref>),we get(<ref>). (iii) The claims (i) and (ii) together with (<ref>) yield that Ł_u(v_)- J_min(m)=Ł_u(v_) -min_h∈_ mŁ_u(h)≤( u^2_L^2(M(, -2))+2 Λ_s _1 + 4 _1^2 )- (u_L^2(M(, -2))^2-2 Λ_s _1 ) ≤ 5 Λ_s _1.(iv)The claim (iv) follows from (i) and (ii). Let m∈{1,2,4} andu∈^̋s_Λ_s(M), _0,_1, and _2 satisfy (<ref>) and(<ref>), satisfies (<ref>)-(<ref>) andsatisfies (<ref>). Let v^*= ∑_j=0^ b_j φ_j be any_1-minimizer ofthe minimization problem (<ref>), with b ∈^*_ m. Thenv^*-χ_(M ∖ M(, -2))u^2_L^2(M)≤_0^2.Proof. Since v_satisfies by (<ref>) and(<ref>),v^*-u_L^2(M)^2 ≤ v_-u_L^2(M)^2 +5 Λ_s _1≤u_L^2(M(, -2))^2 +7 Λ_s _1+ 4 _1^2.Since v^*-u satisfies (<ref>), this inequality implies thatv^*-u_L^2(M∖ M(,-2))^2 ≤ 9 Λ_s _1+ 4 _1^2.Since v^*∈, w^*=W(v^*) satisfies(<ref>) with^*=_2, where _2 satisfies (<ref>) and(<ref>). 
It then follows from Corollary <ref> that v^*^2_L^2(M(, -2))≤_1^2.Due to (<ref>), this inequalitytogether with(<ref>), implies(<ref>). Proof of Theorem <ref>. Assume that a:=(a_j)_j=0^ satisfies the hypothesis. First determine (b_j)_j=0^ so that v^*=∑_j=0^b_jφ_j(x) isan _1-minimizer of (<ref>),v^* ∈_, s(1/m(s, ) Λ_s, Λ_s) with m=1. Then, by (<ref>), χ_M(, - 2)u-∑_j=0^(a_j-b_j)φ_j _L^2(M)< _0.Take d_j=a_j-b_j. Thenv(x)= ∑_j=0^ d_j φ_j(x) satisfies (<ref>). §.§.§ Minimisation with finite interior spectraldata with errorsIn this section we consider an approximate constructionwhen thereis a δ-errorin FISD.We assume that we are given the ball (B_e(r_0), g^a) and the pairs(λ^a_j, φ^a_j|_B_e(r_0)) with j=0,1,2,…,J.We assume that these data are δ-close to ISD of some manifold (M,g,p)∈_ n in the sense of Definition <ref> with intervals I_p⊂,p=0,1, …, P covering the spectrum of -Δ_g in[0, δ^-1+ δ]. We will use parameters ,∈_+ and P_0∈_+ satisfying(<ref>)-(<ref>), (<ref>), and(<ref>). Note that then≤≤ J and that below we will use(λ^a_j, φ^a_j|_B_e(r_0)) with j=0,1,2,…,. Denote𝒥_p={j∈_+; λ^a_j∈ I_p} and n_p is the number of elements in 𝒥_p.Then, forany p there is A^p∈ O(n_p), p=1,2,…,P_0 such that, if j ∈𝒥_p thenφ_j= ∑_k ∈𝒥_p A^p_jkφ_ksatisfies φ_j-φ_j^a_L^2(M)<δ, where φ_k are the eigenfunctions of Δ_g. Note that ∑_p=0^P_0 n_p =+1.We use below the matrix E∈ O(+1), E=[e_jk ]_j, k=0^, e_jk=⟨|φ_k, φ_j_L^2(M) and note that e_jk=0if _j, _k do not lie in the same I_p.Let b =(b_0, b_1, …, b_) ∈^+1 then, for b^a=E( b), b^a =(b_0^a, b_1^a, …, b^a_) we have ∑_j=0^ b_j^a φ_j(x)=∑_j=0^ b_jφ_j(x).Also, let ω_jbe the center point of the interval I_p containingλ_j^a so that |λ_j^a-ω_j|<δ. The main goal of this section is to proveLet_0,_1,_2satisfy (<ref>).Letsatisfy(<ref>),satisfies (<ref>) andsatisfy (<ref>),δ satisfies (<ref>),and let J satisfiesJ_0(δ)≤ J ≤ 2^n/2^n J_0(δ) , where J_0(δ)=(2δ)^-n/2.Thenthe following is valid:Letz_1,…,z_ L-1+N_1(σ)∈ B(p, r_0/4) be a σ-net. Assume that g^a|_B_e(r_0) and ((λ_j^a,φ_j^a |_B_e( r_0)))_j=0^ isδ-close toFISD g|_B(p,r_0)and((λ_j,φ_j |_B(p,r_0)))_j=0^ ofa manifold (M,g,p)∈_ n. Also, assume that ã= (ã_j)_j=0^ satisfies ∑_j=0^⟨|_j^a ^s|ã_j|^2 ≤Λ_s^2, and ũ^a(x)= ^*(ã)=∑_j=0^ã_jφ_j(x), for x∈ M.Let ∈𝒜^(i).Assume that we are given g^a|_B_e(r_0), ((λ_j^a,φ_j^a |_B_e( r_0)))_j=0^,and (ã_j)_j=0^.Let i ∈{L, …, L-1+N_1(σ)} and ∈𝒜^(i).The data (<ref>) determine the set ^a,*_2 and the function Ł_ a:^a,*_2→, defined in (<ref>) and (<ref>),for which the minimizerof Ł_ a in^a,*_2 is a sequence d^a=d^a(,i)= ( d^a_j(,i))_j=0^∈^ ,such that v^a(x)= ^*(d^a)=∑_j=0^d_j^aφ_j(x), x∈ M, satisfies, cf. (<ref>), v^a ∈^̋s_ 2(s, ) Λ_s(M) ∩^̋0_ 2Λ_s(M), v^a-χ_M(, -2)ũ^a_L^2(M)< _0. §.§.§ Proof of Theorem <ref> The rest of this section will be devoted to the proof of Theorem<ref>.Similar to (<ref>),we introduce^*( b^a)= ∑_j=0^ b_j^aφ_j(x),x ∈ M;andwave-type functions w^a(x, t)=^a(b^a):=∑_j=0^ b^a_j cos(√(λ^a_j) t) φ^a_j(x),x ∈ B(p, r_0);w(x, t)=( b^a)(x, t)= W(^*( b^a))(x, t), x ∈ M;w(x, t)= ∑_j=1^b_j^a cos(√(ω_j) t ) φ_j(x),x ∈ M;w^a(x, t) =^a( b^a):= ∑_j=1^b_j^a cos(√(_j^a) t ) φ_j(x),x ∈ M,where we recall that W is defined by W(v)=w where w satisfies (<ref>), and ^a_ , s(a_1, a_2) ={b^a ∈^( +1): ∑_j=0^⟨|_j^a ^s | b_j^a|^2 ≤ a_1^2,∑_j=0^| b^a_j|^2 ≤ a_2^2} ^a_, s(_*;a_1, a_2, α)= {b^a ∈^a_ , s(a_1, a_2) ;^a(b^a) _L^2(Γ̃(z_ℓ, _ℓ))≤_*, ℓ∈ K_i }.We note that (see (<ref>) and (<ref>)) ^*( b^a)= ^*(E^-1 b^a),( b^a)=(E^-1 b^a). Let b^a ∈^a_ , s((, s) Λ_s, Λ_s). 
If δ <1 satisfies(<ref>) then,w-w^a_L^2(B(p, r_0)× (-2D, 2D))≤1/ 4_2 . Proof.Due to (<ref>) and (<ref>), for j, k ∈𝒥_p, |√(λ^a_j)-√(ω_k)| ≤ 2 √()δ,φ_j-φ_j^a_L^2(B(p, r_0))≤δ,φ_j^a_L^2(B(p, r_0))≤ 2.Using this, we obtain for |t|≤ 2D the following estimates. First, the Schwartz inequality implies thatw^a(·, t)-w^a(·, t)_L^2( B(p,r_0)) ≤(∑_j=0^ |b^a_j|)δ≤ (+1)^1/2( ∑_j=0^ (b^a_j)^2)^1/2δ≤ 2 Λ_s ()^1/2δ.Also, we see thatw^a(·, t)- w̃(·, t)^2_L^2( B(p,r_0)) ≤ ∑_j=0^ (cos (√(_j^a) t )-cos (√(ω_j) t ))^2 (b^a_j)^2≤(2D)^2 δ^2Λ_s^2 =4D^2 Λ_s^2δ^2 . We have w(x, t)=∑_p=0^P∑_j,k ∈𝒥_p b_j^acos(√(ω_k) t ) A^p_jkφ_k(x) =∑_k=0^ (∑_j ∈𝒥_p A^p_jk b_j^a) cos(√(ω_k) t ) φ_k(x)and w(x, t)= ∑_p=0^P∑_j,k ∈𝒥_p b_j^acos(√(_k) t ) A^p_jkφ_k(x) =∑_k=0^ (∑_j ∈𝒥_p A^p_jk b_j^a) cos(√(_k) t ) φ_k(x),and as A^p are orthogonal matrices and |√(_k)-√(ω_k)| ≤ 2 ^1/2δ,we see similarly to(<ref>) and (<ref>) w(·, t)- w(·, t)^2_L^2( B(p,r_0)) ≤ 4D^2 Λ_s^2δ^2 . Combining the above estimates withδ<δ_0(_2, , , Λ_s)= 1/ _2/Λ_s and =min(^-1 , (1+)^-1/2/100(1+D)^3/2L), we obtain the claim. By Definition <ref> we haveE _, s(1/2a_1, a_2) ⊂^a_, s( a_1, a_2) ⊂ E _, s( 2a_1, a_2).Note that the ℓ^2-norms of the sequences (b^a_j)_j=1^ do not depend on eigenvalues and, therefore, the same holds for the exact and approximate data. Also, the ℓ^2-norms are invariantwith respect to the operations involving orthogonal matrixes.Definitions of the sets of sequences in (<ref>) and (<ref>), Lemma <ref> and formula (<ref>) imply thatE_, s(_*-1/4_2;1/2 a_1, a_2, α)⊂^a_, s(_*;a_1, a_2, α) ⊂E_, s(_*+1/4_2;2a_1, a_2, α) Let us use _*=1/2_2 and define ^a,*_m=^a_, s(1/m_m, 1/2(, s) Λ_s, Λ_s, ), m∈{1,2,4} Using the notations in (<ref>), we see thatE ^*_ 4⊂^a,*_2⊂ E^*_ 1.Consider the quadratic function Ł_ a:^+1→, Ł_ a( c)=c- a^2_^(+1),Ł_E a( c)=c-E a^2_^(+1).cf. (<ref>). Note that Ł_ a( c)=Ł_E a(E c). Let b^*∈^*_4 and b^a,*∈^a,*_2 be minimizers of Ł_a andŁ_Ea, respectively, that is Ł_a( b^*)=min_ b∈^*_4Ł_a( b) =:J_min(4), and Ł_Ea( b^a,*)=min_ b^a∈^a,*_2 Ł_Ea( b^a)=:J_min^a(2).Note thatwe do not anymore consider _1-minimizers, but the minimizers. Since ^* and ^a,* are bounded and closed set in ^ +1 suchminimizers exist. When _1<Λ_s/8, Lemma <ref> (iv) implies |J_min(4)-J_min(1)|≤ 2(2 Λ_s_1 +4 _1^2)<5Λ_s_1 Using(<ref>), (<ref>),and the fact that E is an isometry, we see thatJ_min(1)≤ J_min^a(2)≤ J_min(4),and J_min^a(2) ≤ J_min(1)+5Λ_s_1.These implies thatthe minimizer b^a,* of function Ł_Ea in the set^a,*_2 satisfies b^a,*∈^a,*_2⊂ E ^*_ 1 and so we have that b̃^*=E^-1 b^a,* is an _1-minimizer of Ł_a in the class ^*_ 1.We denote b̃^*=( b̃^*_j )_j=1^. Let a =E^-1ã so that^*(a)=∑_j=0^a_jφ_j(x)=u(x).Then, by applying Lemma <ref> we see thatv^*= ∑_j=0^b̃^*_j φ_jsatisfies (<ref>). Then, choosing d_j^a=ã_j-b^a,*_j, j=0, 1, …,, we see that ṽ^a=∑_j=0^ d_j^a φ_j satisfies (<ref>). This proves Theorem <ref>.Similarly to Remark <ref> and Theorem <ref>, we see that if the collection ofg^a|_B_e(r_0) and ((λ_j^a,φ_j^a |_B_e( r_0)))_j=0^Jis δ-close toFISD ofa manifold (M,g,p)∈_ n then without loss of generality, we can assume that J satisfies (<ref>). Indeed, the eigenvalues λ_jwith index j>2^n/2^n J_0(δ)are not used in the proof of Theorem <ref>.§ CONSTRUCTION OF THE APPROXIMATE INTERIOR DISTANCE MAPS.§.§ Volume estimates Our next goal is to approximately evaluate the volume of M(α), see (<ref>) with b=0.There are uniform constants _0^* >0, (s)>1, depending only ons, n, R, D, i_0 and r_0, such that the following holds:Let _0 ≤_0^*. 
Let ε_1, ε_2 be defined by (<ref>), while ε_0, γ are defined by (<ref>) and (<ref>)–(<ref>). Assume that we are given (g^a|_{B_e(r_0)}; {(λ_j^a, φ^a_j|_{B_e(r_0)})}_{j=0}^{J}) that is δ-close to the FISD of (M,g,p)∈M_n. Here J satisfies (<ref>). Let also i∈{L,…,L-1+N_1}, where N_1 = N_1(σ) is defined as in Lemma <ref> (ii). Assume that σ ≤ τ_0/2, where τ_0 is defined in Proposition <ref>, and let α∈𝒜^(i) satisfy (<ref>). Then we can compute an approximate volume, vol^a(M(α)), of the set M(α) that satisfies |vol^a(M(α)) - vol(M(α))| ≤ C(s) ε_0.

Proof. Recall that φ_0(x) = vol(M)^{-1/2}, ℱ(φ_0) = (1,0,0,…), ‖φ_0‖_{H^s} = 1 for s>0. The interval I_0 = (a_0,b_0) in Definition <ref> contains only λ_0 = 0. Thus φ_0^a|_{B_e(r_0)} is δ-close to φ_0|_{B_e(r_0)} = φ̃_0|_{B_e(r_0)}. These allow us to evaluate vol^a(M) so that |vol^a(M) - vol(M)| < Cδ. Using Theorem <ref> we evaluate the Fourier coefficients (d^a_j)_j of v^a(x), which satisfies (<ref>) with ũ = φ_0. Let vol^a(M(α)) = vol^a(M) (∑_j (d^a_j)²)^{1/2}. Then, by (<ref>), |vol^a(M(α)) - vol(M(α,-2γ))| ≤ C(ε_0 + δ). Since |vol(M(α)) - vol(M(α,-2γ))| < C L γ (cf. Lemma <ref>), (<ref>) implies estimate (<ref>) if ε_0 ≤ ε_0^*, with some uniform constants C(s) and ε_0^*. Here ε_0^* is defined so that δ < ε_0 and γ < ε_0 for ε_0 < ε_0^*, see (<ref>), (<ref>).

Next we use FISD with errors to approximately find the distances from a variable point x∈M to points z∈B(p,r_0/4). The main tool is to approximately find the volumes of subdomains of M obtained by the slicing procedure. For i∈{L,…,L-1+N_1(σ)} and β∈𝒜^(i), M(β) are the domains defined in (<ref>) with α replaced by β. We consider the intersection of slices,

M^*_(i)(β) = ⋂_{ℓ∈K_i} (B(z_ℓ, β_ℓ+2σ) ∖ B(z_ℓ, β_ℓ-2σ)) = (⋂_{ℓ∈K_i} B(z_ℓ, β_ℓ+2σ)) ∩ (⋃_{ℓ∈K_i} B(z_ℓ, β_ℓ-2σ))^c.

Here, for Ω⊂M, Ω^c = M∖Ω. Note that

vol((⋂_{ℓ∈K_i} Ω_ℓ) ∩ Ω^c) = ∑_{ℓ∈K_i} vol(Ω_ℓ∪Ω) - ∑_{ℓ<ℓ'} vol(Ω_ℓ∪Ω_{ℓ'}∪Ω) + … + (-1)^{L+1} vol((⋃_{ℓ∈K_i} Ω_ℓ)∪Ω) - vol(Ω).

By (<ref>), M^*_(i)(β) has the form (<ref>) with Ω_ℓ = B(z_ℓ, β_ℓ+2σ), Ω = ⋃_{ℓ∈K_i} B(z_ℓ, β_ℓ-2σ). For any α_1, α_2 ∈ 𝒜^(i) we have M(α_1)∪M(α_2) = M(α_m), where (α_m)_ℓ = max((α_1)_ℓ, (α_2)_ℓ). Therefore all terms in (<ref>) are of the form vol(M(α)) for some α∈𝒜^(i). Thus, using Lemma <ref>, we can approximately compute each term of (<ref>) with error ε_0. Since there are 2^{L+1} terms in (<ref>), we obtain the following result.

Under the conditions of Lemma <ref>, there exist ε_4^*(n,R,D,i_0,r_0) > 0 and c∈(0,1), depending only on n, R, D, i_0 and r_0, with the following property: Let 0 < ε_4 < ε_4^*(n,R,D,i_0,r_0). It is possible to evaluate approximate volumes, vol^a(M^*_(i)(β)), of the sets M^*_(i)(β) of form (<ref>). Moreover, |vol^a(M^*_(i)(β)) - vol(M^*_(i)(β))| ≤ ε_4 if ε_0 ≤ c ε_4.

§.§ Distance functions approximation

A function r(·)∈C^{0,1}(B(p,r_0/4)) is an interior distance function if there is x∈M such that r(z) = r_x(z) = d(x,z) for any z∈B(p,r_0/4). The interior distance functions determine the interior distance map R_M: (M,g) → L^∞(B(p,r_0/4)), R_M(x) = r_x(·). The map R_M or, more precisely, its image R_M(M) := {r_x(·), x∈M} ⊂ L^∞(B(p,r_0/4)), may be used to reconstruct (M,g). Namely, in <cit.>, <cit.> it was shown how to reconstruct (N, g|_N), where N = M∖B(p,r_0/50), from the knowledge of the boundary distance functions R_N(N) = {r^N_x(·) ∈ L^∞(∂N); x∈N}, r_x^N(z) = d_N(x,z), where d_N is the distance in N. Later, in Section <ref>, we show that a Hausdorff approximation R^*_M to R_M(M) makes it possible to construct an approximation R^*_N to R_N(N).
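The inclusion–exclusion identity above is what makes the slices computable: every term on the right-hand side is the volume of a union of balls, hence again of a set of the form M(α), which Lemma <ref> can evaluate. A minimal numerical check of the identity, with random boolean masks playing the role of the sets Ω_ℓ and Ω, might look as follows (our own illustration):

```python
import numpy as np
from itertools import combinations

# Check vol((Omega_1 ∩ ... ∩ Omega_L) ∩ Omega^c) against the sum over unions.
rng = np.random.default_rng(0)
n_pts, L = 10_000, 3
Om = rng.random((L, n_pts)) < 0.6        # Omega_ell
Om0 = rng.random(n_pts) < 0.4            # Omega (the set to be removed)

direct = np.mean(np.all(Om, axis=0) & ~Om0)

incl_excl = -np.mean(Om0)
for k in range(1, L + 1):
    for S in combinations(range(L), k):
        union = np.any(Om[list(S)], axis=0) | Om0
        incl_excl += (-1) ** (k + 1) * np.mean(union)

print(abs(direct - incl_excl))           # ~ 0 up to rounding
```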
Thus, our next goal is to construct a desired approximation R_M^*.To this end, we use the volume approximations of the previous subsection.First, for z,z'∈ B(p,r_0/2), we definean approximate distance d^a(z, z') using the metric g^a. Then Definition <ref> (iv) togetherwith convexity of B(p, r_0),see (<ref>), imply that|d^a(z, z') -d(z,z')| ≤σ, ifδ < σ.Recall that above we have used a parameter σ>0whichsatisfy σ≤τ_0/2and we have chosen points {z_1,…,z_ L-1+N_1(σ)}⊂ B(p, r_0/4) such that {z_1,…,z_L-1} is a τ_0-net in B(p, r_0/4). Moreover, the set {z_L,…,z_ L-1+N_1(σ)} is a maximal σ-separated set in B(p, r_0/4), see (<ref>). For any i∈{ L, …, L-1+N_1(σ)}and β=(β_ℓ)_ℓ=1^ L-1+N_1(σ)∈^ L-1+N_1(σ),r_0/8 < β_ℓ < 2D,cf. (<ref>), we define ^(i)(β)=β^(i)∈^ L-1+N_1(σ), whereβ^(i)_ℓ=β_ℓ, if ℓ∈ K_i, β^(i)_ℓ=0, if ℓ∉K_i.Then, ^(i)(β) ∈𝒜^(i).Observe that, for any x ∈ M ∖ B(p, 3r_0/8+σ) and ℓ=1, …, L-1+N_1(σ)there is β_ℓ(x)∈σ_+ such thatβ_ℓ(x)-σ≤ d_M(x,z_ℓ)≤β_ℓ(x)+σ. Therefore, B(x, σ) ⊂ B(z_ℓ, β_ℓ(x) +2 σ) ∖ B(z_ℓ, β_ℓ(x)-2σ), so that, due to (<ref>),(B(z_ℓ, β_ℓ(x) +2 σ) ∖ B(z_ℓ, β_ℓ(x)-2σ)≥1/σ ^n.Taking into account this inequality together with (<ref>) we require _4≤1/4σ ^n.Thus, for i∈{ L, …,L-1+N_1(σ)}, the volume and the approximate volume of the setM^*_ (i)(β^(i)(x)), β^(i)(x)=^(i)(β(x)) satisfy (M^*_ (i)(β^(i)(x))) ≥ 4 _4, ^a(M^*_ (i)(β^(i)(x))) ≥ 3 _4, where we use (<ref>).The above considerations motivate the following definition.In order to use only finitely many indexes β, in the following we are going to considerβ=(β_i)_ℓ=1^ L-1+N_1(σ) where β_i ∈σℤ_+, β_i≤ 2D. Let β =(β_ℓ)_ℓ=1^ L-1+N_1(σ)∈σℤ_+^ L-1+N_1(σ)⊂ℝ_+^ L-1+N_1(σ). Such sequenceβ is calledadmissible, ifr_0/8≤β_ℓ≤ 2D and for all indexesi∈{L,…,L-1+N_1(σ)},the modified index β^(i)=^(i) (β)∈𝒜^(i) satisfies ^a(M^*_(i)(β^(i))) ≥ 3_4.We define the set ℬ ={β∈σℤ_+^ L-1+N_1(σ); β}.For any x ∈ M∖ B(p, 3r_0/8+σ), there exists an admissible β∈σℤ_+^ L-1+N_1(σ) such that |d(x, z_ℓ)-β_ℓ| ≤ 2 σ, for ℓ∈{1,2,…,L-1+N_1(σ)}.Conversely, there is >0 depending only on n, R, D, i_0 and r_0, such that, if β is admissible, thenthere is x=x_β∈ M ∖ B(p, 3r_0/8- σ) such that, for all ℓ∈{1,2,…,L-1+N_1(σ)}, we have|β_ℓ-d(x,z_ℓ)|≤σ.Proof.The first statement follows from considerations before Definition <ref>.On the other hand, assume that β∈ℬ. Then equations (<ref>) and (<ref>) guaranteethat, for any i∈{L,…,L-1+N_1(σ)},there is x_i ∈ M^*_ (i)(𝒯^(i)(β)).Moreover, we have|d(x_i, z_ℓ) -β_ℓ| ≤ 2 σ,forℓ∈{1, …, L-1}∪{ i}. Moreover, in view of (<ref>), for j, k ∈{L, …, L-1+N_1(σ)},d_M(x_j, x_k) ≤ |H^L(x_j)-H^L(x_k)| ≤ 4 √(L)σ.Defining = 4 √(L)+3,and taking x=x_ i_1witharbitrary i_1, we see that x ∈M ∖ B(p, 3r_0/8-σ) and that(<ref>) is satisfied. For the points {z_ℓ: ℓ∈{1,…,L-1+N_1(σ)}⊂ B(p, r_0/4), let V_ℓ ={y∈ B(p, r_0/4): z_ℓ is the unique closest point to y in { z_ℓ'}} ,where ℓ=1, …, L-1+N_1(σ),be the correspondingVoronoi region. With anyβ∈ℬ we then associate a piecewise constant function r_β∈ L^∞(B(p, r_0/4)) by definingr_β(z)= β_ℓ, for z ∈ V_ℓ. Clearly,d_L^∞(M)(r_β,r_x) ≤σ, =+2.LetR^*_M, >={r_β(·): β∈ℬ }⊂L^∞(B(p, r_0/4)). Choose amaximalσ-net {x_1, …, x_ N_2(σ)}⊂ B(p, r_0/2) by adding to z_1, …, z_ N_0(σ) a σ-netz_ N_0(σ)+1, …, z_N_2(σ) in B(p,r_0/2) ∖ B(p, r_0/4).Again, using Lemma <ref>, we see thatN_2(σ) ≤σ^-n. 
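The piecewise constant functions r_β over the Voronoi regions V_ℓ introduced above admit a very direct numerical realization. The following sketch is purely illustrative (the names and the toy data are ours): a point z is assigned to the Voronoi region of the nearest net point z_ℓ, and r_β takes the value β_ℓ there.

```python
import numpy as np

def eval_r_beta(z, net_points, beta):
    """Evaluate the piecewise constant function r_beta(z) = beta_l for z in
    the Voronoi region V_l of the net point z_l nearest to z."""
    dists = np.linalg.norm(net_points - z, axis=1)
    return beta[np.argmin(dists)]

# toy data: a sigma-net in a ball and an index vector beta with entries
# in sigma * Z_+, as in the definition of admissible sequences
rng = np.random.default_rng(1)
net = rng.uniform(-0.25, 0.25, size=(20, 3))   # stand-in for {z_1, ..., z_L}
sigma = 0.05
beta = sigma * rng.integers(3, 40, size=20)
print(eval_r_beta(np.zeros(3), net, beta))
```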
Next we definer_k(z)=d^a(x_k, z_ℓ), for z ∈ V_ℓ,k=1, …,N_2(σ),ℓ=1,…, L-1+N_1(σ); R^*_M,<={r_k(·): k=1, …,N_2(σ)}⊂ L^∞(B(p, r_0/4)), R^*_M =R^*_M,> ∪ R^*_M, <.In Figure <ref>, we consider the setX={x_β : β∈ℬ}∪{x_1, …, x_ N_2(σ)}⊂ M.Thus, denoting = 2+2+1, see (<ref>) and(<ref>),we obtainWe haved_H(R_M(M), R^*_M) ≤σ,where d_H is the Hausdorff distance in L^∞(B(p, r_0/4)). §PROOF OF THEOREM <REF> AND PROPOSITION <REF>§.§ From interior distance functions to boundary distance functionsBy standard estimates for the differential of the exponential map, see <cit.> the diameter of the sphere B(p,r), r<r_0, is bounded( B(p,r))≤π r sinh( √(K) r)/√(K) r≤π rcosh(π/2)≤ 10r,where we use condition(<ref>). Let N=M∖ B(p,r_0/25). Letx ∈ M∖ B(p,r_0/4) and y∈ N and z∈ B(p,r_0/4), let f(y, x, z)=d_N(y,z)+d_M(z,x), f(y,x)=min_z_1∈ B(p,r_0/4)f(x, y; z_1), where d_Nand d_M are the distances in N and M, respectively. Then,d_N(y,x)=f(y,x)Proof. Clearly, as d_M(z,x)≤ d_N(z,x) andashortest curve in N from y to x intersects the sphere B(p,r_0/4), we see thatd_N(y,x)≥ f(y,x).On the other hand let z'= argmin_z(f(y, x; z)) and μ([0, f(y,x)] be the corresponding union of the distance minimizing paths from yto z' and from z' to x for which the minimum in (<ref>) is achieved. Denote s_1= d_N(y,z') and consider μ([s_1, f(y, x)]. Weshow next that μ([s_1, f(y, x)] ⊂ N.If this is not the case, there would exists s_1 < s_2 <s_3 <f(y,x) such that μ(s_1), μ(s_3) ∈ B(p,r_0/4), μ(s_2) ∈ B(r_0/25) and μ[s_3, f(y,x)] ⊂ M ∖ B(p,r_0/4). Then,s_1 ≥ r_0(1/4-1/25), s_2 -s_1 ≥r_0(1/4-1/25),s_3 -s_2 ≥r_0(1/4-1/25).On the other hand, consider a curve μ'([0, l]) which is parametrised by the arclength and consists of the radial path from μ(s_3) to y' ∈ B(r_0/25) followed by a shortest path along B(r_0/25) from y' to y. Due to (<ref>) and (<ref>),l ≤ r_0(10/25+1/4-1/25) <3 r_0 (1/4-1/25) ≤ s_3.Taking the union of the path μ'([0, l]), connectingμ(s_3) to y', and the pathμ(s_3, f(y, x)), connecting y' to x, we get a contradiction to definition (<ref>). Thus, μ([s_1, f(y, x)]) ⊂ N, i.e.,d_N(y, x) ≤ f(y, x). Next, using the already constructed set R^*, see (<ref>) together with Lemmata <ref> and <ref>,we construct a set R^*(N) ⊂ L^∞( N)whichapproximates R^ N(N)= {r_x^ N∈ L^∞( N):x∈ N}, where r_x^ N(z)=d_N(x,z),forz∈ B(p, r_0/25). Let R^*be the set given in (<ref>), which satisfies (<ref>) be given. Then it defines a set R^*(N) ⊂ L^∞( N) such thatd_H(R^ N(N),R^*(N)) ≤σ, = 2+2+1 .Hereis defined in (<ref>) andis defined in (<ref>).Note that here we assume that δ satisfies (<ref>), σ satisfies (<ref>) with the related equations for _4, _0, etc.Proof. The proof is based on the construction of R^*(N) which satisfies (<ref>).Observe first that it follows from the proof of Lemma <ref> that, if x, y ∈ B(p, r_0/4) ∖ B(p, r_0/25) ⊂ N, thend_N(x, y) ≤r_0/2 + 8 r_0/25,so that a shortest path in N connecting x and y lies in B(p,r_0). Thus it is possible, using (<ref>), to construct an approximation r̃_x^ N: N→ that satisfies r_x^ N-r̃_x^ N_L^∞( N)≤σ,with a uniform constant , cf. (<ref>). Denote R^*_<(N)={r̃_x^ N;x ∈ B(p, r_0/4) ∖ B(p, r_0/25) }, thend_H(R^ N(B(p, r_0/4)),R^*_<(N)) ≤σ,for δ< δ_0, cf. construction of R^*_< in subsection <ref>. Next, letR^*_c={r ∈ R^*: min_z ∈ N(r(z)) ≥r_0/8} For y,z ∈ B(p, r_0/4) ∖ B(p, r_0/25) denote by d^a_N(y,z) the distance between y and z in the metric g^a along the curves lying in B(p, r_0/2) ∖ B(p, r_0/25). 
For each r ∈ R^*_c we define r̃^ N∈ L^∞( N): r̃^ N(y)= inf_z ∈ B(p, r_0/4)(d^a_N(z, y)+ r(z)); R^*_>(N)={r̃^ N(·):r ∈ R^*_c}.Then, with R^*(N)= R^*_<(N) ∪ R^*_>(N), we have thatd_H(R^ N(N),R^*(N)) ≤ (2+)σ=σ.Here σ error comes from an approximation of d_N(y, z), see(<ref>), and 2σ error comes from approximating d_M(z, x) and d_N(y, z) in formula (<ref>), see also (<ref>)-(<ref>) and (<ref>). At last, we use again that δ satisfies theuniformly bound(<ref>).Recall that the metric tensor gon B(p,r_0) is a representation of a metric in Riemannian normal coordinates and the C^2,α-norm of the metric is uniformly bounded. Using the fundamental equations of the Riemannian geometry, <cit.>, we have that the shape operator S of the surface B(p,r), r <r_0, can be given in the Riemannian normal coordinates centered at p in terms of the metric tensor as S=g^-1_ν g, where νis the unit normal vector ofB(p,r). Taking r=r_0/25, we seethat theC^1,α-norm of the shape operator S of N is uniformly bounded. Also, by (<ref>) the boundary injectivity radius of (N,g|_N) is bounded below by 24/25 i_0. As the sectional curvature of M and the second fundamental form (that is equivalent to the shape operator) of its submanifold N are bounded, the Gauss-Codazzi equations imply that the sectional curvature of N is bounded. As the metric tensor of M is bounded in normal coordinates in B(p,r_0), we see that the (n-1)-dimensional volume ofN= B(p,r_0/25) is bounded from below by a uniform constant. Thus by Cheeger's theorem, see <cit.>, the injectivity radius of N is bounded from below by a uniform constant.Summarising the above, the Ricci curvature of (N,g|_N) is uniformly bounded in C^α, the second fundamental form of N is uniformly bounded in C^1,α, and the diameter and injectivity radii of N and N, and the boundary injectivity radius of (N,N) are uniformly bounded. By <cit.>, using the knowledge of the set, R^*(N) of approximate boundary distance functions, which are σ-Hausdorff close tothe set, R^ N(N)of the boundary distance functionsof manifold (N,g|_N), one can construct on the setR^*(N)a new distance function d^*_N:R^*(N) × R^*(N) →_+, such thatd_GH((N,d_N),(R^*(N),d^*_N))≤( σ)^1/36,with a uniform >0. Having constructed (R^*(N),d^*_N) we can now construct an approximate metric space (M^*, d^*_M) which is ( σ)^1/36- close to (M, d_M). Indeed, let x, y ∈ N and μ[0, l], l=d_M(x, y) be a shortest between x and y. Ifμ[0, l] ⊂ N then d_M(x, y))=d_N(x, y). If, however, μ[0, l] intersects with B(p, r_0/25)then, due to the convexity of B(p, r_0/25), there are 0<s_1 <s_2<l such that μ[0, s_1] ⊂ N, μ[s_1, s_2] ⊂ B(p, r_0/25), μ[s_2, l] ⊂ N.Therefore,similar to Lemma <ref>, we obtainLet x, y ∈ N. Then d_M(x, y) =min(d_N(x, y), min_z_1, z_2 ∈ B(p, r_0/25)[d_N(x, z_1)+ d_M(z_1, z_2)+d_N(z_2, y)]). Next define, for r̃^ N_1, r̃_2^ N∈ R^*(N), d^*_M(r̃_1^ N, r̃_2^N)=min(d^*_N(r̃_1^ N, r̃_2^ N), min_z_1, z_2 ∈ B(p, r_0/25)[r̃_1^ N(z_1)+ d^a(z_1, z_2)+r̃_2^ N(z_2)])Using (<ref>) together with (<ref>), (<ref>) and(<ref>), we see thatd_GH((N, d_M),(R^*(N), d^*_M) ≤(2 +1)( σ)^1/36 ifσ≤ (σ)^1/36.Here (N, d_M) isthe manifold N with the distance function inherited from M andδ <δ_0, cf. (<ref>).Let us define the disjoint union M^*= R^*(N) ∪ B(p, r_0/25). Next wedefine a metric d^*_M on this set. To this end, consider first r̃^ N∈ R^*(N),y ∈ B(p, r_0/25). Recall, see the proof of Lemma <ref>, that the set R^*(N) is bijective with R^*_c ∪(B(p, r_0/4) ∖ B(p, r_0/25)). 
In the case when r̃^ N is obtained from r ∈ R^*_c, we define d_M^*(r̃^ N, y)=r(y).Moreover, in the case whenr̃^ N is obtained from x ∈ B(p, r_0/4) ∖ B(p, r_0/25), we define d_M^*(r̃^ N, y)=d^a(x, y). At last, if x, y ∈ B(p, r_0/25), we take d_M^*(x, y) =d^a(x, y). It follows from (<ref>) together with equations (<ref>), (<ref>), (<ref>) and considerations preceding Lemma <ref> thatd_GH((M^*, d^*_M), (M, d_M)) ≤ (2 +1)( σ)^1/36.Summarizing, we obtainLet R^*satisfy(<ref>) and M^*= R^*(N) ∪ B(p, r_0/25) with metric d^*_M. Then,d_GH((M, d_M), (M^*, d_M^*)) ≤σ^1/36, = (2 +1) ^1/36.§.§ Proof of Theorem <ref> and Proposition <ref>Proof of Proposition <ref>.To prove the statement of the Proposition,we collect all the previous estimates. The aim is to find the relation between the final error(i.e. d_GH((M, d_M), (M^*, d_M^*)) ≤) and the initial error δ. We proceed by following the chain ofrelations: ↦σ↦_4 ↦_0 ↦_1 ↦↦_2 ↦↦↦δ.To obtain inequality (<ref>) from (<ref>) we setσ = ( /)^36 and use it in (<ref>), (<ref>), (<ref>) and (<ref>) with Λ_s=1 to determine values of _4, _0 and _1 by setting_4 =^36n/4 ^36n, _0 =_4 ≤1/10, and _1 = ^72n =^2 /160 ^2 ^72n. To defineso that(<ref>), (<ref>), and (<ref>)are valid, we set = _1^1/b(s), = min(^-1/(2n)^-36, , r_0/32 ).Here we have used that σ = _1^1/(2n)^-1/(2n)^-36 and noticed that b(s) < 2n. From (<ref>)and(<ref>)we get_2=(_1^1/b(s))^s/(s-1)/( exp[ ( 4L_1^-1(_1^1/b(s))^-2+θ/2 exp(^-c_200_1^-c_200/b(s)) )^1/β])^s/(s-1) with _1 given by (<ref>).Finally, to chooseand δ so that(<ref>), (<ref>), (<ref>)(<ref>) are satisfied,we set=( _2/8;, 1)= ^-n 8^-n/s_1^-n/b(s)_2^-n/s,with _2 given by (<ref>), and choose δ so that δ ≤ 2^-n/2^-n()^-1_2 =2^-n/2^-n^-1^n8^ n/s_1^n/b(s)_2^1+n/s=_1^exp[- _1^-exp(_1^-) ], withC_34= 1/b(s)(n+ s+n/s-1) ,C_35= (s+n)/(s-1)( 4L C_12^-(2- θ/2))^1/β, C_36=^-c_200/β,C_37= 1/β(1+ 1/b(s)(2-θ/2)), C_38=c_200/b(s), C_39= 2^-n/2^-n^-18^ n/s^(s+n)/(s-1)+n.We use the inequality x≤exp(x)to bound from below the right hand side of theestimate above to obtain, by calling = max(C_34,C_37,C_38,1/(2n)), δ≤exp[-exp((C_39^-1+ C_35 +C_36 )_1^- ) ].Notice that (<ref>) is also satisfied,by replacingC_39 with =/(C_33^36^1/(2n)) and by including 1/(2n) in . Assuming 0< δ≤exp(-e), we get ( ^-1+ C_35 +C_36 )/ln(ln1/δ)≤_1^,Let τ_0 be the uniform constant introduced in Proposition <ref> and defineC_44 = min(1000^- , ^(^36τ_0/2)^2n ). In this way we can set in (<ref>) the two constraints (<ref>) and _1 ≤ 1/ 1000 (derived from (<ref>) with Λ_s=1)and obtain δ≤δ^*, δ^* = min( exp(-e), exp[-exp [C_44^-1( ^-1+ C_35 +C_36 )]]).Finally by using (<ref>) to rewrite _1 in (<ref>), and defining =1/(72n ) and = ( ^-1+ C_35 +C_36 )^ ^-1/(72n), we obtain (<ref>). Proof of Theorem <ref>. Let δ≤δ^* and let the ISD of M^(i), i=1,2 be δ-close.Take the finite collection𝒟 = ((B^e(r_0), g^(1)), {(_j^(1), φ_j^(1)) }_j=0^J), where the index (1) is related to the IDS of M^(1).By construction the data 𝒟_0 are δ-close to the ISD of both M^(1) and M^(2). ByProposition <ref> the metric space (M^*, d^*_M) constructed with these data is -close to both (M^(i), d^(i)), i=1,2, whereis given bythe right hand sideof (<ref>). We then conclude by triangular inequality,for any 0 <δ≤δ^*,d_GH((M^(1), d^(1)), (M^(2), d^(2))) ≤ 2 We now extend this estimate to the case δ∈ (0, exp(-e)], when δ^* < exp(-e). To this end, observe that the definition of the GH-topology and (<ref>) imply that: d_GH((M^(1), d^(1)), (M^(2), d^(2))) ≤ D, for any δ. 
By combining the latter inequality and (<ref>)we obtain the inequality (<ref>) with =max(2, D(ln(-lnδ^*))^). Acknowledgements RB and ML were partially supported by Academy of Finland, projects303754, 284715 and 263235. YK was partially supported byEPSRC grant EP/L01937X and Institute Henri Poincare. Table of constants C_k, c_k, τ_0 and s.Note that all constant depend on n, R, D, i_0, r_0 and variablesin brackets. NameIntroduced in / NotesNameIntroduced in / Notes Thm. <ref>Thm. <ref>Prop. <ref> Cor. <ref> Cor. <ref>(<ref>) C^(har) (<ref>) C^(Lip) (<ref>)(<ref>) Prop. <ref>Prop. <ref> Prop. <ref> Lemma<ref> (θ)see (<ref>), we use θ = 1/2(<ref>), we use=1-θ/2 c_200 (<ref>) c_205 ( θ) (<ref>), Appendixc_206(,θ) (<ref>), Appendix(<ref>) τ_0 Prop. <ref>Lemma <ref> (s) Lemma <ref> b(n)Lemma <ref> (s)Lemma <ref> s (<ref>) (s) (<ref>) (s;) Lemma <ref>(<ref>)(<ref>) (s) Lemma <ref> Lemma <ref>(<ref>)(<ref>)(<ref>)(<ref>)(<ref>)(<ref>) Lemma <ref>(<ref>)(<ref>)(<ref>)(<ref>)(<ref>)(<ref>)(<ref>)(<ref>)(<ref>)(<ref>)(<ref>)(<ref>)(<ref>)(<ref>)(<ref>)§ APPENDIX§.§Calculation of c_206(,θ) in Theorem <ref> To prove Theorem <ref> we need to show that the solution w of the wave equationP(y,D) w(y)= q̃(y), y ∈Ω(T)⊂^n+1 can be estimated in the set (z,γ,T)⊂^n+1 in (<ref>) that is between two double cones of the spacetime, i.e. Σ(z,γ,T) ⊂(z,γ,T)⊂Σ(z,0,T). Here, γ>0 is the parameter that indicates how close the set (z,γ,T)is to the optimal double cone Σ(z,0,T). Theorem <ref> is proven by applying a proper iterative procedure and the dependency of the coefficient c_206 on γ is crucial for our considerations. The calculation ofc_206 can be consider as the final step of a long geometric construction. In order to understand it we summarize the previous steps with related references.In Section 3 of <cit.> we calculated the parameters of the inequality associated with a (conormally) pseudo-convex function ψ with respect to the wave operator P(y,D). Then we used this property to calculate the coefficients of the Tataru inequality (recalled in the following section <ref>)e^-ϵ |D_0|^2/2τ e^τ f u_1,τ≤ c_1,T τ^-1/2e^-ϵ |D_0|^2/2τ e^τ f P(y,D)u_0 + c_2,T e^-τ R_2^2/4ϵe^τ f u_1,τ and to prove the local stability of the unique continuation for the wave operator. In<cit.> we used the previous result to prove the global stability of the unique continuation for the wave operator. As recalled in the following section <ref>, the proof is based on the iteration N times of the a local stability for the 'low temporal frequency' component of the solution u of the wave equation:A(D_0/ω)b((y-y_j)/r)u_j_H^1≤ c_155,jexp(-c_132μ_j^α^2),∀ ω≤μ_j^α/(3c_131).Moreover in Section 3.1. of <cit.> and Appendix A of <cit.>we applied the stability result in the domain of influence of a cylinder.In the case of the present paper, the mentioned domain of influence is called Σ(z,0,T) in Theorem 2 and, according to the iterative procedure, it contains a covering of the set Λ=(z,γ,T). The balls of the covering have radius 2R, that depends on the distance to the boundary, on the regularity and pseudo-convexity property of the function ψ, and on extra constraints imposed by the Tataru inequality. The local stability step holds for smaller balls with radius r, and r < R. In Table 1 we summarize these values and in particular we obtain, up to a multiplicative constant,r∼γ^58.The ∼ symbol is defined precisely in subsection <ref>. 
Byconstruction and for (<ref>), one can calculate the number of local steps of the iteration (see also Table 2)N∼γ^-58(n+1).These two values combined with the calculation of the coefficients for the local and the global stability lead to the following relationship between c_206 and γ c_206= ζ_1exp(γ^-ζ_2), ζ_1, ζ_2.We will prove that formula (<ref>) plays a big role for the calculation of ζ_2.In both articles <cit.> we used consistent notations for the geometric quantities and labeled the important coefficient as c_h, with a unique h≥ 100 in order to be able to follow the construction of the final parameters. One can find themin those papers by searching for the corresponding index h.Here in this Appendix our focus is the dependency of all the parameters (in particular c_206) on the quantity γ,since this reflects the cost of getting close to the cone of dependence. For this reason in the following section <ref> we quickly introduce the main relationship between γ and the used Gevrey function localizers, and in the next sections we recalculate the main coefficients of the above results and we summarize them in the Tables 1 and 2.We will follow the same notation as in <cit.>. Unfortunately it was not possible to use an analogous notation in the rest of the present paper. Anyway we will point out the different notations. §.§.§ Gevrey functions and dependency on γ Assumption. Let α∈ [1/3,1), and let T, ℓ, γ be defined as in Assumption A5, <cit.>.(In the present paper, this corresponds to conditions (<ref>) and (<ref>)).Gevrey functions are used as smooth localizers in the constructions andtheir main properties are outlined in Section 4 of <cit.>. In particular in our calculations we consider the following Gevrey function (see <cit.>, Ex 1.4.9 for definition):χ_1(t) = χ(1+t)χ(1-t), χ(s) = exp(-s^α/α-1) ,χ(s)=0 .One can slightly modify the definition such that χ_1 = 1 in a ball B_1 ⊂(with radius 1), χ_1 = 0 outside the ball B_2 (with radius 2), and 0≤χ_1 ≤ 1. Observe that χ_1 ∈ G^1/α_0() since |D^κχ_1(v)| ≤ c_0Xc^|κ|_1X |κ|^|κ|/α,c_0X=O(1),c_1X=O(1/1-α).Here the symbol O (big-O) means “comparable up to a an absolute multiplicative constant to” (i.e. A = O(B) implies c_abs≤ A/B ≤ C_abs, for some positive numbers c_abs, C_abs). Furthermore,defineχ_δ(v) := χ_1(v/δ), v ∈^M.Hence, ℱ_v →ζχ_δ(v) = δ^M ℱ_v →δζχ_1(v) for ζ∈, and calling c_2X = 1/(eMc_1X)^α we get|ℱ_v →ζχ_δ(v)|≤ δ^M c_0Xexp(δ H_B_2(ζ)-c_2Xδ^α |ζ|^α)·(χ_1),dv). Product estimate: for v ∈ B_2(^M), calling c_0X,l,c_1X,l (resp. c_0X,m,c_1X,m) the coefficients in (<ref>) for χ_l (resp. χ_m),|D^κχ_l(v)χ_m(v)|≤c_0X,lc_0X,mmax{c_1X,l,c_1X,m} (max{c_1X,l,c_1X,m})^|κ||κ|^|κ|/α.We start by linking γand the coefficient (1-α), since both quantities tend to zero.Assumption. We assume θ = 1/2. Next, from now on the symbol ∼ means “comparable up a multiplicative coefficient independent of γ or (1-α)to”. (i.e. A ∼ B implies c ≤ A/B ≤ C, with c,C independent ofγ or (1-α)). The multiplicative coefficient is in general a uniform geometric constant, in the sense specified at the beginning of the paper. We will call c_205 the resulting multiplicative coefficient for c_206.θ is the exponent appearing in the global stability of the unique continuation (seeTheorem <ref> of this paper and the following section <ref>), while 1/α is the order of the used Gevrey functions. 
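The localizers above can also be experimented with numerically. The sketch below is illustrative only; it assumes the standard compactly supported Gevrey-class mollifier χ(s) = exp(-s^{-α/(1-α)}) for s > 0 and χ(s) = 0 for s ≤ 0, which is our reading of the displayed formula, and all function names are ours.

```python
import numpy as np

def chi(s, alpha):
    """Assumed Gevrey-class-(1/alpha) cutoff: exp(-s^(-alpha/(1-alpha)))
    for s > 0 and 0 for s <= 0."""
    s = np.asarray(s, dtype=float)
    out = np.zeros_like(s)
    pos = s > 0
    out[pos] = np.exp(-s[pos] ** (-alpha / (1.0 - alpha)))
    return out

def chi1(t, alpha):
    """Bump chi_1(t) = chi(1+t) * chi(1-t), supported in [-1, 1]; a plateau
    modification would make it 1 on B_1 and 0 outside B_2, as in the text."""
    return chi(1.0 + t, alpha) * chi(1.0 - t, alpha)

def chi_delta(v, delta, alpha):
    """Rescaled localizer chi_delta(v) = chi_1(v / delta)."""
    return chi1(v / delta, alpha)

t = np.linspace(-1.5, 1.5, 7)
print(chi1(t, alpha=0.5))   # vanishes for |t| >= 1, positive inside
```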
According to <cit.> (end of page 6469), by construction these two values are related in the following way: α^N =θα^1/r^(n+1) =1/2⇒ (1- α) ∼ r^n+1α→ 1,where N=c_170∼γ^-58(n+1) is reported in the following Table 2 andr∼γ^58 is in Table 1.Consequently, for χ_1 and c_1X defined above, we getc_1X∼1/1-α∼1/γ^58(n+1), |χ_1'|_C^0(Ω_0)∼ c_1X,|χ_1”|_C^0(Ω_0)∼ c_1X^2. §.§.§ Tataru inequality and Table 1 We consider the wave operator in ^n+1,P(y,D) = -D_0^2+ ∑_j,k=1^n g^jk(x)D_j D_k +∑_j=1^n h^j(x) D_j + q(x),wherey=(t,x)∈×^n are the time-space variables, D_0 = -i ∂_t, D_j = -i ∂_x_j. Thecoefficients g^jk∈ C^1(^n) are real and independent of time, and[g^jk] is a symmetric positive-definite matrix. Thecoefficients h^j, q∈ C^0(^n) are complex valued and independent of time.Call ξ=(ξ_0,ξ̃) the Fourier dual variable of y=(t,x). In the next theorem we use the exponential pseudodifferential operatore^-ϵ |D_0|^2/2τ v= ℱ^-1_ξ_0 → te^-ϵξ_0^2/2τℱ_t'→ξ_0v, with ℱ and ℱ^-1 representing respectively the Fourier transform and its inverse.Let us also definef(y)= ∑_|υ|≤ 2 (∂^υϕ)(y_0)(y-y_0)^υ / υ! - σ |y-y_0|^2.In following theorem (called Theorem 2.1 in <cit.>) we recall the Carleman-type estimate by Tataru, named `Tararu inequality'.(<cit.>, Theorem 2.1; or <cit.>, Theorem 2.3.) Let Ω be an open subset of ×^n. Let P(y,D) be the wave operator (<ref>), with g^jk(x) ∈ C^1(Ω), h^j, q ∈ C^0(Ω). Let y_0 ∈Ω and ψ∈ C^2,ρ(Ω) be real valued, for some fixed ρ∈(0,1), such that ψ'(y_0)≠ 0 and S={y;ψ(y)=0} being anoriented hypersurface non-characteristic in y_0. Consequently there is λ>1 such that ϕ(y)=exp(λψ) is a conormally strongly pseudoconvexfunction with respect to P at y_0. Then there is a real valued quadratic polynomial f defined in (<ref>) with proper σ > 0, and a ball B_R_2(y_0) such that f(y) < ϕ(y) when y ∈ B_R_2-{y_0} and f(y_0)=ϕ(y_0); and f being a conormally strongly pseudoconvex function with respect to P in B_R_2. This implies that there exist ϵ_0, τ_0, c_1,T, c_2,T, R, such that, for each small enoughϵ<ϵ_0 and large enough τ > τ_0, we havee^-ϵ |D_0|^2/2τ e^τ f u_1,τ≤ c_1,T τ^-1/2e^-ϵ |D_0|^2/2τ e^τ f P(y,D)u_0 + c_2,T e^-τ R_2^2/4ϵe^τ f u_1,τ.Here u ∈ H^1_loc(Ω), with P(y,D)u ∈ L^2(Ω) and supp(u) ⊂ B_R(y_0). Assumption:We now consider the`hyperbolic function' ψ(t,x;T,z) = (T - d_g(x,z))^2 - t^2introduced in Definition 3.1 of <cit.>, and its level set ψ(y) - γ^2 = 0. Starting fromageneralψ, in section 3 of <cit.>, page 180, we have already calculated all the geometric constants associated either with the related pseudo-convexity estimates of ψ or with the Tataru inequality.They are summarized in Table 1, page 191 of <cit.>, and are copied in Table A.3. of <cit.> (with few modifications explained in the related Appendix A). Then in section A.1.1. of <cit.>, page 6487, we have recalculated these quantities for the particular case ofthe `hyperbolic function' ψ in (<ref>). Our aim here is to start from Table A.3. and section A.1.1. of <cit.>in order to find the γ-dependency of those coefficients.The following new Table 1 must be read from the top to the bottom, since it starts with the basic inequalities and continues with more complicated expressions.As said, we assume ψ as in (<ref>), and calculate all coefficients in the Tataru inequality. The first two values C_l =min|ψ'(y)| and p_1=min p(y, ψ') are defined at page 6484 of <cit.>,Section A.1., Assumption b). Their limit value is calculated in <cit.>, formula (A.7): i.e.C_l = 2 γ_I b_0^-1/2,p_1=4γ_I^2. 
Since γ_I=γ/√(2) (see Lemma A.3.a, page 6489), and b_0 is defined as a constant (see formula (3.1), page 6452), thenthe γ-dependency of the two coefficients is respectively γ and γ^2, as shown in the table.The third value of Table 1 is dist{∂Ω_0,Ω_a} (alias dist(Λ, ∂Ω_0)) and behaves like γ^2, thanks to the estimates (A.12) and (A.11) in <cit.>. On the other hand, the following coefficients until C_3 in Table 1are independent from γ, because of formula (A.6) and (A.8) in <cit.>.The nextvalues in Table 1 are obtained by substituting the upper values: M_P, M_1,..., R_1, defined in section 3.1 of <cit.>; c_T ∼λ^3 (replacing 4n|λψ|_max,Ω_0), see (A.2) and Remark A.1 in <cit.>; τ_0, c_1,T, c_2,T, c_133 defined in section 3.2 of <cit.>; r, δ, R, defined in section 3.3 of <cit.>, here we have renamed r_0 by r.These 3 coefficients are used to prove Proposition 2.5 of <cit.>, which is related to the result of local stability for the unique continuation. c_111 of <cit.> is not used here.Note that σ, r, δ, R, τ_0, R_1, R_2 have nothing to do with quantities with the same name used in the rest of this paper (outside from the Appendix).§.§.§ Global stability coefficients and Table 2This section can be seen as an overview of the proof of Theorem <ref>, with the final estimate for c_206.We introduce the main steps and we always follow the notation of <cit.> to better follow the calculations. Assumption: Define a net of center points (t_k,z_k) for the translated hyperbolic functions:ψ(y;T_k,z_k,t_k) = (T_k-d_g(x,z_k))^2-(t-t_k)^2.Let Υ=W(z,T,ℓ) be the initial cylinder(called Γ in (<ref>)) and let Σ(z,ℓ,T) be the related domain of influence (in the paper called Σ(z,0,T) according to (<ref>)). We choose the domains for the covering Ω_0,k⊂{y; y∈ [-T_k +t_k, T_k+ t_k ]×^n; ψ(y;T_k,z_k,t_k) ≥γ_k^2/2,T_k≥ d_g(x,z_k)}and Λ_k⊂{y; y∈ [-T_k +t_k, T_k+ t_k ]×^n; ψ(y;T_k,z_k,t_k) ≥γ_k^2,T_k≥ d_g(x,z_k)}.Let γ_k ≥γ, for all k. The construction is similar to the one in Figure 1, page 6470 of <cit.>. The parameters (t_k,z_k,T_k,γ_k) should be chosen such that the x-projection of Ω_0,k iscontainedin the domain 0 < d_g(z_k,x) ≤7/8i_0, that is within the injectivity radius i_0, in order to guarantee the C^2,ρ-regularity of ψ(y;T_k,z_k,t_k). Moreover the union Λ= ⋃_k=1^K Λ_k should cover a subset of the domain of influence Σ(z,ℓ,T).For example, let Λ = S(z,ℓ,T,γ), (alias (z,γ,T)in (<ref>)).The above construction together with the assumption on the Gevrey-regularity of the localizerslet us apply Theorem 1.2 in <cit.>. The details of Assumptions A2-A3 can be checked in the paper, while P is defined in (<ref>).(<cit.>, Theorem 1.2) Under the conditions of Assumptions A2-A3,define the open set Ω_1 = ⋃_k=1^K Ω_0,k\Υ containing Λ.Then for every 0<θ < 1 we haveu_L^2(Λ)≤ c_161u_H^1(Ω_1)/(ln(1+u_H^1(Ω_1)/Pu_L^2(Ω_1)))^θ.Moreover, for any m ∈ (0,1] we getu_H^1-m(Λ)≤ c_161^m u_H^1(Ω_1)/(ln(1 + u_H^1(Ω_1)/Pu_L^2(Ω_1)))^m θ .The constant c_161 is calculated in the proof.Up to a uniform multiplicative constant (and according to Remark 3.8. of <cit.>), we can identify the constant c_161 with our final constant c_206,even if the first one is defined for a bounded domain of the Euclidean space and the second one is defined for a compact manifold (M,g). 
Indeed by assumption, in each chart of M holds the inequality a_0 I≤ [g_jk(x)]_j,k=1^n≤ b_0I,andg_jk_C^4(M)≤ b_3, a_0<1<b_0,which let one approximate all spatial subdomains to an Euclidean ball.Theorem 1.2 is a generalization of Theorem 1.1 in <cit.> for a more complex domain, but with a similar final estimate where the inverse-log term has a different multiplicative constant. For eachΩ_0,k Theorem 1.1 in <cit.> holds with constant c_160 in place of c_161. The number K of the used setsΩ_0,k is by construction proportional to the number of charts covering the domain. This number depends on the bounds for the diameter, the injectivity radius and the harmonic radiusof M, called respectively D, i_0 andr^(har) in the notation of the paper. Hence we can also write c_206∼ c_161∼ c_160.The technique used to prove the above Theorem consists in iterating the local stability result, but considering the low temporal frequencies separately from the high temporal frequencies. The pseudodifferential operator A(D_0/ω) defined below is used to localize the low temporal frequencies of the solution u, where the estimate is more complicated.Assumption: We consider a pseudo-differential operator A(D_0)with symbol a(ξ_0) ∈ G^1/α_0(), 0≤ a≤ 1, supported in |ξ_0|≤ 2 and equal to one in |ξ_0|≤ 1. Hence we can write A(β |D_0|/ω)v = ℱ^-1_ξ_0 → ta(β |ξ_0|/ω)ℱ_t'→ξ_0v.We fix a as in (<ref>).Another complication comes from the fact that the local stability result holds just in small balls B_r(y_j), centered in y_j with radius r. It is important for our estimates that the balls B_r(y_j), with j=1,…, N,cover the set Λ.We will choose the center points y_j in the set E, so that the union of the balls is contained in the domain of influence of the cylinder Υ, i.e. ⋃_k=1^N B_r(y_j) ⊆⋃_k=1^K Ω_0,k⊂Σ(z,ℓ,T).Furthermore there are particularconditions on the support of u to be fulfilled,also affecting the set E and the iteration. Hence in the final domain ⋃_k=1^K Ω_0,k the local stability result must be applied several times to a sequence {u_j}_j=2^ N of proper cut-offs of the solution u. Let u_j be defined as:u_j = ∏_k=1^j-1(1-b_k)u,b_k:=b(2(y-y_k)/r). Then we can introduce the following Theorem 2.7 in <cit.>, formulating a local stability estimate (of the unique continuation for the wave operator) of inverse exponential type for the low temporal frequencies of u_j. The exact construction of the radii r and R is in Proposition 2.5 of <cit.>, as intersection of several geometric and analytic constraints. The γ-dependency of r and R is shown in Table 1.In particular we get r ∼γ^58. The number of balls used in the iteration is N=c_170∼γ^-58(n+1), as shown in Table 2. The constant c_170 is defined in formula (2.5) of <cit.>. Notice that at each step we reduce the support of the temporal localizer A(D_0), by defining the term μ_j = c_156μ_j-1^α.We will show thatc_155,N∼γ^- ζ_4 c_155,1∼γ^- ζ_5,and that c_161∼ N γ^- ζ_6c_156^-α/(1-α), for proper positive numbers ζ_4, ζ_5, ζ_6. The details of Assumptions A1-A2-A4 and of the set E can be checked in the paper.(<cit.>, Theorem 2.7) Under the Assumptions A1-A2-A4, let y_k ∈ Eand let b∈ G^1/α_0(^n+1) be a Gevrey functions of class 1/α with compact support, such that 0<α<1. 
Then, there exist constants R,r with R ≥ 2r >0, and c_159>1such thatfor μ > c_159 there are coefficients c_151, c_152, c_154,c_155,c_156, β, N for which,ifu_H^1(Ω_1)= 1,Pu_L^2(Ω_1)< 1,A(D_0/βμ)l(y)Pu_L^2≤exp(-μ^α),then callingμ_1=μ and μ_j = c_156μ_j-1^α for 2 ≤ j≤ N, we have μ_j ≥ 1 andu_j_H^1(B_2R(y_j))≤ c_152,Pu_j_L^2(B_2R(y_j))≤ c_153,A(D_0/μ_j)b((y-y_j)/R)Pu_j_0 ≤ c_154,jexp(-μ_j^α),and consequently A(D_0/ω)b((y-y_j)/r)u_j_H^1≤ c_155,jexp(-c_132μ_j^α^2),∀ ω≤μ_j^α/(3c_131).The radii r and R are defined in Table A.3, while the coefficients c_k are calculated in the proof of the Theorem. In the following Table 2 we show the γ dependency of the coefficients used in the proof of Theorem 1.2 and Theorem 2.7 in <cit.>.The coefficients c_h of the local stability are defined in <cit.> and recalled also in the proof of Lemma 2.6. of <cit.>, page 6459. As said, the index h is unique and here we briefly remind the definition of c_h and the relationship with other coefficients and with Table 1. In (<ref>) we obtained the γ dependency forc_1X, i.e.c_1X∼γ^-58(n+1). It follows that c_2X∼ 1/(c_1X)^α, where c_2X is the coefficient in (<ref>) (it was called c_102 in <cit.>).Therefore for simplicity we give belowthe values in Table 2 in terms of their c_1X orγ dependency. In order to calculate the rest we need to refine some estimates.First of all we recall and improve the coefficients in Lemma 2.1, <cit.>, for the L^2 and H^m norms:c_107=c_3 (8/β_1Γ(1/α)1/α (c_117)^1/α)^1/21/(α c_106)^1/α,c_108=c_107(1+|D_x^m f|_C^0) + c_107(1+m)^(m+1)/α/(α c_106)^m/α A(β_1 D_0/μ)f (1-A(D_0/μ))v_1 ≤ c_108 e^-c_106μ^αv_(f)_m .Next, following Remark 2.8 (4) in <cit.>, we split each smooth Gevrey localizer in time and space:b(y-y_0/R)=b(t-t_0/R)b(x-x_0/R),with b(t) = χ_1(t) ∈ G^1/α_0() (as in (<ref>)) and b(x) ∈ C^2_0(^n). Consequently the functions f_1(y),f_2(y),f_3(y) (see formula (2.21) in <cit.>) can generally be written as: f_*(y)=f_*(t)f_*(x), with f_*(t) = D_0^2 b_j-1(t)+D_0 b_j-1(t)+b_j-1(t) and f_*(x) = D_rD_s b_j-1(x)+D_r b_j-1(x)+b_j-1(x), for b_j-1(t) := b(2(t-t_j-1)/r). Let v = b((y-y_j-1)/r)u_j-1, thenA(3D_0/ν)f_*(t)(1-A(D_0/ν))f_*(x)v_1≤A(3D_0/ν)(D_0 f_*(t))(1-A(D_0/ν))f_*(x)v_0+ A(3D_0/ν)f_*(t)(1-A(D_0/ν))(D_0+D_x+1)(f_*(x)v)_0≤ c_108 c_152exp(-c_106ν^α)with c_108 calculated as in (<ref>) with β_1=3, m=3, c_3=(r/2) c_0X. Moreover, we canrecalculate the terms at page 6466 of <cit.>:c_162,j =2 c_162,j-1+ c_153c_164+c_155,j-1 |-P_2 b_j-1 + h^s(x)D_x_sb_j-1|_C^0 +c_107 c_152(1+ n^2|g^kr|_C^0+ |h^s|_C^0) + c_155,j-1 |2D_0 b_j-1|_C^1 + c_152 c_108 + c_155,j-1|D_0(2D_0 b_j-1)|_C^0+ c_152c_107 +c_155,j-1 |2n g^krD_k b_j-1|_C^1+c_152 c_108 n^2|g^kr|_C^1 + c_155,j-1 |D_r(2g^kr D_k b_j-1)|_C^0+c_107 c_152n^2|g^kr|_C^1c_162,j ∼2 c_162,j-1 + ( N^2 c_1X^2/r^2)c_1X^3/2/r^1/2 + c_155,j-1(1+ |g^kr|_C^1+|h^s|_C^0)(|b'|_0/r+|b”|_0/r^2+|b'|^2_0/r^2)+ + ( N c_1X/r) c_108 (1+ |g^kr|_C^1+|h^s|_C^0)∼c_162,j-1 + c_155,j-1c_1X^2/r^2c_154,j = c_162,j + c_153c̃_107∼ c_162,j + N^2 c_1X^3/r^2R^n∼ c_162,j∼c_155,j-1c_1X^2/r^2c_116 ∼ γ^4 c_154,j^2 (N c_1X/γ^48)^4.By applying Lemma 2.6 in <cit.> with c_U=c_152, c_P=c_153, c_A=c_154,j, one obtains:c_155,j=c_150(c_152,c_153,c_154,j) ∼ c_1X^3√(c_116)/γ^48∼N^2 c_1X^5/γ^46+58· 2c_155,j-1,c_156 =min( 1/18β c_131,c_132^1/α,c_165^1/α, c_106^1/α/3c_131)= c_106^1/α/3c_131∼γ^56α + 58(n+1)(α+1)+28.Now we can obtain the γ dependency of c_160 in Theorem 1.1 of <cit.>.Recalling (<ref>) (i.e. 
α^N = θ = 1/2 and (1-α) ∼γ^58(n+1)),we get c_159 = c_156^-1/α^N-1(1-α) > 1 and therefore:c_159∼(1/γ^56α + 58(n+1)(α+1)+28)^1/2γ^58(n+1)=exp(-[56α + 58(n+1)(α+1)+28]/2γ^58(n+1)ln (γ)),c_158 =N c_155,N+ 3N c_131 c_152(1+ |b'|_C^0/r)c_156^-α/(1-α)∼ N c_131 c_152c_159^1/2,c_160 = (ln (1+ e^c_159))^1/2+2^1/2 c_158∼ c_158.Hence, c_160 of Theorem 1.1 in <cit.> (and analogously c_161 in Th. 1.2.) fulfills the estimatec_160≤exp(1/γ^c_200), c_200 = 58(n+1) + 2. We know that c_206∼ c_160.We denote byc_205 the uniform multiplicative constant that depends on the uniform geometric parametersT,i_0,D,r_0,R,n, named according to the notation of the rest of the paper. The number c_205 also depends on θ, that for simplicity has been fixed here equal to 1/2.The above inequality gives an estimate for c_160∼ c_206, and thus wecan conclude that c_206(,θ)=c_205 ( θ)exp(γ^-c_200). Remark. Please notice that there was a misprint in the paper <cit.> both in the statements of Theorem 3.3. and in Corollary 3.9.However this misprint did not affect the calculations of the present paper (or the results in <cit.>). Namely in Theorem 3.3., we have the following erratum (in the denominator of the final inequality):u_L^2(Λ)≤c_163u_H^1((Ω_1)/(ln(e + u_H^1((Ω_1)/f_L^2(Ω_1)))^θ.And this is the corresponding corrigendum (replace e with 1):u_L^2(Λ)≤c_163u_H^1((Ω_1)/(ln(1 + u_H^1((Ω_1)/f_L^2(Ω_1)))^θ.In Corollary 3.9., we have erratum in (3.27), and corrigendum:w_L^2(Ω_2\ W_1)≤c_166w_H^1((Ω_1\ W_0)/(ln(1 + w_H^1((Ω_1\ W_0)/C' w |_W_1_H^1(W_1)))^θ.Table 1 and Table 2. We next present the two tables that summerize the previous calculations. They show the γ dependency of the parameters. The name of the constants there is unique.The order of the parameters in Table 1 is always increasing in complexity, that is the parameters down may depend on the upper ones. In general the same principle is followed also in Table 2, even if the relationships are more complex. 
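To see what the final estimate means quantitatively, one may tabulate the growth of c_206 = c_205 exp(γ^{-c_200}) with c_200 = 58(n+1) + 2 as γ decreases. The following sketch is our own illustration (it works with logarithms, since the value of c_206 itself overflows immediately); it makes tangible why the stability estimate is of log-log type.

```python
import math

def c200(n):
    # exponent from the estimate c_160 <= exp(1 / gamma^c_200)
    return 58 * (n + 1) + 2

def log_c206(gamma, n, log_c205=0.0):
    """Natural log of c_206 = c_205 * exp(gamma^(-c_200))."""
    return log_c205 + gamma ** (-c200(n))

for gamma in (0.9, 0.8, 0.7):
    # grows double-exponentially as gamma -> 0
    print(gamma, log_c206(gamma, n=2))
```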
For simplicitythe values in Table 2 are expressed in terms of their c_1X orγ dependency, where we recall thatc_1X∼γ^-58(n+1).Table 1NameOrder with respect to γ C_l ∼ γ (<cit.>, formula (A.7)) p_1 ∼ γ^2 (<cit.>, formula (A.7)) dist{∂Ω_0,Ω_a} ∼ γ^2 (<cit.>, formula (A.12)) |ψ'|_C^k ∼ 1 (<cit.>, formula (A.8)) d_g(x,z) ∈ [ℓ, T-γ] (in Γ\ cylinder) |∂_k d_g| ∼ 1 (<cit.>, formula (A.6)) C_3 ≥ 1 M_P ≤1 M_1 ≥ 1/(p_1)^2 = 1/γ^4 M_2 ≥ M_1= 1/γ^4 λ ≥ max{M_1,e,1/C_l^2}= 1/γ^4 ϕ_0 ≥ e^-1 ϕ_M ≤ e R_1 ≤ min{1, γ^2, 1/λ}=γ^4 c_T ∼ λ^3=1/γ^12(<cit.>, formula (A.2)and Remark A.1) c_100 ≥ 1 ϵ_0 ≤ 1/(λ (1 + λ )+ c_T)=1/λ^3=γ^12 R_2 ≤ min{R_1,C_l/ (1 + λ + c_T/λ ), λ^2C_l^2/c_T,(1/c_T^2M_1(1+λ^2))^1/4, ϵ_0/√(2M_2), λ/c_T (1+λ^2+λ^2 (1 + λ) )},=min{γ^4, γ^9,γ^6, γ^9, γ^14, γ^20}=γ^20 σ ≥ c_T R_2=γ^8(<cit.>, formula (A.2) and Remark A.1) τ_0 ≥ M_1( (λ^2 +c_T R_2)^2+ |h|^2_C^0(Ω_0)(1+(λ+ c_T R_2^2)^2)+ |q|^2_C^0(Ω_0)) = 1/γ^20 R ≤ R_2 = γ^20 δ ≤ c_T R_2^3=γ^48 r ≤ λ^2 C_l^2 R_2^3/( λ+c_T R_2^2 ) = γ^58 c_1,T ≥ √(( M_1/τ_0 + 1/λ))=γ^2 c_2,T ≥ √(M_2)(1+|χ_1'|_C^0(Ω_0)/τ_0 R) + c_1,T/√(τ_0) c_133=1/γ^2+1/γ^8(|χ_1”|_C^0(Ω_0)+|χ_1'|_C^0(Ω_0)/γ^4 ) ∼c_1X^2/γ^8 c_133 ≥ |χ_1”|_C^0(Ω_0)/τ_0 R^2 + |χ_1'|_C^0(Ω_0)/R(1+λ+ c_T R_2^2+|h|_L^∞(Ω_0)/τ_0) =1/γ^20(|χ_1”|_C^0(Ω_0)+|χ_1'|_C^0(Ω_0)/γ^4 )Table 2 Name Value Name Valuec_2X =c_102=1/(e c_1X)^α c_119 δ c_1X∼γ^48c_1X c_118 1 + |ϕ'|_0(1+R_2) +5n|ϕ”|_0,ρR_2^ρ+1 + |ϕ”|_0(1+R_2^2) +σ(2+R_2^2)∼1/γ^8 c_114 c_1,T^2 |g|^2_C^1|χ_1|^2_C^2(1+|φ'|^4_C^0/δ^4+|φ”|^2_C^0/δ^2)∼c_1X^4/γ^12+48· 4 c_115 c_2,T^2(|φ'|^2_C^0 +1)(3^3 e^-3/δ^3)(1+|χ_1'|^2_C^0/δ^2) ∼c_1X^6/γ^8· 3+48· 5 c_121 c_1X/δ c_122 c_1X^2/γ^44 c_123 ∼γ^56 ·α/c_1X^α c_128 1/3^α2c_123∼ c_123 c_110 c_122(8Γ(1/α)/3[α c_123^1/α(α c_128)^1/α])^1/2∼ c_1X^3/γ^44+56 c_109 min(√(ϵ δ/36), c_128/2, 1)∼γ^56 ·α/c_1X^α c_130 3 c_109/4 δ(1/16 )^5∼γ^56 ·α - 48/c_1X^α c_131 (16^6√(2), 16^6 3^α-1√(2ϵ_0 δ)/c_123, 16^6 √(ϵ_0 δ)/3√(2))∼c_1X^α/γ^56·α - 30 c_135 r^α c_2X1/4 · 3^α∼γ^58·α/c_1X^α c_137 min(1/2(c_102δ^α(c_130)^α/(√(2))^α +δc_130/2 √(2)), 1/2 c_102δ^α(1/2√(2)c_130)^α)∼γ^48·α/c_1X^αc_130^α c_132 min(c_135,c_137) ∼γ^56 ·α·α/c_1X^α·α1/c_1X^α c_170 N ∼1/γ^58(n+1) c_117 (r/2)^α1/(ec_1X)^α∼r^α/c_1X^α c̃_117 c_2X R^α = (e c_1X)^-αR^α β 2 + (4/c̃_117)^1/α∼c_1X/R (2.11) c̃_106 1/β^α∼R^α/c_1X^α c̃_107 R^n+1 c_0X(8/βΓ(1/α)1/α (c̃_117)^1/α)^1/21/(αc̃_106)^1/α∼ R^nc_1X c_154,1 1 + c̃_107∼ R^nc_1X c_155,1 max(c_134, c_136) = max(c_1X^2.5γ^58(n-3/2) , c_1X^6/γ^180)=c_1X^6/γ^180 c_153 1+2N(1+ n^2|g^kr|_C^0 + |h^s|_C^0)(|b'|_C^0/r + |b”|_C^0/r^2+ (N-1)|b'|^2_C^0/r^2) ∼N^2 c_1X^2/r^2 c_152 2(1+ N |b'|_C^0/r) ∼Nc_1X/r c_162,1 1 c_156 ∼c_106^1/α/3c_131∼γ^56α + 58(n+1)(α+1)+28 c_165 c_117β^α/(3^α 4) ∼r^α/R^α∼γ^38α c_164 r/2 c_0X(8/3Γ(1/α)ec_1X/α^1/α (r/2))^1/2e c_1X (3^α 4)^1/α/(α^1/α (r/2))∼c_1X^3/2/r^1/2 c_107 c_164∼c_1X^3/2/r^1/2 c_108 (c_107 + c_1074^4/α/(α c_106)^3/α) (1+|b'|_0/r+|b”|_0/r^2+|b”'|_0/r^3) (1 + |b'|_0/r) ∼c_1X^17/2/r^15/21 Al Alessandrini G.Stable determination of conductivity by boundary measurements, Appl. Anal.,27 (1988), 153–172. AlS Alessandrini G., Sylvester J.Stability for a multidimensional inverse spectral theorem. Comm. Part. Diff. Eq.15 (1990),711–736.And Anderson M.Convergence and rigidity of manifolds under Ricci curvature bounds, Invent. Math.,102 (1990), 429-445.AKKLT Anderson M., Katsuda A., Kurylev Y., Lassas M., Taylor M. Boundary regularity for the Ricci equation, Geometric Convergence, and Gel'fand's Inverse Boundary Problem,Invent. Math. 
158 (2004), 261-321.Be Belishev, M.An approach to multidimensional inverse problems for the wave equation. (Russian) Dokl. Akad. Nauk SSSR,297 (1987), 524–527BeKu2Belishev, M., Kurylev, Y. A nonstationary inverse problem for the multidimensional wave equation "in the large".(Russian) Zap. Nauchn. Sem.LOMI,165 (1987), 21–30.BeKu Belishev M., Kurylev Y.To the reconstruction of a Riemannian manifold via its spectral data (BC-method), Comm. Part. Diff. Eq.,17 (1992), 767-804.BeBeG Berard P., Besson G., Gallot S.Embedding Riemannian manifolds by their heat kernel, Geom. Funct. Anal.,4 (1994), 373-398.BL Bergh J., Löfström, J.Interpolation spaces. An introduction. Springer-Verlag, 1976,pp. x+207,Bl1 Blagoveščenskii, A. A one-dimensional inverse boundary value problem for a second order hyperbolic equation. (Russian) Zap. Nauchn. Sem. LOMI, 15 (1969), 85–90.B Bosi R., Kurylev Y.,Lassas M.Stability of the unique continuation for the wave operator viaTataru inequality: the local case, Journal d'Analyse Mathematique, Vol. 134 (2018), 157 – 199.BKL Bosi R., Kurylev Y., Lassas M.Stability of the unique continuation for the wave operator via Tataru inequality and applications, J. Differential Equations, 260, 8, (2016), 6451-6492.BuBuI Burago D., Burago Y. and Ivanov S.A Course in Metric Geometry. AMS, Providence (2001).ChavelChavel I. Riemannian geometry. A Modern Introduction, 2nd ed, 2006.Ch Cheeger J., Finiteness theorems for Riemannian manifolds.Am. J. Math. 92 (1970), 61–75.DaviesE. Davies,Spectral Properties of Compact Manifolds and Changes of Metric.American Journal of Mathematics 112(1990),15-39.dH1 de Hoop M.,Holman, S., Iversen, E., Lassas, M., Ursin B.Recovering the isometry type of a Riemannian manifold from local boundary diffraction travel times. J.Math. Pures et Appl.103 (2015), 830-848 dH2 de Hoop M.,Holman, S., Iversen, E., Lassas, M., Ursin B.Recovery of a conformally Euclidean metric from local boundary diffraction travel times. SIAM Journal on Mathematical Analysis46 (2014), 3705-3726DKSaU Dos Santos Ferreira D.,Kenig C., Salo M., and Uhlmann G. Limiting Carleman weights and anisotropic inverse problems, Invent. Math. 178 (2009), 119–171.FIKLN Fefferman C., Ivanov S., Kurylev Y., Lassas M., Naranayan H.Reconstruction and interpolation of manifolds I: The geometric Whitney problem. Preprint arXiv:1508.00674Ge Gelfand I.Some aspects of functionalanalysis and algebra. 1957 Proceed. the Intern. Congr. Mathem., Amsterdam, 1954,1, 253–276.GW Greene, R., Wu, H.Lipschitz convergence of Riemannian manifolds. Pacific J. Math.131 (1988),119–141.Gr Gromov M. with appendices by Katz M., Pansu P. and Semmes S., Metric Structures for Riemannian and Non-Riemanian Spaces, based on `Structures metriques pour les varietes riemanniennes', (LaFontaine J. and Pansu P. eds), Birkhauser (1999).GS Guillarmou C., Sa Barreto A. Inverse problems for Einstein manifolds, Inverse Probl. Imaging 3 (2009),1–15.Ivanov Ivanov, S.Distance difference representations of Riemannian manifolds. arXiv:1806.05257Helin Helin, T., Lassas T., Oksanen L., Saksala, T. Correlation based passive imaging with a white noise source.J.Math. Pures et Appl.116 2018,132–160.H1Hörmander L. The analysis of linear partial differential operators I.Springer-Verlag, 1985, viii+525 pp.KasKum Kasue A., Kumura H.Spectral convergence of Riemannian manifolds, Tohoku Math. J.,34446 (1994), 147-179.KasKum2 Kasue A., Kumura H./it Spectral convergence of Riemannian manifolds. II. Tohoku Math. 
J.48 (1996), 71-120.KaKuLa Katchalov A., Kurylev Y., Lassas M. Inverse Boundary Spectral Problems, Chapman/CRC, Boca Raton (2001).Ka Kato T. Perturbation Theory for Linear Operators, Springer, Berlin (1995).Katsuda Katsuda, A. Gromov's convergence theorem and its application. Nagoya Math. J.100 (1985), 11–48.KatsudaKuLa Katsuda A., Kurylev Y., Lassas M.Stability of boundary distance representation and reconstruction of Riemannian manifolds.Inverse Problems and Imaging 1 (2007), 135–157.Ku1 Kurylev Y. Multidimensional Gel'fand inverse boundary problem and boundary distance map. In: Inv. Probl. Related to Geom. (H. Soga, ed.), 1-15, Ibaraki Univ.Press, Mito, 1997.KLY Kurylev Y., Lassas M., Yamaguchi T. Uniqueness and Stability in Inverse Spectral Problems for Collapsing Manifolds. Preprint arXiv:1209.5875.KrKL Krupchyk K., Kurylev Y., Lassas M.Inverse spectral problems on a closed manifold. J. Math. Pures Appl.90 (2008), 42–59.LaULassasM.,Uhlmann G.Determining Riemannian manifold from boundary measurements.Ann. Sci. École Norm. Sup. 34 (2001), 771–787.LaLe Laurent C., Leautard M.Quantitative unique continuation for operators with partially analytic coefficients. Application to approximate control for waves. (2015), arXiv:1506.04254. To appear in JEMS.MandacheMandache N.Exponential instability in an inverse problem for the Schrodinger equation. Inverse Problems,17 (2001):1435.NSU Nachman, A. Sylvester, J., Uhlmann, G. An n-dimensional Borg-Levinson theorem.Comm. Math. Phys.115 (1988), 595–605Nv1 Novikov, R. A multidimensionalinverse spectral problemfor the equation-Δψ +(v(x)-Eu(x))ψ=0.(Russian) Funk. Anal. i Prilozhen.22 (1988), 11–22. ONeill O'Neill, B.Semi-Riemannian geometry. Pure and Applied Mathematics, 103. Academic Press,1983. xiii+468 pp.Pe Petersen P. Riemannian Geometry, 1st Ed., Springer, New York (1998).Robbiano Robbiano, L.Fonction de cout et controle des solutions des equations hyperboliques.Asymptotic Anal.10 (1995),95–115. RDRodino L. Linear Partial Differential Operators in Gevrey Spaces, World Scientific, (1993).StU Stefanov, P., Uhlmann, G.Stability estimates for the hyperbolic Dirichlet to Neumannmap in anisotropic media. J. Funct. Anal.154 (1998), 330–358.SU Sylvester J., Uhlmann G.A global uniqueness theorem for an inverse boundary value problem.Ann. of Math. (2)125 (1987), 153–169.Ta Tataru D.Unique continuation for solutions to PDE's: between Hormander's theorem and Holmgren's theorem, Comm. Part. Diff. Eq.,20 (1995), 855-884.Ta1 Tataru D.Carleman estimates, unique continuation and applications, preprint.
1Department of Physics, The University of Tokyo, Tokyo, 113-0033, Japan 2Department of Physics, Tokyo Metropolitan University, Tokyo 192-4397, Japan 3Department of Astrophysical Sciences, Princeton University, Princeton, NJ 08544, USA 4Department of Earth and Planetary Science, The University of Tokyo, Tokyo 113-0033, Japan 5Research Center for the Early Universe, School of Science, The University of Tokyo, Tokyo 113-0033, Japan 6NASA Sagan Fellow aizawa@utap.phys.s.u-tokyo.ac.jp

Detection of a planetary ring of exoplanets remains one of the most attractive but challenging goals in the field. We present a methodology for a systematic search for exoplanetary rings via transit photometry of long-period planets. The methodology relies on a precise integration scheme that we develop to compute the transit light curve of a ringed planet. We apply the methodology to 89 long-period planet candidates from the Kepler data so as to estimate, and/or set upper limits on, the parameters of possible rings. While a majority of our samples do not have a sufficiently good signal-to-noise ratio for meaningful constraints on ring parameters, we find that six systems with a higher signal-to-noise ratio are inconsistent with the presence of a ring larger than 1.5 times the planetary radius, assuming a grazing orbit and a tilted ring. Furthermore, we identify five preliminary candidate systems whose light curves exhibit ring-like features. After removing four false positives due to contamination from nearby stars, we identify KIC 10403228 as a reasonable candidate for a ringed planet. A systematic parameter fit of its light curve with a ringed-planet model indicates two possible solutions corresponding to a Saturn-like planet with a tilted ring. There also remain two other possible scenarios accounting for the data: a circumstellar disk and a hierarchical triple. Due to these large uncertainties, we cannot choose one specific model among the three.

§ INTRODUCTION

As is the case in the Solar System, moons and planetary rings are believed to exist in exoplanetary systems as well. Their detection, however, has not yet been successful, and remains one of the most attractive, albeit challenging, goals in exoplanetary sciences.
A notable exception is the system of giant circumplanetary rings around J1407b <cit.>, but the inferred ring radius of ∼ 1 AU implies that it is very different from the Saturnian rings that we focus on in the present paper. In addition to the obvious importance of the ring discovery itself, its detection offers an interesting method to determine the direction of the planetary spin, because the ring axis is supposed to be aligned with the planetary spin, as in the case of Saturn. Thus the detection of ring parameters yields a fairly complete set of the dynamical architecture of transiting planetary systems: the stellar spin via asteroseismology <cit.> and gravity darkening <cit.>, the planetary orbit via transit photometry and the Rossiter-McLaughlin effect <cit.>, and the planetary spin through the ring detection as discussed here. The direct detection of planetary spin is very difficult, and so far only four possible signals related to planetary spins have been reported: the periodic flux variations of 2M1207b <cit.>, and a rotational broadening and/or distortion of the line profile of β Pictoris b <cit.>, HD 189733b <cit.>, and GQ Lupi b <cit.>. These interesting planets are very young and have a sufficiently high temperature (>1600 K) for their spin to be detected. In contrast, the same technique is not easily applicable to mature and cold planets like Saturn. Thus the detection of the ring axis provides a complementary methodology to determine the spin of more typical planets with low temperature.

Since the total mass of planetary rings is small, they do not exhibit any observable signature on the dynamics of the system. Instead, high-precision photometry and spectroscopy offer a promising approach towards their detection, and observations of reflected light and transits are especially useful for this purpose. Possible signatures of planetary rings in reflected light include higher brightness, a characteristic phase function, distinctive spectral variations, temporary extinction of the planet, and a discrepancy between the reflection and thermal radiation intensities <cit.>. For instance, <cit.> attempted to explain the line broadening of the reflected light of 51 Peg b with a ringed-planet model, and concluded that it is not due to a ring, because their solution requires a non-coplanar configuration, which would be unlikely for short-period planets. Searches for rings through the reflected light of the host star can be made even for non-transiting planets, although their signals are typically small. Therefore, while we focus on transit photometry in the rest of this paper, the reflected-light method is indeed useful and complementary as well.

<cit.> was the first to propose transit photometry as a tool for ring detection. <cit.> derived an upper limit on the radius of a possible ring around HD 209458 b. <cit.> improved the model of <cit.> by incorporating the influence of diffraction on the light curves, and claimed that a Saturn-like ring system can be detected with the photometric precision of the Kepler mission. <cit.> pointed out that the combination of transit photometry and the spectroscopic Rossiter-McLaughlin effect increases the detection efficiency and the credibility of the signal. <cit.> proposed that an anomalously large planetary radius inferred from transit photometry can be used to select candidates for ringed planets. They also proposed that the anomalous stellar density estimated from the transit may be used as a probe of a ring.
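The density probe mentioned last relies on the standard relation ρ_⋆ = (3π/GP^2)(a/R_⋆)^3 between the stellar density and the transit observables: a ring biases the inferred transit depth and duration, hence the fitted a/R_⋆ and the derived density. A minimal numerical sketch of this relation (our own illustration, not code from any of the cited papers):

```python
import math

G = 6.674e-11  # gravitational constant [m^3 kg^-1 s^-2]

def stellar_density_from_transit(period_s, a_over_rstar):
    """Stellar density implied by transit observables,
    rho = (3 pi / (G P^2)) * (a / R_star)^3  [kg m^-3]."""
    return 3.0 * math.pi / (G * period_s ** 2) * a_over_rstar ** 3

# Earth-Sun-like check: P = 1 yr, a/R_star ~ 215 recovers the solar density
rho = stellar_density_from_transit(3.156e7, 215.0)
print(rho)  # ~1.4e3 kg m^-3; a significant mismatch with an independent
            # density estimate can flag a ringed-planet candidate
```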
In addition to the above methodology papers, a systematic search for ring systems using real data was conducted by <cit.>. They analyzed 21 short-period planets (P ≤ 50 days) in the Kepler photometric data, and found no appreciable signatures of rings around those systems. This is an interesting attempt, but their null detection is not surprising because the ring tends to be unstable as the planet gets closer to the central star. In addition, <cit.> demonstrated that it is hard to detect a ring at orbital distances below 0.1 AU in the case of solar-like stars. Instead, we attempt here a systematic search for rings around long-period planet candidates that exhibit single or a few transit-like signals in the Kepler photometric data. Since rings around those planets, if they exist, should be dynamically stable, even a null detection would eventually put an interesting constraint on the formation efficiency and properties of icy rings in those planetary environments.

The purposes of the paper are three-fold: to establish a methodology for the discovery of potential ringed planets, to apply the methodology to a catalog of long-period planet candidates from Kepler, and to detect and/or constrain possible ringed planets. Section 2 presents our simple model of a ringed planet and describes the expected transit signal. In Section 3, we explain how to select target objects for our search, and classify them into four groups according to the amplitude and nature of the signal-to-noise ratio of their light curves relative to the expected signature of possible ringed planets. In Section 4, we place upper limits on ring parameters for seven systems with a good signal-to-noise ratio. In Section 5, we select five tentative ringed-planet candidates from the high-signal candidates classified in Section 3. While four out of the five are likely to be false positives, one system, KIC 10403228, passes all the selection criteria that we impose. Therefore we attempt a systematic parameter survey for the possible ring around KIC 10403228 in Section 6, where we also examine and discuss various other possibilities that may explain the observed ring-like anomaly. The final section is devoted to the conclusion and future prospects.

§ A SIMPLE MODEL FOR A RINGED PLANET

§.§ Basic parameters that characterize a ringed planet system

Our simple model of a ringed planet adopted in this paper basically follows <cit.>. The ring is circular, and has a constant optical depth τ everywhere between the inner and outer radii R_ in and R_ out. We denote the radii of the star and planet by R_⋆ and R_ p. The configuration of the planet and ring during a transit is illustrated in Figure <ref>. The X-axis is approximately aligned with the projected orbit of the planet on the stellar disk, and the Z-axis is towards the observer. This completes the (X,Y,Z) coordinate frame centered on the ringed planet (left panel in Figure <ref>). The normal vector of the ring plane is characterized by the two angles θ and ϕ in spherical coordinates (right panel in Figure <ref>).
We also set up another coordinate system (x,y,z) centered on the star, in such a way that the major and minor axes of the projected ring are parallel to the x- and y-axes, respectively, with the z-axis towards the observer. The ring is assumed to move along the planetary orbit with constant obliquity angles (θ, ϕ), and the planet is assumed to move on a Keplerian orbit around the star. The left panel in Figure <ref> illustrates the transit of the ringed planet, whose impact parameter is b. We assume a thin uniform ring with a constant optical depth τ for light incident from the direction normal to the ring plane. Thus the fraction of the background stellar light transmitted through the inclined ring is given by exp(-τ (sinθcosϕ)^-1), and we define the shading parameter T as 1-exp(-τ (sinθcosϕ)^-1). In our simple ring model, the value of T, instead of τ, fully specifies the effective optical transparency of the ring.

In summary, our simple ring model is characterized by five parameters: four (R_ in, R_ out, θ, ϕ) specify the geometry of the ring, and the fifth is the shading parameter T. Instead of R_ in and R_ out, we use dimensionless parameters in the fitting: r_ in/p≡ R_ in/R_ p, r_ out/in≡ R_ out/R_ in.

§.§ Transit signal of a ringed planet

The stellar intensity profile I(x,y) under the assumption of the quadratic limb-darkening law is expressed in terms of two parameters u_1 and u_2:
I(x,y)/I_0 = [ 1 - u_1( 1-μ) - u_2 ( 1-μ )^2 ], μ≡√(1-(x^2+y^2)/R_⋆^2),
where I_0 is the intensity at the center of the star. The physical conditions on the profile require the following constraints on u_1 and u_2: u_1 + u_2 < 1, u_1 > 0, u_1 + 2 u_2 > 0. In this paper, we adopt q_1 = (u_1 + u_2)^2 and q_2 = u_1/(2(u_1+u_2)) instead of (u_1, u_2), following <cit.>. Then, Equations (<ref>) and (<ref>) are rewritten as
I(x,y)/I_0 = [ 1 - 2 q_2√(q_1) ( 1-μ) - √(q_1)(1-2q_2) ( 1-μ )^2 ],
with 0 < q_1 < 1, 0 < q_2 < 1. In this parametrization, q_1 and q_2 vary independently between 0 and 1, which is useful in finding best-fit parameters <cit.>. For reference, the Sun has q_1 = 0.49 and q_2 = 0.34 (u_1 = 0.47 and u_2 = 0.23) <cit.>.

Let D(x,y,t) be the blocked fraction of light coming from the location (x,y) on the stellar disk. Due to the motion of the planet during a transit, D(x,y,t) is time-dependent and given as
D(x,y,t) = 1 : if (x,y) is within the planetary disk,
T : if (x,y) is within the ring disk but outside the planetary disk,
0 : otherwise.
Then the normalized flux from the system is given by F(t) = 1 - ∫_stellar disk I(x,y) D(x,y,t) dx dy / I_ all, where the second term is the fraction of light blocked by the transiting ringed planet, and the total flux is
I_ all = ∫_stellar disk I(x,y) dx dy = π I_0 R^2_⋆ [ 1 - 2√(q_1) q_2/3 - √(q_1)(1-2q_2)/6 ].
We develop a reliable numerical integration scheme that solves for the boundary lines of D(x,y,t), as described in Appendix A. Our method achieves a numerical error of less than 10^-7 in relative flux, which is much smaller than the typical noise of the Kepler photometric data.

§.§ Effects that are neglected in our model

We briefly comment on three effects that we neglect in the analysis below: finite binning during the exposure time, planetary precession, and forward scattering by the ring. While all of them are negligible for a Saturn-like ringed planet with a long period, they may become important in other situations.
For a precise comparison of our light-curve predictions against the Kepler long-cadence data, we may have to take account of the finite exposure time (29.4 min) properly. In fact, the binning effect is shown to bias the transit parameter estimates in the case of short-period planets <cit.>. For the long-period planets that we focus on here, however, the transit duration is sufficiently longer than the exposure time, and the binning effect is not important. In the case of the transit of Saturn in front of the Sun, for instance, the fractional difference in relative flux between models with and without the binning effect is typically of order 10^-5. This value is an order of magnitude smaller than the expected noise in the Kepler photometric data. Thus we can safely neglect the binning effect in the present analysis.

The precession of the planetary spin would generate observable seasonal effects on the transit shape of a ringed planet <cit.>. Since our current target systems are extracted from those with a single transit, however, we can ignore this effect; the period of the precession is proportional to the square of the orbital period, and thus the precession effect during a transit is entirely negligible. Nevertheless, we note here that this could be an interesting probe of the dynamics of short-period ringed planetary systems that exhibit multiple transits.

In the present analysis, we consider only the blocking of light by the ring during its transit. In reality, forward scattering (diffraction by the ring particles) may increase the flux of the background light. Let us consider light travelling from the star to the observer through ring particles of diameter d. First, light is emitted from the disk of the star and arrives at the ring particles. The angular radius of the star viewed from the ring particles is about R_⋆/a, where a is the semi-major axis of the orbit and R_⋆ is the stellar radius. Next, the light is diffracted by the ring particles, and the extent of the diffraction is described by the phase function <cit.>; the rough diffraction angle can be estimated from the first zero of the phase function, θ ≃ 0.61 λ/d, where λ is the wavelength of the light. The effect of the diffraction becomes significant when the viewing angle R_⋆/a is comparable to the diffraction angle. Let us define the critical particle size d_crit by equating R_⋆/a with θ = 0.61 λ/d:

d_crit = 0.61 a λ/R_⋆ = 0.63 mm ((a/R_⋆)/2060)(λ/500 nm).

<cit.> discussed the effect of diffraction using d_crit. When d ≥ 10 d_crit, the diffraction angle is small, and light just behind the ring particles is diffracted towards the observer. In this case, the diffraction does not affect the direction of the light, and we may express the extinction due to absorption with the single parameter T. When d ≤ d_crit/10, the diffraction angle is large, and the ring particles diffract light into a wide range of directions. Then the amount of light that reaches the observer significantly decreases, and we may again model the extinction in terms of T. In both cases, d ≥ 10 d_crit and d ≤ d_crit/10, the extinction can be modeled with the single parameter T. In the case of Saturn, with the typical particle diameter d = 1 cm, for instance, d_crit ≃ 0.63 mm from Equation (<ref>) satisfies d > 10 d_crit, so our model can be used to calculate the light curves of Saturn as observed from far outside the Solar System.
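As a concrete illustration of Equation (<ref>), the following minimal sketch (Python; the function names are ours) evaluates d_crit and the corresponding scattering regime:

def d_crit_mm(a_over_rstar, wavelength_nm=500.0):
    # Equation (<ref>): d_crit = 0.61 a lambda / R_star, in millimetres
    return 0.63 * (a_over_rstar / 2060.0) * (wavelength_nm / 500.0)

def scattering_regime(d_mm, a_over_rstar, wavelength_nm=500.0):
    # Regimes discussed in the text
    dc = d_crit_mm(a_over_rstar, wavelength_nm)
    if d_mm >= 10.0 * dc:
        return "geometric: extinction modelled by T alone"
    if d_mm <= dc / 10.0:
        return "strong diffraction: extinction still modelled by T"
    return "intermediate: forward scattering matters"

# Saturn-like case from the text: d = 1 cm = 10 mm, a/R_star = 2060
print(scattering_regime(10.0, 2060.0))  # -> geometric regime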
We should note that when the typical size of the particles satisfies d_crit/10 ≤ d ≤ 10 d_crit, the forward scattering induces a rise in the light curve before the ingress and after the egress, and this effect can become the key to distinguishing the signatures of rings from other physical signals. Incorporating the diffraction into the model, however, requires intensive computation, and this is beyond the scope of this paper.

§ CLASSIFICATION

In what follows, we present our methodology to search for planetary rings in the real data. Figure <ref> shows the flow chart of the analysis procedure and its application. The methods in each step of the chart are described along with the results of the analysis in the following sections. In this section, we first choose target objects found in the Kepler field. Then, we classify them into four categories depending on the observed anomalies in the light curves. The details of the classification procedure may be found in Appendices B and C.

§.§ Target Selection

The Kepler mission monitored more than 150,000 stars over four years, and identified about 8,000 planet candidates as Kepler Objects of Interest (KOIs). In this paper, we focus on long-period planet candidates because icy ring particles as observed around Saturn are supposed to survive only at locations far from the host star. Considering that the temperature at the snow line is 170 K <cit.>, we choose 37 KOIs whose equilibrium temperatures are less than 200 K. In addition, we select planet candidates reported by recent transit surveys: 41 candidates from a search by <cit.> and 28 candidates from <cit.>. In Table <ref>, the numbers of planetary candidates in the three groups are listed along with the numbers of transits.

We exclude several systems that are not suited for our search. For KOI-5574.01 in the KOIs and KIC 2158850 in <cit.>, we cannot find the transit signal among the noisy light curves. For KOI-959.01 in the KOIs with P = 10 days and KIC 8540376 in <cit.> with P = 31.8 days, we cannot neglect the binning effect due to the short transit duration. After removing these systems, 89 planet candidates are left in total for our search. Table <ref> summarizes the number of targets, and Figure <ref> shows the overlap among the KOIs, <cit.>, and <cit.>.

§.§ Classification of target objects

Inevitably, the signature of a possible ring around a planet is very tiny. Long-period planet candidates exhibit a small number of transits (Table <ref>), and the precision of the transit light curves is not improved much by folding the multiple events. Therefore the search for a possible ring signature crucially relies on the quality of the few transit light curves of the individual systems. According to the automated procedures described in Appendices B and C, we classify the long-period planet candidates into the following four categories.

(A) insufficient S/N to constrain ring parameters: Since the anomalous feature due to the ring is very subtle, one cannot constrain the ring parameters at all if the intrinsic light-curve variation of the host is too large to be explained by any ring model. Thus we exclude those systems that exhibit a noisy light curve out of transit. The exclusion criteria depend on the adopted ring model to some extent, but are determined largely by the threshold signal-to-noise ratio (S/N), which we set as S/N = 10.
For definiteness, we consider 4 different ring models (Table <ref>); the details of the procedure are described in Appendices B and C.

(B) sufficient S/N and no significant anomaly: A fraction of the systems has a sufficiently good S/N and exhibits no significant anomaly. In such a case, we can put physically meaningful constraints on the possible ring parameters (Section 4).

(C) too large an anomaly for a ringed planet: In contrast to (B), some systems exhibit a large anomaly in the transit light curve that exceeds the prediction of the adopted ring models. Nevertheless, different ring models may be able to explain the anomaly, and we still continue to search for ringed planets in this category (Section 5).

(D) reasonable anomaly for a ringed planet: Finally, a small number of systems with a good S/N indeed exhibit a possible signature that could be explained by the ring model. We perform additional analysis to test the validity of the ring hypothesis in a more quantitative fashion (Sections 5 and 6).

The above classification is done on the basis of the observed anomalies, which are derived by fitting a planet model to the light curves. The data are taken from the Mikulski Archive for Space Telescopes (MAST), and we use the Simple Aperture Photometry (SAP) data taken in the long-cadence mode (29.4 min). For simplicity, we use only the first transit in the light curve of each candidate in deriving the observed anomaly. After fitting the planet model to the data, the long-period planet candidates are automatically classified into the above categories (A)∼(D). Table <ref> summarizes the results of the classification for the four models. In a later section, we use the classification according to model I, which contains more candidates in categories (B)∼(D) than the other three. In fact, the choice of model I is partly reasonable because distant planets potentially have tilted rings like Saturn, owing to the weak tidal force. As candidates in (A) have insufficient S/N for further analysis, we do not consider them in the following analysis. In Section 4, we obtain upper limits on R_out/R_p for candidates in (B). In Section 5, we first search for ringed planets in categories (C) and (D) by visual inspection, and later examine the reliability of the transits more quantitatively. In Section 6, we interpret the possible ringed-planet candidate.

§ UPPER LIMITS OF R_OUT/R_P FOR CANDIDATES IN (B)

Upper limits on R_out/R_p are given for candidates in (B) as a result of the classification. Figure <ref> shows the light curves and fitted curves of eight candidates classified into (B) in model I. They show no appreciable anomalies in the residuals relative to the single-planet model. For these candidates, we could have detected the ring signature if it existed. Thus, in turn, we can derive upper limits on R_out/R_p. This is done by simply comparing the expected anomaly in model I and the observed anomaly in the light curve. The details of the method to place upper limits on R_out/R_p are described in Appendices B and C, and the results are summarized in Table <ref>.

§ SEARCH FOR RINGED PLANETS

In this section, we search for ringed planets in categories (C) and (D), extract tentative ringed-planet candidates, and examine whether or not the transits are false positives.

§.§ Tentative selection of possible ringed planets

Figures <ref> and <ref> show the light curves of candidates in categories (C) and (D), respectively.
Candidates in (C), where the observed anomaly exceeds the prediction of model I, may still be consistent with ringed planets in different configurations. Thus, we search for ringed planets not only in (D) but also in (C). We extract ringed-planet candidates by visual inspection of their light curves on the basis of the following properties expected for ringed planets:

* The duration of ingress and/or egress is long.
* The transit shape is asymmetric due to non-zero ϕ.

As a result, we identify five systems, KOI-771 (D), KOI-1032 (C), KOI-1192 (D), KOI-3145 (D), and KIC 10403228 (D), as tentative ringed-planet candidates. For the other four candidates in (D), which show no visible ring-like feature in their light curves, we obtain upper limits on R_out/R_p with the same method as in the previous section (Table <ref>). In total, we obtain upper limits on R_out/R_p for 12 candidates, and six of them have R_out/R_p ≤ 1.5. For the six candidates in (C) with no ring-like features, we cannot set upper limits on the ring parameters, and we conclude that the signals are not due to rings but to transient stellar activity.

§.§ Elimination of false positives

We examine the reliability of the transit signals for the five preliminary candidates. As a result, we find that four are false positives, while KIC 10403228 passes all the criteria. More specifically, we regard a target as a false positive if one of the following criteria is satisfied <cit.>.

Criterion 1: The target object exhibits a significant secondary eclipse, which is expected for an eclipsing binary.
- Results: None of our candidates exhibits a secondary eclipse.

Criterion 2: The signal originates from other nearby stars or from instrumental noise.
- Results: Inspecting the Target Pixel Files, we found that the dips in the light curves of KOI-1032.01, KOI-1192.01, and KOI-3145 do not come from the target stars. Figure <ref> shows the example of KOI-1192.01. The Community Follow-up Observing Program (CFOP) classifies KOI-1032.01 as a false positive <cit.>. <cit.> and <cit.> also indicate that KOI-1192.01 and KOI-3145 are false positives. Moreover, we find that the transit depths in the light curves of KOI-771.01 differ in many pixels, and the contamination from non-target stars is very strong. <cit.> also pointed out that this system is a false positive. For KIC 10403228, the transit depths differ in only two pixels, while they are constant in the other pixels, so we conclude that the signal originates from the target star. A more detailed discussion of KIC 10403228 is presented in a later section.

Criterion 3: The transit occurs simultaneously at different stars in different pixels. This indicates that the signal does not originate from the target but from instrumental noise.
- Results: The transit events of KOI-1032.01 and KOI-1192.01 occur at the same time. This result is consistent with that of Criterion 2.

Criterion 4: The shape of the light curve is inconsistent with that of a transiting object.
- Results: From Figures <ref> and <ref>, all signals are well fitted by transit-like features.

KIC 10403228 is the only system that passes all the criteria. Thus, we move on to the detailed pixel-based analysis next.
§.§ Detailed pixel analysis of KIC 10403228

KIC 10403228 is considered to be an M dwarf and has a nearby star separated by about 3 arcsec <cit.>. According to the data taken by the United Kingdom Infrared Telescope (UKIRT), the nearby star is located at (RA, Dec) = (19h24m54.25s, +47°32'57.5") and its J-band flux is about 1/5 that of KIC 10403228. Here we examine the possibility that the transit is associated with this nearby star rather than with KIC 10403228. Figure <ref> shows the light curve and the fractional depth of the transit event in each of the pixels around KIC 10403228. The small transit depths in pixels A and B suggest that the source of the transit is not the nearby star (shown by a red filled star symbol), because otherwise the transit depths should be larger in those pixels close to the nearby star. To confirm this in a more quantitative way, we also calculate the centroid offset using the pixel-level light curves. As a result, we find that the flux centroid moves towards the nearby star during the transit, and that the displacement is comparable to the value expected from the observed transit depth (5%) and the J-band flux ratio (5:1). The variation of the transit depth and the centroid displacement consistently indicate that the transit is not due to the nearby star. While we might be able to evaluate the contamination of the light curve by this nearby star more quantitatively, it would not change our conclusion in any case, and we do not perform such a detailed analysis, for simplicity.

We note that the transit signal contains a clear short-period modulation (panel B in Figure <ref>). Since the modulation is not visible in panel C, it is most likely due to the nearby star. Actually, there is another, long-period modulation with P ≃ 35 days in the light curve, which may come from the target star. If these periods are related to the stellar spins, the nearby star is a fast rotator, and the target star is a slow rotator. Thus, we may ignore the effect of gravity darkening of the target star.

§ DETAILED ANALYSIS OF A POSSIBLE RINGED PLANET KIC 10403228

For the further study of KIC 10403228, we present and discuss three possible models accounting for the data: the “planetary ring scenario", the “circumstellar disk scenario", and the “hierarchical triple scenario". We also discuss possibilities other than these three models.

§.§ Interpretation with a ringed planet

We fit various models with and without a ring to the light curve of KIC 10403228 by minimizing the value of χ^2 defined in Equation (<ref>). In practice, we use a ±3.09-day time window to trim 300 data points centered around T_0 = 744.773 day (BKJD [= BJD - 2454833 day]). To remove the long-term flux variations in the light curve, we adopt the model in Equation (<ref>), which is composed of a fourth-order polynomial and the transit model F(t) in Equation (<ref>). The standard deviation σ is estimated to be 9.17 × 10^-4 from the out-of-transit data. This value is about 1.3 times larger than the error recorded in the SAP data.

As the transit of KIC 10403228 is observed just once, we cannot infer the orbital period from the timing of the transits. However, we can infer it from Kepler's law. The depth and V-shape of the observed transit imply that the transiting object is relatively large and grazing.
Thus, we approximate the total transit duration T_tra as

T_tra ≃ P (2R_⋆/(2πa)) (√(1-e^2)/(1 + e sinω)),

where the last factor is a correction term due to the eccentricity e, with ω being the argument of periapse. From Kepler's law and Equation (<ref>), one obtains

P ≃ 450 years (ρ_⋆/12.6 g cm^-3)(T_tra/2 days)^3 ((1 + e sinω)/√(1-e^2))^3.

We obtain P ≃ 450 years if we adopt e = 0, T_tra = 2 days for the transit duration of KIC 10403228, and the stellar density ρ_⋆ = 12.6 ± 6.0 g cm^-3 from <cit.>. The stellar density in <cit.> is adopted from <cit.>, who estimated the stellar properties by comparing the observed colors taken from 2MASS and SDSS with the Dartmouth model <cit.>.

Before fitting, we simply examine how often we would expect to see a transit of a planet with P ≃ 450 years. Assuming that all the stars host planets with P ≃ 450 years, the expected number of transit detections is given by

n_tra = 0.045 (N_target/150,000) ((t_obs,dur/P)/(4 years/450 years)) ((R_⋆/a)/(1/25,000)),

where a/R_⋆ = 25,000 is the fiducial value estimated from Equation (<ref>), t_obs,dur is the observational duration, and N_target is the number of target stars. The adopted values of t_obs,dur and N_target are the typical values for Kepler. The frequency of planets with P = 450 years should be less than 1, so we may regard n_tra as an optimistic upper limit on the expected value. The value n_tra = 0.045 is small, but not too unlikely. Apart from the tiny ring-like feature, the overall shape of the signal is clearly due to a transit event, and it is very difficult to explain the feature by stellar activity.

We would like to comment on the reliability of P ≃ 450 years. The key parameters are ρ_⋆ and the eccentricity in Equation (<ref>). For example, if the host is a giant star rather than an M dwarf, the density and the period become smaller. In this sense, to specify the correct stellar density, we would need a follow-up observation. Moreover, the eccentricity can also change the period estimated from Equation (<ref>). If e = 0.6, the period can change by a factor of (1/8.0)–8.0, and if e = 0.9, the factor of change is within (1/82.82)–82.82 (or 5 years < P < 34,000 years). Thus, a planet with a relatively short period and a large eccentricity can also explain the data. Although the period is uncertain, a different period does not change the fitting results, so we adopt P = 450 years as the fiducial value for the time being.

For fitting, we adopt P = 450 years, and q_1 and q_2 from the official Kepler catalog. In summary, there are nine free parameters, t_0, R_p/R_⋆, b, a/R_⋆, and c_i (i = 0–4), for the model without the ring, and five additional parameters, θ, ϕ, r_in/p, r_out/in, and T, for the model with the ring. We set the initial values of c_i (i = 0–4) to those obtained from a polynomial fit to the out-of-transit data.

First, we fit the planet-alone model to the data. The blue line in Figure <ref> is the best-fit model without the ring. The best-fit parameters are listed in Table <ref>. The residuals of the fit clearly show some systematic features, and the planet-alone model fails to fully explain the light curve, in particular around 745.8 day (BKJD) in Figure <ref>. Therefore, we attempt to interpret the data with the ringed-planet model. After trying many initial values for the fit, we finally find two solutions, which give at least local minima of χ^2 in Equation (<ref>). Figure <ref> shows these two solutions as the red and green lines.
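As a sanity check of the period estimate adopted in these fits, the scaling of Equation (<ref>) is straightforward to evaluate; a minimal sketch (Python with numpy; the function name is ours):

import numpy as np

def period_years(rho_star, T_tra_days, e=0.0, sin_omega=0.0):
    # Equation (<ref>): P ~ 450 yr (rho_star/12.6 g cm^-3)
    # (T_tra/2 d)^3 ((1 + e sin omega)/sqrt(1 - e^2))^3
    ecc_factor = (1.0 + e * sin_omega) / np.sqrt(1.0 - e**2)
    return 450.0 * (rho_star / 12.6) * (T_tra_days / 2.0)**3 * ecc_factor**3

print(period_years(12.6, 2.0))                          # 450 yr (fiducial)
print(period_years(12.6, 2.0, e=0.9, sin_omega=-1.0))   # ~5 yr (factor 1/82.8)
print(period_years(12.6, 2.0, e=0.9, sin_omega=+1.0))   # ~450 x 82.8 yr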
The best-fit parameters are shown in Table <ref>. The geometrical configurations of both solutions are shown in Figure <ref>. Clearly, the models with the ring significantly improve the fit. In Table <ref>, the values of R_p, R_in, and R_out are calculated on the assumption of R_⋆ = 0.33 ± 0.05 R_⊙ <cit.>. It turns out that the resulting ratios of ring and planet radii are similar to those of Saturn: R_in ≃ 1.5 R_p and R_out ≃ 2.0 R_p. We comment on the implications of the fitted model for KIC 10403228 in the following.

The radiative equilibrium temperature of the ring particles is given by

T_eq ≃ 15.1 K (25,000/(a/R_⋆))^0.5 (T_⋆/3386 K) ((1-A)/(1-0.5))^0.25,

where we fiducially adopt a Bond albedo A = 0.5 for the ring particles. The stellar effective temperature T_⋆ = 3386 K of KIC 10403228 is taken from <cit.>. Since the equilibrium temperature expected from the model is much lower than the temperature of 170 K at the snow line <cit.>, icy particles around the planet can survive against the radiation of the host star.

The best-fit value of θ = 59.4° for solution 1 implies a ring significantly tilted with respect to the orbital plane, and θ = 12.3° for solution 2 implies a slightly tilted ring. We examine the stability of those tilted rings on the basis of a simple tidal theory. Under the assumption that the ring axis is aligned with the planetary spin, the damping timescale of the ring axis is equal to the timescale on which the orbital and equatorial planes of the planet become coplanar. This timescale is given by tidal theory <cit.>:

τ_tidal = (P_orb Q/(9π k_2)) (ρ_p/ρ_⋆) (a/R_⋆)^3 ≃ G P_orb^3 Q ρ_p/(27 π^2 k_2) = 6.94 × 10^16 yr (P_orb/450 years)^3 ((2.3 × 10^-4)/(k_2/Q)) (ρ_p/0.70 g cm^-3),

where P_orb is the orbital period, Q is the dissipation factor, and k_2 is the second Love number. If we adopt Saturn's k_2/Q = 2.3 × 10^-4 <cit.> and ρ_p = 0.70 g cm^-3 <cit.>, the damping timescale is sufficiently long. Thus, the best-fit configurations are consistent with the spin-damping theory, even under the assumption that the equatorial plane of the planet is coplanar with the ring plane. Hence, the tilted rings of our best fits also imply a non-vanishing obliquity of the planet.

The ringed-planet model is consistent with the data. However, the V-shape of the transit (Figure <ref>) is also a typical feature of eclipsing binaries, and the estimated period of ∼450 years may be too long for the transit to be detected within the four years of Kepler observations (Equation (<ref>)). Therefore, we discuss other scenarios without a planetary ring. For this purpose, in the following, we present two possible hypotheses that can also explain the data: a binary with a circumstellar disk and a hierarchical triple.

§.§ Interpretation with a circumstellar disk

In this section, we pursue the possibility that the current transit is caused by an eclipsing binary with a circumstellar disk rather than by a planetary ring. Actually, the fitting result of the previous section is also applicable to this binary scenario, so we may compare the plausibilities of the eclipsing-binary and planet scenarios to test the circumstellar disk model. For this specific purpose, we use the public code VESPA (Validation of Exoplanet Signals using a Probabilistic Algorithm) <cit.>. With VESPA, we compare the likelihoods of the following four scenarios, adopting a variety of different periods: “HEBs" (Hierarchical Eclipsing Binaries), “EBs" (Eclipsing Binaries), “BEBs" (Background Eclipsing Binaries), and “Planets" (Transiting Planets).
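As a brief aside before specifying the VESPA inputs, the damping-timescale scaling of Equation (<ref>) in Section 6.1 can be evaluated with a one-line function (Python; a sketch under the fiducial Saturn-like values, with our own function name):

def tau_tidal_years(P_orb_years, k2_over_Q=2.3e-4, rho_p=0.70):
    # Equation (<ref>): tau ~ 6.94e16 yr (P_orb/450 yr)^3
    # (2.3e-4/(k2/Q)) (rho_p/0.70 g cm^-3)
    return (6.94e16 * (P_orb_years / 450.0)**3
            * (2.3e-4 / k2_over_Q) * (rho_p / 0.70))

print(f"{tau_tidal_years(450.0):.1e} yr")  # ~6.9e16 yr, vastly exceeding the
                                           # age of the universe, so a tilted
                                           # ring is not damped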
We adopt the JHK magnitudes from 2MASS (J = 13.429 ± 0.028, H = 12.793 ± 0.03, and K = 12.518 ± 0.027), (RA, Dec) = (19h24m54.413s, +47°32'57.5"), maxrad = 3.0 arcsec (the angular radius of the simulated region), Kepmag = 16.064, and R_p/R_⋆ = 0.3. In reality, these observed colors might be contaminated by the nearby star discussed in Section 5.3, but we assume that the contamination is sufficiently small in the present analysis. Given these inputs, VESPA calculates the stellar populations and the probability distribution of the transit shape parameters for the above four scenarios. For our adopted set of input parameters, VESPA identifies the primary star as an M dwarf, consistent with the classification of <cit.>. We repeat the simulation ten times with different initial random numbers, according to the prescription of VESPA.

Figure <ref> shows the relative probability of each scenario for different assumed periods. We define the relative probability as the product of the “prior" and “likelihood" computed by VESPA, multiplied by 1000 days/P. The last factor, 1000 days/P, corrects for the probability that a long-period transit is observed within a given observing duration much shorter than the orbital period, which is not taken into account in the “prior" of VESPA. The plot shows the medians and the standard deviations of the probabilities computed from the 10 sets of simulations. While the binary scenarios are more likely than the Planets scenario for the shortest and the longest periods investigated here, the Planets scenario is the most preferred in the intermediate region (10 years ≲ P ≲ 100 years). The result suggests that the planetary interpretation of the light curve is not so unlikely compared to the binary scenario, although there is a fair probability that this is a false positive. Another important implication of Figure <ref> is that the likelihood of orbital periods in the Planets scenario is much broader than what we intuitively expected, and is not sharply peaked around 450 years.

While Figure <ref> represents our final result from VESPA, we point out two additional factors that may be of importance for more detailed arguments. First, the period distribution and the overall fraction of long-period planets and binaries have not been taken into account. The occurrence rate of giant planets around M dwarfs is given by <cit.>. They estimated the frequency of planets with 10^2 M_⊕ < M_p < 10^3 M_⊕ to be 0.039^+0.042_-0.025 for 10^3 days < P < 10^4 days and 0.013^+0.025_-0.010 for 10^4 days < P < 10^5 days. On the other hand, <cit.> estimated the multiplicity distribution of binaries with separations of 3–227 AU and found an overall occurrence rate of 0.27 ± 0.03, peaking around 10 AU. These results imply that planets around M dwarfs are rarer than stellar companions by one or two orders of magnitude. This difference in the overall frequency may further increase the relative plausibility of the EBs scenario compared to the Planets scenario. Second, what also matters in reality is the frequency of planetary rings and circumstellar disks that produce the observed anomaly in addition to the transit signal. It is, however, far beyond our current knowledge to estimate these factors rigorously.
Given these difficulties, follow-up spectroscopy or high-resolution imaging would be a more feasible way to distinguish between the EBs and Planets scenarios.

§.§ Interpretation with a hierarchical triple

An eclipse due to a close binary (rather than a single star/planet) on a wide orbit around the primary M star is yet another possibility to explain the asymmetric and long transit-like signal observed for KIC 10403228. This is because the orbital motion of the occulting binary can produce an acceleration that modifies the in-eclipse velocity of the occulting object(s) relative to the primary. To test this possibility, we consider a hierarchical triple system consisting of a short-period binary (the “inner" binary) orbiting around and eclipsing the primary M star on a wide orbit (the “outer" binary). In the following, we only take into account the luminosity of the occulted star and ignore the flux from the smaller binary. We also assume that the orbits of both the inner and outer binaries are Keplerian, and use the subscripts “in" and “out" to denote their parameters. The mass ratio of the inner binary is fixed to 1 for simplicity.

In this model, the motion of the two components of the inner binary is specified by t_0,in (the inferior conjunction of the inner binary), P_in, a_out/a_in, i_in, and Ω_in (the longitude of the ascending node relative to that of the outer binary), in addition to the parameters of a single-planet model (now with the subscripts “out"). We fit all of these parameters, fixing the stellar density ρ_⋆ = 13.0 g cm^-3, q_1 = 0.6737, q_2 = 0.767, the time offset T_0 = 744.773 days in Equation (<ref>), and the same baseline as obtained in Table <ref> (solution 2). The mass ratio of the outer binary is related to P_in, P_out, and a_out/a_in as

q ≡ M_in/M_out = 1/((a_out/a_in)^3 (P_in/P_out)^2 - 1),

where M_in is the total mass of the inner binary, and M_out is the mass of the primary star.

Figure <ref> shows one of the best-fitting models, with P_out = 1396.615 days, a_out/a_in = 54.53, Ω_in = -0.00945, P_in = 10.96 days, b_out = 2.024, cos i_in = 0.103, t_0,out = 1.965 × 10^-3 days, and t_0,in = 1.965 × 10^-5 days. In this solution, we find q = 0.0956, which leads to M_in = 30 M_J. We also obtain χ^2/dof = 379.3/292, which is comparable to the ringed-planet model. In this solution, the observed ∼2-day duration is reproduced even though the value of P_out is much shorter than required in the planetary scenario. This is possible because the orbital motion of the inner binary cancels out the high orbital velocity of the outer binary. In addition to this particular solution, we find various other solutions with similar χ^2 values for a wide range of P_out. In general, solutions with longer P_out are found to correspond to smaller q; for example, we find q ≃ 0.02 for P_out ≃ 30 years, and q ≃ 0.003 for P_out = 300 years. For M_⋆ = 0.3 M_⊙, these mass ratios q = (0.1, 0.02, 0.003) translate into M_in = (30 M_J, 6 M_J, M_J). Thus, in this scenario, the system can be composed of three low-mass stars or of a star with a binary planet. The advantage of this scenario is that the observed long transit can be reproduced with a much smaller P_out than in the ringed-planet model, which leads to a far higher transit/eclipse probability. On the other hand, it is also true that the parameters need to be finely tuned to cancel the two orbital motions.
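The mass-ratio relation above is easy to apply to the quoted solutions; a minimal sketch (Python; the function name is ours), noting that the rounded parameters printed here give q ≈ 0.1 rather than the unrounded best-fit value q = 0.0956:

def mass_ratio(a_out_over_a_in, P_in_days, P_out_days):
    # Equation (<ref>): q = M_in/M_out
    #   = 1 / ((a_out/a_in)^3 (P_in/P_out)^2 - 1)
    return 1.0 / (a_out_over_a_in**3 * (P_in_days / P_out_days)**2 - 1.0)

q = mass_ratio(54.53, 10.96, 1396.615)  # ~0.11 with the rounded parameters
M_in_jup = q * 0.3 * 1047.6             # assuming M_out = 0.3 M_sun;
                                        # a few tens of M_J, consistent with
                                        # M_in ~ 30 M_J quoted in the text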
While the degree of the required fine tuning is crucial in comparing the evidence for this hypothesis with that for the planetary or stellar ring models, the evaluation of this factor is not trivial given the large parameter space. In addition, there still remain uncertainties in the frequency of the hypothetical hierarchical triple (three low-mass stars, or a star with a binary planet). Given these complexities, it is difficult to conclude whether or not this scenario is favored compared to the above two. Again, follow-up observations would be effective for further study.

§.§ Possibilities other than a ringed object and a hierarchical triple

So far, we have presented the three leading scenarios: the “planetary ring scenario", the “circumstellar disk scenario", and the “hierarchical triple scenario". There still remain other possibilities that may potentially account for the light curve of KIC 10403228. In this section, we examine these possibilities and show that they are unlikely to explain the data. Throughout this section, we basically assume that the transit is caused by a planet, but the results are also applicable to a stellar eclipse.

§.§.§ Oblate planet

A significant oblateness of a single planet may mimic a ring-like anomaly during a transit. Indeed, our model reduces to an oblate planet if we set R_p = 0, R_in = 0, and T = 1.0 with an appropriate choice of θ and ϕ. We fit this oblate-planet model to the light curve, and obtain a best fit with χ^2/dof = 492.4/288. This value is much larger than the best-fit value χ^2/dof = 349.1/286 for the model with a ring. Furthermore, the best-fit oblate-planet model requires a projected ellipticity of f = (a-b)/a = 0.79, where a is the major axis and b is the minor axis. This solution is an unstable configuration; a rotating object will break up due to the centrifugal force when a ≥ 1.5b (Equation (2.14) in <cit.>). Thus, we conclude that the oblateness of the planet is unlikely to explain the observed anomaly.

§.§.§ Additional transit due to an exomoon

In Section 6.3, we only considered an additional motion of the occulting object due to an accompanying object. However, a transit of the accompanying object (e.g., an exomoon) itself is yet another possibility for the peculiar light curve of KIC 10403228. As shown below, this possibility is ruled out by the shape of the anomaly. As shown in Figure <ref>, the anomaly in the light curve is significant only in the latter half of the transit. Motivated by this fact, we fit the light curve using the planet-alone model, masking the latter half of the transit and adopting the same baseline as obtained in Table <ref> (solution 2); the difference between this model and the observed light curve would then represent the anomalous contributions from anything other than the main transiting planet. The result in Figure <ref> clearly shows that the anomaly consists of a short rise in the flux followed by a more significant dip. Such a feature is clearly inconsistent with the transit of an exomoon.

§.§.§ Anomalies specific to in-transit data

There exist anomalies specific to in-transit data: spot crossing and gravity darkening. If the planet crosses spots on the stellar surface, the light curve is deformed <cit.>. In general, however, spots are dark, so spot crossing causes a bump in the light curve.
The observed anomaly in the bottom panel of Figure <ref> is inconsistent with a single bump, so a spot is unlikely to cause the anomaly. Gravity darkening makes the light curve asymmetric <cit.>. In Section 5.3, we identified the target star as a slow rotator, so gravity darkening is negligible. In conclusion, these mechanisms are unlikely to explain the ring-like signal in the light curve.

§.§.§ Stellar noise

The ring-like structure in the light curve shows up only for a short duration. Thus, short-term stellar noise might mimic the ring-like anomaly just by chance. To discuss this possibility, we investigate the statistical properties of the stellar activity of KIC 10403228. Specifically, we consider how frequently one encounters stellar noise comparable to the anomalous in-transit residuals. As will be shown, we find it difficult to reproduce the feature with the stellar activity of KIC 10403228. In principle, we could check whether a similar feature arises in stars other than KIC 10403228 more generally, but that is a separate question and does not tell us whether the signal in this particular star is due to stellar activity. Therefore we analyze the light curve of KIC 10403228 alone in this section.

To focus on the short-term noise, we remove the long-term variations by dividing the light curve into short segments and fitting each of them with polynomials. The specific procedure is as follows. We exclude the in-transit data as well as data around gaps in the light curve. From the remaining data, we pick up a 6.18-day-long segment of the light curve centered around a randomly chosen time, and fit it with a quartic polynomial to remove the variation within the segment. In principle, one could use different functions (e.g., a spline function) or a different time window for the detrending, but in any case the final results are insensitive to these choices. For consistency, we adopt the same baseline and time window as those used in Section 6.1. We iterate the “picking up a segment" and “detrending" procedures 1000 times and obtain 1000 segments of detrended light curves, whose centers are randomly distributed over the whole observing duration. We note that the total number of points in the detrended segments is 1000 × 300 = 3.0 × 10^5, which is sufficiently large to sample all the original data points (N = 10,000). By averaging the 1000 detrended light curves at each time, we obtain one light curve. This averaging operation suppresses the dependence on the choice of the central time of each segment. Figure <ref> shows the resulting detrended light curve (bottom) along with the light curve before detrending (top).

Now we move on to the comparison of the statistical properties of the stellar activity and the residuals of the fit in Figure <ref>. Let us define F_data(t) as the flux ratio of the detrended light curve with respect to the mean. To investigate the short-term correlation of the stellar activity, we divide the light curve into continuously brightening events (F_data(t) > 1) and fading events (F_data(t) < 1). Then, we compute the duration and amplitude (the average of the deviation from the mean, |F_data(t) - 1|) of each event. For comparison, we also calculate the duration and average relative flux of events in the residuals in Figure <ref>. The left panel of Figure <ref> is the scatter plot of the duration and average relative flux of the events for three groups: (a) all events out of the transit (the black data in Figure <ref>); (b) residuals of the ringed-planet fit (the red line in Figure <ref>);
(c) residuals of the single-planet fit (the blue line in Figure <ref>). The right panel of Figure <ref> shows the distributions of the durations for the three groups. In each duration bin, the vertical axis shows the total number of points in all events with that duration. The distribution of (a) is normalized to give the same total number of events as (b) and (c). The quoted error bars are simply computed from the Poisson statistics of the number of each event. Figure <ref> shows that the distribution of (b) is closer to (a) than that of (c). Thus, the ringed-planet model is better than the planet model in terms of the properties of the correlated noise.

So far, we have shown that the ring-like anomaly cannot be explained statistically by the noise. We further consider whether the stellar noise can mimic the light-curve shape itself. We examine this hypothesis by focusing on the most significant fading event in the out-of-transit data; see the left panel of Figure <ref>. The light curve of this event is shown in Figure <ref>. We would like to see if the combination of the planet model and this event can reproduce the ringed-planet-like feature. To do this, we appropriately embed the transit of the planet into the light curve around the fading event. Here, the parameters of the planet are the same as in Table <ref>. Then we fit the two models with and without a ring to those data, as shown in Figure <ref> (b). As a result, we find the difference in χ^2 between the two models to be 157.9, which is smaller than the value of 434.9 obtained in Section 6.1 for solution 2. Thus, we conclude that it is difficult to reproduce the ring candidate by combining the stellar activity and the transit of the planet.

§.§.§ Combination of the above mechanisms

In principle, a combination of the mechanisms discussed above could be invoked to reproduce the observed anomaly. In Figure <ref>, for example, the bump and dip in the residuals might be explained separately by a spot crossing and an exomoon. However, such a probability is a priori very low, and so we do not discuss those possibilities any further.

§ CONCLUSION AND FUTURE PROSPECTS

In this paper, we present a methodology to detect exoplanetary rings and apply it to 89 long-period transiting-planet candidates in the Kepler sample for the first time. After fitting a single-planet model to the light curves of the target objects, we classify them into four groups depending on the observed anomalies and the model predictions. Assuming grazing geometry and a tilted ring, we obtain upper limits on R_out/R_p for 12 planet candidates, and find R_out/R_p < 1.5 for six of them. While we select five preliminary ringed-planet candidates using the results of the classification, four of them turn out to be false positives; KIC 10403228 remains as a possible ringed-planet system.

We fit our ringed-planet model to the light curve of KIC 10403228, and obtain two consistent solutions with a tilted ring. However, the V-shape of the current transit is a typical feature of an eclipsing binary, and the estimated orbital period of P = 450 years, derived on the assumption of a circular orbit, may be too long for the transit to be detected. Therefore, we also consider two other possibilities accounting for the data. One model assumes that the transit is caused by an eclipsing binary, and that the ring-like feature is caused by a circumstellar disk rather than by a planetary ring. For comparison, using the public code VESPA, we calculate the plausibility of this scenario and of the planet scenario, and find that we cannot exclude either possibility at the current stage.
The other model we consider assumes that the observed eclipse is caused by two objects orbiting around each other (a hierarchical triple configuration), where the orbital motion of the smaller binary produces the long and asymmetric eclipse observed for KIC 10403228. Assuming this model, we find various solutions for a wide range of orbital periods down to P ≃ 1400 days, although it requires more or less fine-tuned configurations. In addition to the above scenarios, we also discuss other possibilities, and find that none of them is likely to explain the data. In conclusion, there remain three leading scenarios accounting for the data: the “planetary ring scenario," the “circumstellar disk scenario," and the “hierarchical triple scenario." Follow-up observations would play an important role in the further study of this system.

The current research can be improved in several ways. We can enlarge the sample of target objects towards those with shorter orbital periods. The interpretation of KIC 10403228 is fundamentally limited by the fact that it exhibits only one transit. Obviously, the credibility would significantly increase if a system exhibited a robust ring-like anomaly repeatedly in transits at different epochs. Moreover, differences in the transit shapes at different epochs would enable us to discriminate between the “disk scenario" and the “hierarchical triple scenario." In addition, our current methodology puts equal weight on the data points over the entire transit duration. Since the signature of a ring is particularly strong around the ingress and egress, more useful information on R_out/R_p would be obtained with a more focused analysis of the features around those epochs. We plan to improve our methodology and apply it to a broader sample of transiting planets in due course. We do hope that we will be able to affirmatively answer the fundamental question, “Are planetary rings common in the Galaxy?"

§ ACKNOWLEDGEMENTS

We are grateful to the Kepler team for making the revolutionary data publicly available. We thank Tim Morton for helpful conversations, and the anonymous referees for a careful reading of the manuscript and constructive comments. M.A. is supported by the Advanced Leading Graduate Course for Photon Science (ALPS). K.M. is supported by the Leading Graduate Course for Frontiers of Mathematical Sciences and Physics (FMSP). This work is supported by JSPS Grant-in-Aids for Scientific Research No. 26-7182 (K.M.), No. 25800106 (H.K.), and No. 24340035 (Y.S.), as well as by the JSPS Core-to-Core Program “International Network of Planetary Sciences". This work was performed in part under contract with the Jet Propulsion Laboratory (JPL), funded by NASA through the Sagan Fellowship Program executed by the NASA Exoplanet Science Institute.

§ NUMERICAL INTEGRATION IN EQUATION (5)

We present a formulation for the fast and accurate numerical integration of Equation (<ref>). In addition to the (x,y) coordinates defined in Section 2, we also introduce polar coordinates (r,θ) whose origin is at the center of the star. The ranges of the (r,θ) integration are 0 < r < R_⋆ and 0 ≤ θ < 2π.
We integrate Equation (<ref>) by dividing the total range of integration into several pieces as follows:

∫ I(x,y) D(x,y) dS
= ∫_0^R_⋆ ∫_0^2π I(√(1-(r/R_⋆)^2)) D(r,θ) r dr dθ
= Σ_i Σ_l D_i,l ∫_r_i^r_i+1 ∫_θ_i,l(r)^θ_i,l+1(r) I(√(1-(r/R_⋆)^2)) r dr dθ
= Σ_i Σ_l D_i,l ∫_r_i^r_i+1 (θ_i,l+1(r) - θ_i,l(r)) I(√(1-(r/R_⋆)^2)) r dr.

The intervals of integration are specified by r_i and θ_i,l(r). We define them in the following, and the corresponding schematic illustration is shown in Figure <ref>. The number of intersection points between a circle of radius r and the ringed planet depends on the value of r; there exist boundary values of r at which the number of intersection points changes. We define r_i as the i-th boundary value, and arrange the set of r_i in ascending order. If there are elements with r_i > R_⋆, we insert R_⋆ into the set of r_i and exclude the elements that satisfy r_i > R_⋆. Next, let us suppose r_i < r < r_i+1, where the number of intersections remains the same. In this range, we define θ_i,j(r) to be the j-th value of θ among the intersection points between the ringed planet and a circle of radius r. The set of θ_i,j(r) is also arranged in ascending order, and we prepend 0 and append 2π to the set. We define D_i,l to be the value of D(r,θ,t) for θ_i,l(r) < θ < θ_i,l+1(r) and r_i < r < r_i+1. We derive the equations for r_i and θ_i,j(r) in the rest of this appendix.

§.§ Derivation of r_i

The conditions for the possible values of r_i are divided into the following three cases:

(a) Intersections of the edge of the planet (circle) and the edge of the ring (ellipse).
(b) Extreme points of the distance function from the center of the star to the edge of the planet (circle).
(c) Extreme points of the distance function from the center of the star to the edge of the ring (ellipse).

The number of r_i is at most eight for (a), two for (b), and two for (c). Cases (a) and (b) reduce to quadratic equations, which are easily solved. The last case reduces to quartic equations. Here, we derive the quartic equations using the method of Lagrange multipliers. Let the length of the major axis be 2R and that of the minor axis be 2R(1-f), where f is the oblateness, and let the center of the ellipse be at (x_p, y_p). For (x,y) on the edge of the ellipse, we define the following function:

A(x,y,λ) = x^2 + y^2 + λ[((x-x_p)/R)^2 + ((y-y_p)/(R(1-f)))^2 - 1].

The extremum conditions require

∂A/∂x = 2x + 2(x-x_p)λ/R^2 = 0,
∂A/∂y = 2y + 2(y-y_p)λ/((1-f)^2 R^2) = 0,
∂A/∂λ = ((x-x_p)/R)^2 + ((y-y_p)/(R(1-f)))^2 - 1 = 0.

We reduce the above three equations to the following quartic in λ:

λ^4/((1-f)^4 R^8) + (2λ^3/((1-f)^2 R^4))[1/R^2 + 1/((1-f)^2 R^2)] + λ^2 [1/R^4 + 4/((1-f)^2 R^4) + 1/((1-f)^4 R^4) - x_p^2/((1-f)^4 R^6) - y_p^2/((1-f)^2 R^6)] + λ [2/R^2 + 2/((1-f)^2 R^2) - 2(x_p^2 + y_p^2)/((1-f)^2 R^4)] + 1 - x_p^2/R^2 - y_p^2/((1-f)^2 R^2) = 0.

In general, quartic equations can be solved analytically, but we compute the solutions with a numerical root-finding algorithm because of the complexity of the analytic solutions. x and y are recovered from the derived λ as follows:

x = -x_p/(1 + λ/R^2) + x_p, y = -y_p/(1 + λ/((1-f)R)^2) + y_p.

The number of solutions for (x,y) is at most four. We exclude complex solutions and/or points (x,y) not on the ellipse. Equation (<ref>) gives singular solutions when λ = -R^2 or λ = -(1-f)^2 R^2. Inserting these values into Equation (<ref>) or (<ref>), we find x_p = 0 or y_p = 0. In these cases, we cannot use Equation (<ref>), but the conditions reduce to quadratic equations, which are easily solved.
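For concreteness, the case-(c) computation can be sketched numerically as follows (Python with numpy; a minimal illustration rather than our production scheme). Instead of typing in the expanded coefficients, we assemble the quartic in λ directly from the constraint x_p^2 a^2 (b^2+λ)^2 + y_p^2 b^2 (a^2+λ)^2 = (a^2+λ)^2 (b^2+λ)^2, with a^2 = R^2 and b^2 = (1-f)^2 R^2, which is equivalent to Equation (<ref>):

import numpy as np
from numpy.polynomial import polynomial as P

def ellipse_extreme_radii(xp, yp, R, f, tol=1e-9):
    # Candidate extremal distances from the origin (stellar center)
    # to the ellipse of Appendix A.1, via the quartic in lambda
    a2, b2 = R**2, (1.0 - f)**2 * R**2
    pa = P.polymul([a2, 1.0], [a2, 1.0])   # (a2 + lam)^2, lowest order first
    pb = P.polymul([b2, 1.0], [b2, 1.0])   # (b2 + lam)^2
    quartic = P.polysub(P.polymul(pa, pb),
                        P.polyadd(P.polymul([xp**2 * a2], pb),
                                  P.polymul([yp**2 * b2], pa)))
    lams = np.roots(quartic[::-1])         # np.roots wants highest order first
    lams = lams[np.abs(lams.imag) < tol].real
    # skip the singular cases lam = -R^2, -(1-f)^2 R^2 (x_p = 0 or y_p = 0),
    # which reduce to the quadratic equations mentioned in the text
    lams = lams[(np.abs(lams + a2) > tol) & (np.abs(lams + b2) > tol)]
    x = xp * lams / (a2 + lams)            # equivalent to Equation (<ref>)
    y = yp * lams / (b2 + lams)
    return np.sqrt(x**2 + y**2)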
§.§ Derivation of θ_i,l(r)

To derive θ_i,l(r), we calculate the intersections of a circle of radius r centered at (0,0) with the transiting object, which is composed of a circle (the planet) and two ellipses (the edges of the ring). The center of the ring system is at (x_p, y_p). The intersections of two circles are easily computed, and the number of intersection points is at most two. Here, we derive the equations for the intersection points of a circle and an ellipse. Let the radius of the circle be r, and take the same ellipse as before. For simplicity, we introduce the following parameters:

A = 1 - (1-f)^2, B = 2x_p(1-f)^2, C = (1-f)^2 R^2 - r^2 - (1-f)^2 x_p^2 - y_p^2, D = -2y_p.

Then the equation for x, the x-coordinate of the intersections, is given by

A^2 x^4 + 2AB x^3 + (2AC + B^2 + D^2) x^2 + 2BC x + C^2 - D^2 r^2 = 0.

Equation (<ref>) is a quartic equation, which can be solved analytically; we solve it with the root-finding method in the same way as before. The number of solutions of this equation is at most four. In total, there are up to 10 possible solutions for θ_i,l(r).

§.§ Precision and computational time

To test the precision and the computational time of our scheme, we simulate a transit of a Saturn-like planet with R_p/R_⋆ = 0.083667, r_in/p = 1.5, r_out/p = 2.0, θ = π/3, ϕ = π/3, and T = 1.0. We take P = 10759.3 days, a/R_⋆ = 2049.89, b = 0.5, q_1 = 0.49, and q_2 = 0.34 for the orbital and stellar parameters. For comparison, we prepare another integration scheme, which adopts a pixel-by-pixel integration around the planetary center <cit.>.

First, we check the precision of the integration of our proposed method by comparing it with the pixel-by-pixel integration with 5000 × 5000 pixels. As a result, the two methods agree to within 10^-7. Thus, our proposed method achieves a numerical error of less than 10^-7, which is much smaller than the typical noise level of 10^-4 in the Kepler data. Second, we check the computational time. Our proposed method typically takes 3.0 ms to calculate one point, and 200 s for the fitting in Section 6.1. For comparison, we also check the computational time of a planetary transit using the PyTransit package <cit.>, and we find that it takes 0.3 ms to compute all 300 data points and 0.3 s for the fitting in Section 6.1. Finally, we compare our method with the pixel-by-pixel integration. If we set the pixel size so that the computational time matches that of our method, the precision of the integration becomes 10^-5 in the fiducial configuration. This precision depends on the specific configuration; it becomes 10^-4 if we adopt R_p = 0.17 and b = 0.8, and 3 × 10^-6 for R_p = 0.042 and b = 0.3. In summary, when a high-precision model is needed, one should use our proposed method; if not, one may use the pixel-by-pixel integration to save computation. Incidentally, in the practical case of fitting with the Levenberg-Marquardt (LM) algorithm, our method is useful in that it gives smooth values of χ^2; this smoothness is needed to calculate the derivatives of χ^2 in the LM method.

§ METHOD OF TARGET CLASSIFICATION IN SECTION <REF>

§.§ Concept

As demonstrated in the main text, signatures of a ringed planet can be detected by searching for deviations from the model light curve of a ringless planet.
The deviation is, however, often very tiny and comparable to the noise level, so careful quantitative arguments are required to discuss the presence or absence of a ring in a given light curve. In the following, we present a procedure to evaluate the detectability of a ring based on the comparison between the residuals of the “planet-alone" model fit and the noise level in the light curve.

Let us denote one light curve including a transit by I_i (i = 0, 1, ⋯, N_data), where N_data is the number of data points. We also define δ_i as the residual of fitting I_i with the planet-alone model. As a quantitative measure of this residual signal δ_i relative to the noise level, we introduce the following signal-to-noise ratio:

S/N = Σ_i δ_i^2/σ^2 = (Σ_i δ_i^2/N_data)(N_data/σ^2) = Δ^2/(σ/√(N_data))^2, Δ^2 ≡ Σ_i δ_i^2/N_data.

In the last equality, we define Δ^2 as the variance of the residual time series, and σ is evaluated as the standard deviation of the out-of-transit light curve. We use the subscript “obs" to specify the above quantities obtained by fitting the planet-alone model to the real observed data: δ_i,obs, S/N_obs, and Δ^2_obs. On the other hand, we can also compute the corresponding values of δ_i, S/N, and Δ^2 by fitting a simulated light curve of a ringed planet with the planet-alone model. We denote these values by δ_i,sim(p), S/N_sim(p), and Δ^2_sim(p), where p represents the set of parameters of the ringed-planet model. If these values are sufficiently large compared to the noise variance (see Δ^2_thr below), the signal of the ringed planet is distinguishable from the noise. In addition, by comparing these theoretically expected residual levels with the observed ones, we can relate the observed residuals to the parameters of the ringed model, even in the absence of clear anomalies.

To simplify the following arguments, we mainly use Δ^2 instead of S/N to evaluate the significance of an anomaly (see also Section <ref> for the detailed reason). Practically, conversion from one to the other is simple, as the conversion factor σ/√(N_data) is well determined from the observed data alone; given a transit light curve, the transit duration T_dur and the bin size t_bin give the number of data points N_data = T_dur/t_bin, and the standard deviation σ can be inferred from the out-of-transit flux.

For a given region of parameter space p, Δ^2_sim(p) has a maximum value Δ^2_max,sim. If Δ^2_max,sim is smaller than some threshold value Δ^2_thr determined by the noise level in the light curve, ringed planets with the corresponding values of p, even if they exist, cannot be detected in the system. The comparison of Δ^2_obs, Δ^2_max,sim, and Δ^2_thr then allows for a classification into the four categories schematically illustrated in Figure <ref>:

(A): Δ^2_max,sim < Δ^2_thr. The expected signal from the ring is so small compared to the noise level that we cannot discuss its detectability.

(B): Δ^2_obs < Δ^2_thr < Δ^2_max,sim. Although rings with Δ^2_thr < Δ^2_sim(p) could have been detected, no significant anomaly is observed (Δ^2_obs < Δ^2_thr) in reality. Thus, the parameter region that gives Δ^2_thr < Δ^2_sim(p) is excluded.

(C): Δ^2_thr < Δ^2_max,sim < Δ^2_obs. A significant anomaly is detected, but its amplitude is too large to be explained by the ringed-planet model within the given range of p.

(D): Δ^2_thr < Δ^2_obs < Δ^2_max,sim. A significant anomaly is detected, and its amplitude is compatible with the ring model.
In this case, we may find ring parameters consistent with the observed anomaly.

The value of Δ^2_thr is arbitrary. In this paper, we choose Δ^2_thr so that it corresponds to S/N = 10 in Equation (<ref>):

Δ^2_thr = 10 σ^2/N_data,

where σ and N_data are calculated from the observed data. The methods to calculate the other variances, Δ^2_obs, Δ^2_sim(p), and Δ^2_max,sim, are presented in the following subsections.

Before proceeding further, let us consider the orbital-period dependence of N_data = T_dur/t_bin in Equation (<ref>). From Kepler's third law, T_dur ∝ P(R_⋆/a) ∝ P^1/3. For short-period planets, t_bin ∝ P because the number of folded transits is proportional to 1/P. Thus, the number of data points T_dur/t_bin is proportional to P^-2/3. This means that, for a given value of Δ^2, the detectability of rings (S/N) is higher for shorter-period planets. This explains the strong constraints on the ring parameters obtained by <cit.> for hot Jupiters.

§.§ Calculation of Δ^2_obs

§.§.§ Definition

The residual δ_i,obs is obtained by fitting the planet-alone model to the data. If the ring does not exist, the value of S/N_obs in Equation (<ref>), which is formally equivalent to a chi-squared, is expected to be close to the number of degrees of freedom, DOF_obs. In contrast, if the ring does not exist, S/N_sim(p) is equal to zero. This means that S/N_obs - S/N_sim(p) ≃ DOF_obs in the limit of a ringless system. Thus, for the comparison of Δ^2_sim(p) and Δ^2_obs, the value of (S/N - DOF_obs) serves as a better estimator of the observed anomaly than S/N itself. We therefore slightly modify Equation (<ref>) to define Δ^2_obs so that it corresponds to (S/N - DOF_obs):

Δ^2_obs = (χ^2 - DOF_obs)(σ/√(N_data))^2, where χ^2 = Σ_i (δ_i,obs/σ)^2.

The residual δ_i,obs is defined with respect to the best-fit planet-alone model obtained by minimizing χ^2, as described in Section <ref> below. The value of χ^2 is computed using the data just around the transit (within 0.6 T_dur of the transit center) so that the value is not strongly affected by the out-of-transit data. We assume DOF_obs = N_data - N_para - 1, where N_para is the number of fitted parameters.

§.§.§ Details of fitting

In the fitting, we minimize χ^2 using the Levenberg-Marquardt algorithm as implemented in cmpfit <cit.>. The adopted model M(t) is composed of a fourth-order polynomial and a transit model F(t):

M(t) = F(t)[c_0 + c_1(t - T_0) + c_2(t - T_0)^2 + c_3(t - T_0)^3 + c_4(t - T_0)^4],

where the c_i are the coefficients of the polynomial, and T_0 is a time offset. The polynomial is used to remove the long-term flux variations in the light curve. The transit model F(t) is implemented with the PyTransit package <cit.>. PyTransit generates light curves based on the model of <cit.> with the quadratic limb-darkening law.
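For concreteness, the composite model can be sketched as follows (Python; `transit_flux` is our own placeholder standing in for the PyTransit-based F(t), not the actual PyTransit API):

def model_M(t, transit_flux, c, T0):
    # M(t) = F(t) [c0 + c1 (t-T0) + ... + c4 (t-T0)^4], Equation (<ref>);
    # transit_flux(t) is an assumed callable returning F(t)
    dt = t - T0
    baseline = c[0] + c[1]*dt + c[2]*dt**2 + c[3]*dt**3 + c[4]*dt**4
    return transit_flux(t) * baseline

In the actual analysis, the five c_i, the transit parameters, and the minimization of χ^2 are handled by the LM fit described above.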
The above model M(t) includes 12 parameters: t_0, R_p/R_⋆, b, a/R_⋆, P, q_1, q_2, and c_i (i = 0∼4). For the KOIs, the initial values of a/R_⋆, R_p/R_⋆, and b for the fitting are taken from the KOI catalog. The initial values of the limb-darkening parameters are taken from the Kepler Input Catalog. For a single transit event, where we cannot estimate the orbital period from the transit interval, we choose P, instead of a/R_⋆, as a fitting parameter and estimate a/R_⋆ from P using Kepler's third law and the mean stellar density given in the catalog.

In the fitting, we remove outliers iteratively to evaluate χ^2 correctly. We first fit all the data with the model M(t), and flag the points that deviate by more than 5σ from the best model. We then refit only the non-flagged data using the same model, and update the flags of all the original data points, including the ones classified as outliers before, on the basis of the new best model and the same 5σ criterion. We iterate this procedure until the flagged data converge. While this process gives a more robust evaluation of χ^2, it may also erase the signature of a ringed planet; thus we visually check all the light curves in any case, so as not to miss real ringed planets. The noise variance σ^2 is estimated for each transit light curve by fitting the out-of-transit light curve with a fourth-order polynomial and calculating the variance of the residuals. Flare-like events are excluded from the estimation of the noise variance.

§.§ Calculation of Δ^2_sim(p) and Δ^2_max,sim

Since the parameter space p of a ringed planet is very vast, we wish to reduce the volume we need to search with simulations as much as possible. First, we show that Δ^2_sim(p) does not depend on P and a/R_⋆ when the other parameters are fixed, including the limb-darkening parameters q_1 and q_2, the transit impact parameter b, the planet-to-star radius ratio R_p/R_⋆, the inner and outer ring radii relative to the planetary radius r_in/p and r_out/p, the direction of the ring (θ, ϕ), and the shading parameter T. This property becomes apparent by rewriting Δ^2_sim(p) approximately in the following integral form, assuming that the sampling rate (t_bin) is sufficiently small compared to the duration T_dur:

Δ^2_sim(p) ≃ ∫_-T_dur/2^T_dur/2 δ_sim^2(t,p) dt / ∫_-T_dur/2^T_dur/2 dt = ∫_-1/2^1/2 δ_sim^2(T_dur t',p) dt' / ∫_-1/2^1/2 dt' = ∫_-1/2^1/2 δ̄_sim^2(t',p) dt',

where δ̄_sim(t,p) ≡ δ_sim(T_dur t,p) and the origin of time is shifted to the transit center. Assuming that the values of q_1, q_2, b, R_p/R_⋆, r_out/p, r_in/p, θ, ϕ, and T are fixed, δ̄_sim(t,p) defined above does not depend on T_dur explicitly. Therefore, Δ^2_sim(p) given by Equation (<ref>) does not depend on the time scale of the transit T_dur, which is determined by P and a/R_⋆, and we do not need to simulate the dependence of Δ^2_sim(p) on these two parameters.

To constrain the parameter space further, we use the observed transit depth. Here we also assume that the values of q_1, q_2, b, T, r_in/p, and the ring direction are fixed, and that R_p/R_⋆ and r_out/p are the only free parameters. Then, the constraint on the observed transit depth leaves only one degree of freedom, specified by contours in the R_p/R_⋆-r_out/p plane; henceforth we write Δ^2_sim(p) as Δ^2_sim(r_out/p) to show this dependence explicitly.

To compute the relation Δ^2_sim(r_out/p) for a given transit depth, we first calculate the value of Δ^2_sim and the transit depth for a sufficient number of points in the (r_out/p, R_p/R_⋆) plane.
The number of grid points required depends on the fiducial model; in our simulation, we prepare about two hundred points for each model in Table <ref>. For any r_out/p, the observed transit depth uniquely translates into R_p/R_⋆ by interpolation in the R_p/R_⋆-transit depth plane, because the transit depth is a monotonically increasing function of R_p/R_⋆. Thus, a given value of r_out/p is uniquely related to Δ^2_sim for the given transit depth. By repeating this procedure for many different values of r_out/p, we can compute the relation Δ^2_sim(r_out/p). We note that once a sufficient number of interpolated lines are prepared, one transit depth determines the relation Δ^2_sim(r_out/p) without additional calculation; a minimal numerical sketch is given at the end of this section.

Figure <ref> shows Δ^2_sim(r_out/p) curves created in this way for 4×4 = 16 different sets of impact parameters, ring directions, and transit depths. The four sets of p adopted here (models I-IV) are summarized in Table <ref>, and the four transit depths are chosen to be 0.001, 0.005, 0.01, and 0.05. We fix T = 1 and r_in/p = 1 in all of these simulations. Here we simulate Δ^2_sim(r_out/p) only for 1 ≤ r_out/p ≤ r_eq, where r_eq is the value of r_out/p for which the minor axis of the sky-projected outer ring is equal to the planetary radius, computed for each model. This is because the value of Δ^2_sim(r_out/p) shows no R_p/R_⋆ dependence beyond r_eq when T = 1 and r_in/p = 1 are adopted; in that case, the planetary disk lies entirely within the outer disk, and the transit depth is determined solely by the latter.

In this paper, we only use the observed constraint on the transit depth. However, this is just for simplicity, and we could also take into account the constraints on other parameters, including b, q_1, and q_2, from the morphology of the observed transit light curve (e.g. egress and ingress durations). Such constraints would further restrict the ring models consistent with the observed light curve and thus enable more elaborate discussions of the ring parameters, which we leave to future work.

§ DERIVATION OF THE UPPER LIMIT OF R_OUT/P: CASE OF KOI-1466.01

If a system is classified into group (B), the ring models with Δ^2_thr < Δ_sim^2(r_out/p) are excluded. The upper limits of r_out/p thus obtained are summarized in Sections 4 and 5. Here we describe how the limit is derived using the relation Δ^2_sim(r_out/p), taking KOI-1466.01 as an example. The black and red lines in Figure <ref> are the theoretically expected signals from ringed planets (i.e., Δ^2_sim(r_out/p)) for models I-IV and for the transit depth of 0.0202 inferred from the observed data. The green line shows the threshold value Δ^2_thr that satisfies S/N = 10, and the blue line shows the observed residual level Δ^2_obs obtained by fitting the planet-alone model to the data. Here Δ^2_obs < Δ^2_thr, which means that no significant deviation from the planet-alone model is detected. In this case, we can in turn exclude the models above the green line, because any anomaly above this level should have been detected if present. In the case of the black solid line (model I), for example, a ring with r_out/p > 1.5 would have produced an anomaly with S/N > 10, which is not detected in reality. Thus, we can set the upper limit r_out/p < 1.5 for model I.
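Here is the minimal sketch of the depth-to-Δ^2_sim interpolation referenced above. The tabulated grids below are synthetic stand-ins for the simulated (r_out/p, R_p/R_⋆) tables of one fiducial model; only the interpolation logic is illustrative of the procedure.

```python
import numpy as np

# Synthetic stand-ins for the simulated tables of one fiducial model:
# depth_tab[i, k] and d2_tab[i, k] tabulate the transit depth and
# Delta^2_sim at (r_out/p = r_out_grid[i], R_p/R_star = rp_grid[k]).
r_out_grid = np.linspace(1.0, 3.0, 21)
rp_grid = np.linspace(0.05, 0.25, 41)
depth_tab = rp_grid[None, :]**2 * (1.0 + 0.5 * (r_out_grid[:, None] - 1.0))
d2_tab = 1e-6 * (r_out_grid[:, None] - 1.0) * rp_grid[None, :]

def delta2_sim_curve(depth_obs):
    """For each r_out/p, invert the monotonic depth(R_p/R_star) relation
    and read off the corresponding Delta^2_sim."""
    out = np.empty(r_out_grid.size)
    for i in range(r_out_grid.size):
        rp = np.interp(depth_obs, depth_tab[i], rp_grid)  # depth -> Rp/R*
        out[i] = np.interp(rp, rp_grid, d2_tab[i])        # Rp/R* -> Delta^2
    return out

curve = delta2_sim_curve(0.0202)   # e.g. the KOI-1466.01 transit depth
```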
Note that the upper limits depend on the adopted parameter set; this situation is clearly illustrated in Figure <ref>, where similar limits cannot be derived for the other models.

Arnold, L., & Schneider, J. 2004, A&A, 420, 1153
Barnes, J. W., & Fortney, J. J. 2004, ApJ, 616, 1193
Barnes, J. W., Linscott, E., & Shporer, A. 2011, ApJS, 197, 10
Benomar, O., Masuda, K., Shibahashi, H., & Suto, Y. 2014, PASJ, 66, 94
Brogi, M., de Kok, R. J., Albrecht, S., et al. 2016, ApJ, 817, 106
Brown, T. M., Charbonneau, D., Gilliland, R. L., Noyes, R. W., & Burrows, A. 2001, ApJ, 552, 699
Carter, J. A., & Winn, J. N. 2010, ApJ, 716, 850
Clanton, C., & Gaudi, B. S. 2016, ApJ, 819, 125
Coughlin, J. L., Mullally, F., Thompson, S. E., et al. 2016, ApJS, 224, 12
Cox, A. N. 2000, Allen's Astrophysical Quantities
Dotter, A., Chaboyer, B., Jevremović, D., et al. 2008, ApJS, 178, 89
Dressing, C. D., & Charbonneau, D. 2013, ApJ, 767, 95
Dyudina, U. A., Sackett, P. D., Bayliss, D. D. R., et al. 2005, ApJ, 618, 973
Hayashi, C. 1981, Progress of Theoretical Physics Supplement, 70, 35
Heising, M. Z., Marcy, G. W., & Schlichting, H. E. 2015, ApJ, 814, 81
Huber, D., Chaplin, W. J., Christensen-Dalsgaard, J., et al. 2013, ApJ, 767, 127
Janson, M., Hormuth, F., Bergfors, C., et al. 2012, ApJ, 754, 44
Kenworthy, M. A., & Mamajek, E. E. 2015, ApJ, 800, 126
Kipping, D. M. 2010, MNRAS, 408, 1758
Kipping, D. M. 2013, MNRAS, 435, 2152
Lainey, V., Karatekin, Ö., Desmars, J., et al. 2012, ApJ, 752, 14
Maeder, A. 2009, Physics, Formation and Evolution of Rotating Stars, doi:10.1007/978-3-540-76949-1
Mandel, K., & Agol, E. 2002, ApJ, 580, L171
Markwardt, C. B. 2009, in ASP Conf. Ser. 411, Astronomical Data Analysis Software and Systems XVIII, ed. D. A. Bohlender, D. Durand, & P. Dowler, 251
Masuda, K. 2015, ApJ, 805, 28
Morton, T. D. 2012, ApJ, 761, 6
Morton, T. D. 2015, VESPA: False Positive Probabilities Calculator, Astrophysics Source Code Library, ascl:1503.011
Ohta, Y., Taruya, A., & Suto, Y. 2005, ApJ, 622, 1118
Ohta, Y., Taruya, A., & Suto, Y. 2009, ApJ, 690, 1
Parviainen, H. 2015, MNRAS, 450, 3233
Queloz, D., Eggenberger, A., Mayor, M., et al. 2000, A&A, 359, L13
Rappaport, S., Swift, J., Levine, A., et al. 2014, ApJ, 788, 114
Sanchis-Ojeda, R., Winn, J. N., Holman, M. J., et al. 2011, ApJ, 733, 127
Santos, N. C., Martins, J. H. C., Boué, G., et al. 2015, A&A, 583, A50
Schlichting, H. E., & Chang, P. 2011, ApJ, 734, 117
Schneider, J. 1999, Académie des Sciences Paris Comptes Rendus Série B Sciences Physiques, 327, 621
Schwarz, H., Ginski, C., de Kok, R. J., et al. 2016, arXiv:1607.00012
Snellen, I. A. G., Brandl, B. R., de Kok, R. J., et al. 2014, Nature, 509, 63
Uehara, S., Kawahara, H., Masuda, K., Yamada, S., & Aizawa, M. 2016, ApJ, 822, 2
Wang, J., Fischer, D. A., Barclay, T., et al. 2015, ApJ, 815, 127
Zhou, Y., Apai, D., Schneider, G. H., Marley, M. S., & Showman, A. P. 2016, ApJ, 818, 176
Zuluaga, J. I., Kipping, D. M., Sucerquia, M., & Alvarado, J. A. 2015, ApJ, 803, L14
Complex Networks from Classical to Quantum

Jacob Biamonte (jacob.biamonte@qubit.org, DeepQuantum.AI), Deep Quantum Labs, Skolkovo Institute of Science and Technology, Skoltech Building 3, Moscow, Russia 143026
Mauro Faccin (mauro.faccin@uclouvain.be), ICTEAM, Université Catholique de Louvain, Euler Building 4, Avenue Lemaitre, B-1348 Louvain-la-Neuve, Belgium
Manlio De Domenico (mdedomenico@fbk.eu), Fondazione Bruno Kessler, Via Sommarive 18, 38123 Povo (TN), Italy
==========================================

Recent progress in applying complex network theory to problems in quantum information has resulted in a beneficial cross-over. Complex network methods have successfully been applied to transport and entanglement models, while information physics is setting the stage for a theory of complex systems with quantum information-inspired methods. Novel quantum-induced effects have been predicted in random graphs—where edges represent entangled links—and quantum computer algorithms have been proposed to offer enhancement for several network problems. Here we review the results at the cutting edge, pinpointing the similarities and the differences found at the intersection of these two fields.

Quantum mechanics has long been predicted to help solve computational problems in physics <cit.>, chemistry <cit.>, and machine learning <cit.>, and to offer quantum security enhancement in communications <cit.>, including a quantum secure Internet <cit.>. Rapid experimental progress has pushed quantum computing and communication devices into truly data-intensive domains, where even the classical network describing a quantum system can exhibit complex features, giving rise to what appears as a paradigm shift needed to face a fundamental type of complexity <cit.>. Methods originating in complex networks—traditionally based on statistical mechanics—are now being generalized to the quantum domain in order to address these new quantum complexity challenges.

Building on several fundamental discoveries <cit.>, complex network theory has demonstrated that many (non-quantum) systems exhibit similarities in their complex features <cit.>, in the organization of their structure and dynamics <cit.>, the controllability of their constituents <cit.>, and their resilience to structural and dynamical perturbations <cit.>. Certain quantum systems have been shown to indeed exhibit complex features related to classical systems, as well as novel mechanisms and principles that interrelate complex features in quantum systems <cit.>.

Two types of quantum networks have been of primary focus in the series of pioneering results we review. The first consists of quantum systems whose connections are represented by entangled states <cit.>. These quantum networks are used in secure quantum communication systems. The second area of focus consists of networks of quantum systems, such as atoms or superconducting quantum electronics, whose connections are physical <cit.>. Such systems are used to develop quantum-enhanced algorithms or quantum information transport systems, both modeled by quantum walks on complex networks. At a fundamental level, the two types of quantum networks are described by quantum information theory, allowing one to extend the spectrum of network descriptors—such as ranking indicators, similarity and correlation measures—inside the quantum domain.
Interestingly, the same tools can then be appropriately modified to apply to traditional complex networks, suggesting the existence of a framework—network information theory—suitable for application to both classical and quantum networked systems <cit.>. This bidirectional cross-over is carving out a coherent path forward built fundamentally on the intersection of these two fields (see Fig. <ref>). Several quantum effects are still outside the predictive range of applicability of complex network theory. Future work should build on recent breakthroughs and head towards a new theory of complex networks which augments the current statistical mechanics approach with a theory built fundamentally on quantum mechanics. Such a unified path forward appears to be through the language of information theory.

Here, we make an effort to review some of the crucial steps towards the creation of a network theory based fundamentally on quantum effects. Therefore, we do not cover several topics that, nevertheless, deserve to be mentioned as part of the field. These include, in no particular order, quantum gravity theories based on complex networks <cit.>, synchronization in and on quantum networks <cit.>, quantum random circuits <cit.>, and classical spin models and quantum statistics successfully used in complex network theory <cit.> (see <cit.> for a thorough review).

§ NETWORKS IN QUANTUM PHYSICS VS COMPLEXITY

Network and graph theory fundamentally arises in nearly all aspects of quantum information and computation. As is the case with traditional network science, not all networks exhibit what is considered as `complexity'. Here we will briefly recall the basic definition of a network, mention several areas where network theory arises in quantum computation, and contrast this with the concept of a complex network.

A network is an abstract representation of relationships (encoded by edges) between units (encoded by nodes) of a complex system. Edges can be directed, i.e. they can represent information incoming to or outgoing from a node, and, in general, they can be weighted by real numbers. The number of incoming, outgoing and total edges is known as the incoming, outgoing and total degree of a node, respectively, while the sum of the corresponding weights defines the incoming, outgoing and total strength of that node, as illustrated in the short sketch below. Networks are often characterized by how node degree and strength are distributed and correlated. Systems modeled by uncorrelated networks with homogeneous degree distribution are known as Erdős–Rényi networks, whereas systems with power-law degree distribution are known as scale-free networks.
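As an elementary illustration of these definitions, consider a toy weighted, directed network (all weights hypothetical):

```python
import numpy as np

# W[i, j] is the weight of the edge from node j to node i (0 = no edge).
W = np.array([[0.0, 2.0, 0.0],
              [1.0, 0.0, 0.5],
              [0.0, 3.0, 0.0]])

A = (W > 0).astype(int)
k_in, k_out = A.sum(axis=1), A.sum(axis=0)   # in- and out-degree
s_in, s_out = W.sum(axis=1), W.sum(axis=0)   # in- and out-strength
k_tot, s_tot = k_in + k_out, s_in + s_out    # total degree and strength
```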
We refer to <cit.> for reviews of network concepts and models. The use of various aspects of graph and network theory can be found in all aspects of quantum theory, yet not all networks are complex. The commonly considered networks include (i) quantum spins arranged on graphs; (ii) quantum random walks on graphs; (iii) quantum circuits/networks; (iv) superconducting quantum (electrical) circuits; (v) tensor network states; (vi) quantum graph states, etc. Although the idea of a complex network is not defined in a strict sense, the definition is typically that of a network which exhibits an emergent property, such as a non-trivial distribution in node degree. This is in contrast to graph theory, which applies graph-theoretic or tensor-network reasoning to deduce and determine properties of quantum systems. Here we will focus on topics in quantum systems which are known to be connected with the same sort of complexity considered in complex networks.

§ QUANTUM NETWORKS BASED ON ENTANGLED STATES

To define quantum networks based on entangled states, let us start from the state of the i-th qubit, written without loss of generality as

|ψ_i⟩ = cos(α_i)|0⟩ + e^-iθ_i sin(α_i)|1⟩,

with |0⟩ and |1⟩ the preferred or `computational' basis. The qubit is in a pure, coherent superposition of the two basis states, and any measurement in this same basis will cause the state to collapse onto |0⟩ or |1⟩, with probability cos^2(α_i) and sin^2(α_i), respectively. Let us consider a quantum system with two qubits, i.e. i = 1 and 2. The basis of this system is given by the tensor product of the two single-qubit bases: |00⟩, |01⟩, |10⟩ and |11⟩. If the two qubits are not entangled, i.e. their states are independent from each other, then the state of the overall system can be written as, e.g., |ψ_12⟩ = |ψ_1⟩⊗|ψ_2⟩, whereas this is not possible if the two qubits are entangled. A generalization of this description to the case of mixed states is obtained in terms of the density matrix ρ: a non-negative, unit-trace Hermitian operator representing the state of the system as an ensemble of (unknown) pure states.

Instead of distributing entanglement on regular graphs, such as the uniform lattices typically studied in condensed matter physics, it has been shown that it is possible to tune the amount of entanglement between two nodes in such a way that it equals the probability to have a link in (classical) Erdős–Rényi graphs <cit.>. Such random graphs can be defined by the family of networks G(N,p), where N is the number of nodes and p the probability to find a link between any two nodes. The probability scales with the size of the network following a power law, p ∝ N^-z, with z ≥ 0. In classical network theory, there exists a critical probability p_c(N) such that, for p > p_c(N), a given subgraph of n nodes and l links appears with high probability. The classical result is that this critical probability scales with N as p_c(N) ∝ N^-n/l.

Acín, Cirac, and Lewenstein <cit.> formulated an elegant extension of this picture to the quantum realm by replacing each link with an entangled pair of particles, where the probability p_i,j = p that the link exists between nodes i and j is substituted by a quantum state ρ_i,j := ρ of two qubits, one at each node (see Fig. <ref>). One can build a quantum network where each node consists of N-1 qubits which are entangled, in pairs, with qubits of other nodes.
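These single- and two-qubit notions can be sketched numerically as follows (state parameters hypothetical); entanglement of a pure two-qubit state is detected here via its Schmidt rank:

```python
import numpy as np

def qubit(alpha, theta):
    """|psi> = cos(alpha)|0> + e^{-i theta} sin(alpha)|1>."""
    return np.array([np.cos(alpha), np.exp(-1j * theta) * np.sin(alpha)])

psi1, psi2 = qubit(0.3, 0.0), qubit(1.1, 0.7)
product = np.kron(psi1, psi2)        # separable two-qubit state |psi_12>

def is_entangled(state):
    # A pure two-qubit state is entangled iff its 2x2 coefficient matrix
    # has more than one nonzero Schmidt (singular) value.
    s = np.linalg.svd(state.reshape(2, 2), compute_uv=False)
    return int(np.sum(s > 1e-12)) > 1

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)        # (|00> + |11>)/sqrt(2)
print(is_entangled(product), is_entangled(bell))  # False True
```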
In such a quantum random graph, although the connections are identical and pure, they encode non-maximally entangled pairs. For pure states of qubits the density matrix is ρ = |ϕ⟩⟨ϕ|, with

|ϕ⟩ = 1/√2 (√(2-p)|00⟩ + √p|11⟩).

Here, 0 ≤ p ≤ 1 quantifies the entanglement of links, and the state of the overall quantum random graph can be denoted by |G(N,p)⟩. If each link, i.e. each entangled pair, attempts to convert its state to the maximally entangled one (p = 1) through local operations and classical communication (LOCC), the optimal probability of successful conversion is exactly p. It follows that the fraction of existing entangled states converted to maximally entangled ones by LOCC corresponds to the probability of having a link between nodes in the corresponding classical random network <cit.>. By varying the value of the parameter z, i.e. how the critical probability scales with system size, it is possible to control the number and type of subgraphs present in a quantum network of N nodes. This is useful to create special multipartite states, such as the Greenberger-Horne-Zeilinger state, which exhibits non-classical correlations <cit.>. The striking result is that it is possible to obtain, with probability approaching unity, a quantum state with the topology of any finite subgraph for N approaching infinity and z = 2.

This bridge between complex network theory and quantum theory provides a powerful tool to investigate the critical properties of a quantum system. For instance, in the case of regular lattices, it has been shown that the probability p_opt to establish a perfect quantum channel between the nodes can be mapped to the probability of distributing links among each pair of nodes in the lattice <cit.>, a scenario that can be studied using the well-established bond-percolation theory from statistical physics. This result allows one to calculate the critical probability above which the system will exhibit an infinite connected cluster. In the case of qubits, it has been shown that the probability of having an entangled path of infinite length—i.e., an infinite sequence of entangled states connecting an infinite number of qubits—is unity, whereas for product states this probability is zero, denoting the existence of a sharp transition between these two scenarios. However, local measurements based on this approach, called classical entanglement percolation (CEP), are not optimal, in general, to generate maximally entangled states: CEP is not even asymptotically optimal for two-dimensional lattices, and new quantum protocols based on quantum entanglement percolation have to be used instead <cit.>. A novel critical phenomenon, defining an entanglement phase transition, emerges from this new strategy, where the critical parameter is the degree of entanglement required to be distributed in order to establish a quantum channel with probability that does not decay exponentially with the size of the system, at variance with CEP. This type of enhancement with respect to the classical case has been reported for different network topologies, such as Erdős–Rényi, scale-free and small-world networks <cit.>.

Unexpected quantum effects emerging from network effects have been reported. Cardillo et al. show that nodes which store the largest amount of information are the ones with intermediate connectivity and not the hubs, breaking down the usual hierarchical picture of classical networks <cit.>.
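A minimal numerical sketch of the two ingredients above—the optimal LOCC conversion probability of a single link and the classical-entanglement-percolation estimate of the giant cluster—using a square lattice, whose bond-percolation threshold is 1/2 (all parameters toy values):

```python
import numpy as np
import networkx as nx
import random

def conversion_probability(p):
    """Optimal LOCC probability to convert |phi> = sqrt((2-p)/2)|00> +
    sqrt(p/2)|11> into a maximally entangled pair: twice the smallest
    squared Schmidt coefficient."""
    phi = np.array([np.sqrt((2 - p) / 2), 0.0, 0.0, np.sqrt(p / 2)])
    s = np.linalg.svd(phi.reshape(2, 2), compute_uv=False)
    return 2 * s.min()**2          # equals p

def cep_giant_cluster(G, p, trials=200):
    """CEP: keep each link with probability p (a successful conversion)
    and measure the average largest-cluster fraction."""
    n = G.number_of_nodes()
    frac = 0.0
    for _ in range(trials):
        H = nx.Graph()
        H.add_nodes_from(G)
        H.add_edges_from(e for e in G.edges if random.random() < p)
        frac += max(len(c) for c in nx.connected_components(H)) / n
    return frac / trials

lattice = nx.grid_2d_graph(30, 30)      # square lattice of entangled pairs
print(conversion_probability(0.6))       # -> 0.6
print(cep_giant_cluster(lattice, 0.6))   # above threshold: giant cluster
```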
More recently, Carvacho et al. measured the emergence of special quantum correlations, named non-bilocal, correlating distant qubits by means of several intermediate, typically independent, sources, providing evidence for the violation of local causality in a quantum network <cit.>.

The static entangled states providing the network connectivity described here will be replaced, in the next section, by dynamical processes on networked quantum systems.

§ QUANTUM NETWORKS BASED ON PHYSICAL CONNECTIVITY

Another wide area where network concepts have found applicability consists of quantum systems physically interconnected, such as atoms or superconducting quantum electronics <cit.>. These types of systems provide fertile ground where quantum algorithms are tested <cit.> and quantum information transport systems are studied <cit.>. Typical modeling approaches are based on so-called `quantum walks' on complex networks, with recent studies showing that quantum information tasks, typically designed for simple topologies, retain performance in very disordered structures <cit.>. Stochastic (non-quantum) walks are also a central model in complex network theory—see the review <cit.>.

Any quantum process can be viewed as a single-particle walk on a graph. Single-particle quantum walks represent a universal model of quantum computation—meaning that any algorithm for a quantum computer can be translated into a quantum walk on a graph—and, additionally, quantum walks have been widely studied in the realm of quantum search on graphs, in both continuous time and discrete time via coined walks (see e.g. <cit.>—in particular the graph optimality results <cit.>). The computational advantages of quantum versus stochastic random-walk-based algorithms have attracted wide interest, with the typical focus being on general graphs, which consequently do not exhibit complex features. However, many works have compared properties of stochastic <cit.> and quantum random walks <cit.> on complex networks <cit.>. Network topology has further been shown to provide a means to direct transport by adding complex numbers—while maintaining Hermiticity—to the network's adjacency matrix in `chiral quantum walks' <cit.> (note that chiral walks were realized experimentally in <cit.>). Open-system walks, which mix stochastic and quantum effects in `open' evolutions <cit.>, have aided in the study of quantum effects in biological exciton transport (again, modeled as a quantum walk), and developments in a quantum version of Google's PageRank <cit.> have appeared, providing a practical solution to overcome the degeneracy issues affecting the classical version and enhancing node ranking in large networks.
Recently, Faccin et al. analytically solved a model which sheds light on some key differences between stochastic and quantum walks on complex networks <cit.>. These differences push forward a general understanding which can lead to a theory explaining novel complex features in quantum systems.

Quantum walks on complex networks represent both a practical model of transport <cit.> and an interesting stage of comparison between the quantum and stochastic cases. As a closed quantum system exhibits fluctuations of the probabilities in time, typically a long-time average is considered. Physically, this is the best approximation one can hope for, provided that there is no knowledge of when the walk started. In this case, the probability to find a quantum walker in the i-th node is given by

p(i) = lim_T→∞ 1/T ∫_0^T dt |⟨i|U_t|0⟩|^2,

where |0⟩ is the initial state and U_t = e^-iQt is the unitary evolution operator defined by the quantum generator Q. Interference between subspaces of different energy vanishes in the long-time average, so we obtain an expression for the probability (P_Q)_i in terms of the energy eigenspace projectors Π_j of the Hamiltonian H_Q,

(P_Q)_i = ∑_j ⟨i|Π_j ρ(0) Π_j|i⟩.

Here Π_j = ∑_k |ϕ_j^k⟩⟨ϕ_j^k| projects onto the subspace spanned by the eigenvectors |ϕ_j^k⟩ of H_Q corresponding to the same eigenvalue λ_j.

Quantum-enhanced page-ranking. The non-symmetric adjacency matrix representing the directed connectivity of the World Wide Web, a.k.a. the Google matrix G, satisfies the Perron-Frobenius theorem <cit.>, and hence there is a maximal eigenvalue corresponding to an eigenvector of positive entries, Gp = p. The eigenvector p corresponds to the limiting distribution of occupation probabilities of a random web surfer—it represents a unique attractor for the dynamics independently of the initial state. The vector p is known as the Page-Rank. [A damping or teleportation factor is often included in the computation in order to ensure the applicability of the Perron-Frobenius theorem.] Several recent studies embed G into a quantum system and consider quantum versions of Google's Page-Rank <cit.>. Garnerone et al. <cit.> relied on an adiabatic quantum algorithm to compute the Page-Rank of a given directed network, whereas Burillo et al. <cit.> rely on a mixture of unitary and dissipative evolution to define a ranking that converges faster than classical PageRank.

The page-ranking vector p is an eigenvector of I-G corresponding to the zero eigenvalue (the lowest). This fact leads to the definition of a Hermitian operator which can play the role of a Hamiltonian, defined as

h^p = (I-G)^†(I-G);

though highly non-local, its ground state represents the target Page-Rank, which could be found by adiabatic quantum annealing into the ground state. Using a quantum computer to accelerate the calculation of various network properties has been considered widely <cit.>. As Page-Rank relies on finding the eigenvector corresponding to the lowest eigenvalue of h^p, the adiabatic algorithm opens the door to accelerating network calculations using quantum computers.
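A small numerical check of this construction on a toy directed graph (hypothetical links): the ground state of h^p = (I-G)†(I-G) reproduces the classical Page-Rank vector.

```python
import numpy as np

def google_matrix(A, alpha=0.85):
    """Column-stochastic Google matrix with damping (teleportation) alpha."""
    n = A.shape[0]
    cols = A.sum(axis=0)
    S = np.where(cols > 0, A / np.where(cols > 0, cols, 1), 1.0 / n)
    return alpha * S + (1 - alpha) / n

A = np.array([[0, 0, 1, 0],
              [1, 0, 0, 1],
              [1, 1, 0, 0],
              [0, 1, 0, 0]], float)   # A[i, j] = 1: link j -> i (toy web)
G = google_matrix(A)

# Classical Page-Rank: Perron eigenvector of G
w, V = np.linalg.eig(G)
p = np.real(V[:, np.argmax(np.real(w))])
p /= p.sum()

# Ground state of h^p = (I-G)^T (I-G) recovers the same ranking
hp = (np.eye(4) - G).T @ (np.eye(4) - G)
w2, V2 = np.linalg.eigh(hp)
g = np.abs(V2[:, 0])
g /= g.sum()
print(np.allclose(p, g, atol=1e-8))   # True
```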
Directing transport by symmetry breaking in chiral walks. Chiral quantum walks, introduced by Zimborás et al. in <cit.> and realized experimentally in <cit.>, append complex numbers to the adjacency matrix (playing the role of the system Hamiltonian) while still maintaining the Hermitian property <cit.>. These complex phases in many cases do not affect transfer probabilities: the theory explaining this finding was developed in <cit.>, without relying on approximations or averaging. The case of open systems has been investigated as well in <cit.>. In the scenarios where the addition of complex phases affects transfer probabilities, the underlying system breaks time-reversal symmetry and, consequently, the probability flow into the quantum system is biased. This fact enables directed state transfer without requiring a biased (or non-local) distribution in the initial states, or coupling to an environment. When the underlying graph is bipartite (e.g. a graph whose vertices can be divided into two disjoint sets, such as a square lattice), time-reversal symmetry in the transport probabilities cannot be broken; transport suppression is, however, still possible <cit.>. Bipartite graphs include trees, linear chains and, generally, graphs with only even cycles. These results point to a subtle interplay between time-reversal symmetry breaking and the topology of the underlying graph, giving rise to a new challenge for the dynamical control of probability transfer when considering walks on complex networks <cit.>.

Open quantum walks. The area of open quantum systems <cit.> studies noise and its effects in quantum systems. The adiabatic version of Page-Rank <cit.> uses a quantum stochastic walk as proposed by <cit.> (see also <cit.> for studies on open walks). Quantum stochastic walks are defined by a quantum walk undergoing dissipative dynamics; the latter follows the quantum master equation in Lindblad form:

ρ̇ = ℒ[ρ] = -i[H,ρ] + ∑_k (L_k ρ L_k^† - 1/2{L_k^† L_k, ρ}),

where the L_k represent jump operators, while [·,·] and {·,·} are the commutator and anti-commutator, respectively. The network topology is embedded by choosing H equal to the adjacency matrix of the symmetrized network and L_k = L_ij = √(G_ij)|i⟩⟨j|. In this picture, node ranking is defined by an activity vector α computed at the steady state ρ^ss.

Paparo et al. <cit.> introduced a Szegedy-type Markov-chain quantization <cit.> of the random walk. In order to quantize the Markov chain defined by the Google matrix G of N nodes, one introduces the Hilbert space ℋ = span{|i⟩_1|j⟩_2 : i,j = 1,…,N} and the superposition of outgoing edges from node i:

|ψ_i⟩ = |i⟩_1 ⊗ ∑_k √(G_ki)|k⟩_2,

together with Π = ∑_k |ψ_k⟩⟨ψ_k|. Each step of the quantum walk U is defined by a coin flip 2Π-1 followed by a swap operation S, which ensures unitarity:

U = S(2Π-1),

where the swap operator is S = ∑_ij |ij⟩⟨ji|. In the case of quantum Page-Rank, the ranking is set to the instantaneous probability P(i,t) of finding the walker at node i at time step t. To obtain a fixed value for the quantum Page-Rank, a time average is calculated, along with its variance as a measure of quantum fluctuations.

Another approach <cit.> involves defining a Markovian quantum evolution similar to Eq. (<ref>), with a tuning parameter α:

ρ̇ = -i(1-α)[H,ρ] + α[∑_k L_k ρ L_k^† - 1/2{L_k^† L_k, ρ}],

where the Hamiltonian H is the symmetrized adjacency matrix and L_k = L_ij = √(G_ij)|i⟩⟨j| are the jump operators which encode the directedness of the network. With this definition, a stationary state is guaranteed for values of α ∈ (0,1]; for α = 0 we revert to unitary evolution, while for α = 1 we revert to the stochastic case.
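A minimal sketch of this interpolated generator (not the implementation of the cited works): the master equation is vectorized into a superoperator for a toy directed network, and the stationary state yields the activity vector used for ranking.

```python
import numpy as np

def qsw_generator(H, G, alpha):
    """Vectorized generator of the interpolated master equation with jump
    operators L_ij = sqrt(G_ij)|i><j| (row-major vec convention)."""
    n = H.shape[0]
    I = np.eye(n)
    S = -1j * (1 - alpha) * (np.kron(H, I) - np.kron(I, H.T))
    for i in range(n):
        for j in range(n):
            if G[i, j] > 0:
                Lk = np.sqrt(G[i, j]) * np.outer(I[i], I[j])   # |i><j|
                LdL = Lk.conj().T @ Lk
                S += alpha * (np.kron(Lk, Lk.conj())
                              - 0.5 * (np.kron(LdL, I) + np.kron(I, LdL.T)))
    return S

# Toy directed network: G[i, j] = weight of the edge j -> i (hypothetical)
G = np.array([[0, 1, 0], [1, 0, 1], [1, 0, 0]], float)
G /= G.sum(axis=0)                       # column-stochastic
H = ((G + G.T) > 0).astype(float)        # symmetrized adjacency
S = qsw_generator(H, G, alpha=0.9)
w, V = np.linalg.eig(S)
rho_ss = V[:, np.argmin(np.abs(w))].reshape(3, 3)
rho_ss /= np.trace(rho_ss)               # steady state, trace-normalized
alpha_rank = np.real(np.diag(rho_ss))    # activity vector / node ranking
```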
With this definition of quantum Page-Rank, the authors <cit.> show that it resolves problems of degeneracy in the classical Page-Rank, enhances the importance of secondary hubs and, for certain values of α, exhibits faster convergence.

§ TOWARDS UNIFIED ANALYSIS OF NETWORK COMPLEXITY

The interaction between network science and quantum information science has led to the development of theoretical and computational tools that have benefited both fields. On the one hand, quantum-inspired tools, such as information entropies and quantum distance measures, have been successfully applied to practical problems concerning classical complex networks <cit.>. On the other hand, classical network descriptors have been ported to the quantum realm to gain better insights about the structure and the dynamics of networked quantum systems <cit.>. The cross-pollination between the two fields—including, among others, quantum statistics for modeling the dynamics of classical networks and their geometry <cit.> (see also Ref. <cit.> and references therein)—is still ongoing, with vibrant future research opportunities. Here we briefly review the advances concerning quantum-inspired entropic measures for networks and network-inspired measures for quantum systems.

Information entropy of classical networks. Historically, the concept of entropy has been successfully used to quantify the complexity of many systems <cit.>. Recently, the possibility of using quantum entropy and other quantum-information-theoretic measures has been explored by the community of network scientists. For classical complex networks, the von Neumann entropy was applied over a decade ago <cit.>. The combinatorial Laplacian matrix L, obtained from the adjacency matrix representing the network, is rescaled by the number of edges in the network. This normalization guarantees that the corresponding eigenvalues are non-negative and sum up to 1—so that they can be interpreted as probabilities—together with some other properties that make the resulting object similar to a quantum density matrix ρ. Network entropy is then defined according to the von Neumann quantum entropy as

S(ρ) = -Tr(ρ log_2 ρ).

By exploiting the eigendecomposition of the Laplacian matrix, it can be shown that this entropy corresponds to the Shannon entropy of the eigenvalue spectrum of ρ. This entropy has been generalized to the case of multilayer systems <cit.>, composite networks where units exhibit different types of relationships that are generally modeled as different layers (see <cit.> for a thorough review).

It has recently been shown that the von Neumann entropy calculated from the rescaled Laplacian does not satisfy the sub-additivity property in some circumstances <cit.>. This undesirable feature can be addressed by means of a more grounded definition <cit.>, whose rationale is to measure the entropy of a network by exploiting how information diffuses through its topology. Information diffusion in this context is governed by the equation

ψ̇_i(t) = -∑_j=1^N L_ji ψ_j(t),

with ψ_i(t) the amount of information in node i at time t. The solution of this diffusion equation is given, in vector notation, by ψ(t) = exp(-Lt)ψ(0), whose normalized propagator is used to define the density matrix as

ρ = e^-τL / Tr(e^-τL),

where time plays the role of a resolution parameter, allowing one to probe entropy at different scales <cit.>.
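A compact sketch of this construction: build ρ from the Laplacian propagator of a toy graph and compute its von Neumann entropy.

```python
import numpy as np

def density_matrix(A, tau=1.0):
    """rho = exp(-tau L) / Tr[exp(-tau L)], L the combinatorial Laplacian."""
    L = np.diag(A.sum(axis=1)) - A
    w, V = np.linalg.eigh(L)
    expL = (V * np.exp(-tau * w)) @ V.T
    return expL / np.trace(expL)

def von_neumann_entropy(rho):
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log2(w)))

A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], float)   # toy 4-node path graph
print(von_neumann_entropy(density_matrix(A, tau=1.0)))
```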
A similar approach, involving a modified Laplacian matrix, has recently been used for revealing the mesoscale structure of complex directed networks <cit.>. This quantum-inspired framework provides a powerful basis to develop an information theory of complex networks, with direct applications in classical network science, such as system comparison.

Comparing classical networks. A known problem in network science is to compare two networks without relying on a specific subset of indicators. Network information entropy allows one to introduce relative entropies, such as the Kullback-Leibler divergence, to compare two networks with density matrices ρ and σ, respectively:

𝒟_KL(ρ||σ) = Tr[ρ(log_2 ρ - log_2 σ)].

By exploiting the well-known classical result that the minimization of the Kullback-Leibler divergence between a reference distribution and its parametric model corresponds to the maximization of the likelihood, it has been shown that in a network context this allows one to define the network log-likelihood by

log_2 ℒ(Θ) = Tr[ρ log_2 σ(Θ)].

The introduction of network likelihood opens the door to a variety of applications in statistical inference and model selection, based on concepts such as the Fisher information matrix, the Akaike and Bayesian information criteria, and minimum description length, to cite some of them <cit.>.

This new framework has been used to compare networks for several purposes. For instance, in the case of pairs of networks, the graphs are first merged by connecting each node of one network to every node of the other network. Successively, continuous-time quantum walks are used to explore the composite system, and the quantum Jensen-Shannon divergence between the evolutions of the two walks is calculated. This divergence, which is a measure of (dis)similarity, is shown to be maximal when the two original networks are isomorphic <cit.>.

The square root of the quantum Jensen-Shannon divergence has the nice property of defining a metric, allowing one to define a distance between networks. If ρ and σ are two density matrices corresponding to two networks with N nodes, their Jensen-Shannon divergence is defined by

𝒟_JS(ρ||σ) = 1/2 𝒟_KL(ρ||μ) + 1/2 𝒟_KL(σ||μ) = S(μ) - 1/2[S(ρ) + S(σ)],

that is, the difference between the entropy of the mixture μ = 1/2(ρ+σ) and the semi-sum of the entropies of the original systems. In the context of multilayer systems, this measure has been used to quantify the distance between layers of a multiplex network, and to cluster and aggregate them appropriately in order to reduce its structural complexity <cit.>.

These ideas have quickly found direct applications in biology. In genetic molecular systems, such as the ones described by gene-protein interactions, layers might encode different relationships among molecules—functional, e.g. additive, suppressive and other types of association, or physical, e.g. co-localization or direct interaction. The information-theoretic framework described here made it possible to show that such systems exhibit a certain level of redundancy, larger than the one observed in man-made systems <cit.>, suggesting the existence of biological mechanisms devoted to maximizing the diversity of interactions.

In computational neuroscience studies, the connectome of the nematode C. elegans—one of the most studied in the field because of its small size, with approximately 300 neuronal cells—has been mapped to a multiplex network where layers encode synaptic, gap junction, and neuromodulator interactions.
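The comparison machinery above can be sketched as follows, with the density matrices ρ and σ built, e.g., from the Laplacian propagator of the previous sketch:

```python
import numpy as np

def S(rho):
    """Von Neumann entropy (base 2) of a density matrix."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log2(w)))

def js_divergence(rho, sigma):
    mu = 0.5 * (rho + sigma)
    return S(mu) - 0.5 * (S(rho) + S(sigma))

def network_distance(rho, sigma):
    # The square root of the Jensen-Shannon divergence defines a metric.
    return np.sqrt(max(js_divergence(rho, sigma), 0.0))
```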
For the C. elegans connectome, the analysis of reducibility revealed that the monoamine networks have a unique structure, with information complementary to that provided by the neuropeptide networks <cit.>. The same analysis, applied to a multilayer functional representation of the human brain, revealed, quantitatively, the importance of not disregarding or aggregating connectivity information for the clinical classification of healthy and schizophrenic subjects <cit.>. In another application, the Jensen-Shannon distance between layers of a multiplex system has been used to identify community-based associations in the human microbiome <cit.>, where the microbial network corresponding to each body site is represented by a layer in a multiplex network, in perfect agreement with biological expectation <cit.>. The (dis)similarity between networks has also been quantified by using a combination of the classical Jensen-Shannon distance and the concept of network node dispersion, measuring the heterogeneity of a graph in terms of connectivity distance among its nodes <cit.>.

Degree distribution of quantum networks. Nodes in a complex network have different roles, and their influence on system dynamics can vary widely depending on their topological characteristics. One of the simplest (and most widely applied) characteristics is degree centrality, defined as the number of edges incident on a node. Many real-world networks have been found to follow a widely heterogeneous distribution of degree values <cit.>. Several models, based on mechanisms like preferential attachment <cit.>, fitness <cit.> or constrained random wiring <cit.>, to mention some of them, were developed to reproduce the degree distributions commonly observed in empirical systems. Despite the complexity of the linking pattern, the degree distribution of a network affects the ongoing dynamics in a simple way. In fact, it can be shown that the probability of finding a memoryless random walker at a given node of a symmetric network at the stationary state is just proportional to the degree of that node <cit.>.

In <cit.> the authors consider the relationship between the stochastic and the quantum versions of such processes, with the ultimate goal of shedding light on the meaning of degree centrality in the case of quantum networks. They consider a stochastic evolution governed by the Laplacian matrix L_S = ℒD^-1, the stochastic generator that characterizes classical random-walk dynamics and leads to an occupation probability proportional to node degree. In the quantum version, a Hermitian generator is required, and the authors proposed the symmetric Laplacian matrix L_Q = D^-1/2ℒD^-1/2, generating a valid quantum walk that, however, does not lead to a stationary state, which makes a direct comparison between the classical and quantum versions of the dynamics difficult. A common and useful workaround to this issue is to average the occupation probability over time, as in Eq. (<ref>). The generators of the two dynamics are spectrally similar (see Fig. <ref>): they share the same eigenvalues, and the eigenvectors are related by the transformation ϕ_i^C = D^1/2ϕ_i^Q. As a consequence, if the system is in the ground state, the average probability to find the walker on a node will be the same as in the classical case, depending solely on the degree of each node.
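This last statement admits a quick numerical check on a toy graph (hypothetical adjacency): the ground state of L_Q reproduces the degree-proportional classical stationary distribution.

```python
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 0],
              [0, 1, 0, 0]], float)   # toy undirected graph
d = A.sum(axis=1)

Dm = np.diag(d**-0.5)
LQ = Dm @ (np.diag(d) - A) @ Dm        # L_Q = D^{-1/2} (D - A) D^{-1/2}
w, V = np.linalg.eigh(LQ)
phi0 = V[:, 0]                          # ground state (eigenvalue 0)

p_quantum = phi0**2 / np.sum(phi0**2)   # ground-state node occupation
p_classical = d / d.sum()               # stationary distribution ~ degree
print(np.allclose(p_quantum, p_classical))   # True
```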
When the system is not in the ground state, it is possible to define a quantumness measure

ε = 1 - ⟨ϕ_0^Q|ρ_0|ϕ_0^Q⟩,

describing how far from the classical case the probability distribution of the quantum walker will be. In the case of a uniformly distributed initial state ρ_0, this provides a measure of the heterogeneity of the degree distribution of a quantum network.

Mesoscale organization of quantum systems. Community detection, and in general mesoscopic structure detection, has been widely studied in the literature on classical complex networks <cit.>. While the notion of a community—"a subset of nodes more tightly connected than expected"—is in general ill-defined and part of an ongoing debate, the number of proposed algorithms is remarkably high and still growing. The cross-pollination between community detection and quantum mechanics proceeds on two levels. On the one hand, chronologically, the first attempts borrowed tools from quantum mechanics for applications to classical systems <cit.>; on the other hand, an algorithm to find communities in complex quantum systems was proposed in <cit.>.

In <cit.> the authors propose a method for data clustering, similar to kernel density estimators, in a quantum framework. The given data points are mapped to a Gaussian wave function and, supposing that the latter is an eigenstate of some time-independent Schrödinger equation,

Hψ = [T + V(x)]ψ = E_0 ψ,

the minimization of the potential V(x) leads to the desired clustering. An extension to dynamical quantum systems has been introduced in <cit.>; in this case the expectation value of the position operator evolves in time toward the closest minimum of the potential. This formulation can also leverage the acceleration provided by graphics hardware. A method based on continuous-time quantum walks was proposed in <cit.>, where a node affinity measure based on the response of the node population density to link failure was given: if the populations of two nodes change in a similar manner after link removal, the nodes are more likely to belong to the same community. A magnetic Laplacian, where a magnetic field is expected to traverse all cycles in the network, was used in <cit.>. With an approach similar to the chiral walks previously described, the symmetric Laplacian is amended with the original link directionality through a phase term e^±iθ, with θ a parameter of the method, and used for community detection in directed networks, a longstanding problem in network science.

In the case of quantum systems, partitioning into modular units has often been carried out on the basis of ad hoc considerations. In an effort to extend community detection to the quantum realm, Faccin et al. <cit.> introduced several closeness matrices inspired by different quantum quantities. Given the Hamiltonian H = ∑_ij H_ij|i⟩⟨j| of the quantum system of interest, the authors consider a continuous-time quantum walk on the system topology. The first quantity is energy transport, porting to the quantum realm the concept applied in several classical algorithms where communities are interpreted as traps for the dynamical process. In this framework, two nodes are considered to be close if, on average, the transport between them is high. If this average is computed over a short time period (compared to the evolution time scales), then the closeness values are proportional to the Hamiltonian terms |H_ij|, providing a classical approach to community detection (see Fig. <ref>).
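Returning to the quantumness measure ε defined at the beginning of this subsection, a small sketch follows; as an assumption, the "uniformly distributed initial state" is taken here to be the uniform superposition over nodes.

```python
import numpy as np

def quantumness(A):
    """epsilon = 1 - <phi0| rho_0 |phi0>, with phi0 the ground state of
    L_Q and rho_0 the uniform-superposition initial state (assumption)."""
    n = A.shape[0]
    d = A.sum(axis=1)
    Dm = np.diag(d**-0.5)
    LQ = Dm @ (np.diag(d) - A) @ Dm
    phi0 = np.linalg.eigh(LQ)[1][:, 0]
    rho0 = np.full((n, n), 1.0 / n)     # |u><u|, u = uniform superposition
    return 1.0 - float(phi0 @ rho0 @ phi0)

ring = np.roll(np.eye(6), 1, axis=0) + np.roll(np.eye(6), -1, axis=0)
star = np.zeros((6, 6))
star[0, 1:] = star[1:, 0] = 1.0
print(quantumness(ring), quantumness(star))  # ~0 for regular graph, >0 for hub
```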
A second proposed closeness measure is related to the average fidelity of the evolving process with respect to the initial state. In this case, the localization of the eigenstates is the characteristic determining the closeness of two nodes. These methods augment current ad hoc approaches to partitioning nodes in quantum transport systems with methods based on community detection algorithms.

§ OUTLOOK IN QUANTUM NETWORK SCIENCE

The generalization of complex network methods to the quantum setting represents a foundational advancement required to understand complexity in physical systems. These methods represent a change of paradigm that brings several roadblocks which must be faced. Centrally, the application domain of complex network methods in quantum physics must be expanded, complementing the studies in the other direction, where methods from the quantum domain have already been ported to network science. From a foundational perspective, as networks necessarily represent physical systems, such systems are inherently governed by the laws of information physics. In fact, a research line is now emerging that seeks to quantify networked systems in terms of their implicit information-processing capacity, with several applications to social, technological and biological systems <cit.>. Although this interesting direction seems promising, it is comparatively in its infancy, and it is still not known how to generalize classical concepts of complexity science to the quantum domain.

Another relevant research direction, crucial for applications in classical network science, concerns the interplay between structure and dynamics, which is almost entirely unclear in the case of quantum networks. Although scale-free networks have been considered in the quantum setting <cit.>, the result is an—albeit interesting—toy model with theoretical predictions to be verified experimentally. Therefore, further advancement along this track is of central interest, because it might play a fundamental role in quantum-enhanced technology and could lead to experiments devoted to testing cross-disciplinary ideas in quantum and complexity science <cit.>.

The quest for a theoretical foundation of quantum complex networks might have a deep impact on information and communication technology. While information processing in classical systems is well controlled, it is also rather limited, and quantum computing might overcome such limitations <cit.>. However, given that such systems are more sensitive to interactions with the environment, they are also more exposed to errors than their classical counterparts. Quantum error-correcting codes allow us to store and manipulate quantum information in the presence of certain types of noise that, in this context, might perturb the quantum system, causing effects similar to random failures in classical complex networks. The development of the quantum error-correction techniques that make quantum computing and quantum communication possible cannot be separated from the study of `system resilience', a topic that has found countless applications in classical network science <cit.>. Other types of perturbations that are natural for classical systems, such as targeted attacks on network hubs <cit.> or cascade-based attacks <cit.>, still have no clear quantum counterpart; their study, from both theoretical and experimental perspectives, will play a key role in the development of a quantum Internet <cit.>.
In fact, it is tantalizing to think about how quantum hubs should be protected from the quantum counterpart of typical denial-of-service attacks. Continued advances in the theory of complexity in networked quantum systems will help address the challenges faced as quantum technologies scale up to commercially feasible products. Work towards a quantum theory of complex networked systems is already opening up novel avenues when facing contemporary complexity challenges.

Author Contributions. JDB, MF and MDD designed and wrote this review.

Competing Financial Interests. The authors declare no competing financial or non-financial interests.

Data Availability. No datasets were generated or analysed during the current study.

Acknowledgements. JDB acknowledges the Foundational Questions Institute (FQXi, under grant FQXi-RFP3-1322) for financial support. MF acknowledges the MOVE-IN fellowship program for financial support. MDD acknowledges financial support from the Spanish program Juan de la Cierva (IJCI-2014-20225). The authors thank Alex Arenas and Leonie Mueck for useful feedback, and the Institute for Quantum Computing at the University of Waterloo and the Perimeter Institute for Theoretical Physics for funding and allowing us to organize the first workshop on the intersection of these topics. Diagrams are courtesy of Lusa Zheglova (illustrator).

Johnson, T. H., Clark, S. R. & Jaksch, D. What is a quantum simulator? EPJ Quantum Technology 1, 1–12 (2014).
Lanyon, B. P. et al. Towards quantum chemistry on a quantum computer. Nature Chemistry 2, 106–111 (2010). arXiv:0905.0887.
Biamonte, J. et al. Quantum machine learning. Nature 549, 195–202 (2017). arXiv:1611.09347.
Komar, P. et al. A quantum network of clocks. Nature Physics 10, 582–587 (2014).
Kimble, H. J. The quantum internet. Nature 453, 1023–1030 (2008).
Acín, A., Cirac, J. I. & Lewenstein, M. Entanglement percolation in quantum networks. Nature Physics 3, 256–259 (2007). This paper discovers several properties of entanglement-based complex quantum networks.
Faccin, M., Johnson, T., Biamonte, J., Kais, S. & Migdał, P. Degree distribution in quantum walks on complex networks. Physical Review X 3, 041007 (2013).
Paparo, G. D. & Martin-Delgado, M. A. Google in a quantum network. Scientific Reports 2, 444 (2012).
Garnerone, S. Thermodynamic formalism for dissipative quantum walks. Physical Review A 86, 032342 (2012).
Paparo, G., Müller, M., Comellas, F. & Martin-Delgado, M. Quantum Google algorithm. The European Physical Journal Plus 129 (2014).
Sánchez-Burillo, E., Duch, J., Gómez-Gardeñes, J. & Zueco, D. Quantum navigation and ranking in complex networks. Scientific Reports 2 (2012).
Lu, D. et al. Chiral quantum walks. Physical Review A 93, 042302 (2016); arXiv e-prints (2014). This paper experimentally realizes chiral quantum walks (walks that direct transport via modulated time-symmetry breaking), as proposed in <cit.>.
Faccin, M., Migdał, P., Johnson, T. H., Bergholm, V. & Biamonte, J. D. Community detection in quantum complex networks. Physical Review X 4, 041012 (2014). arXiv:1310.6638.
Watts, D. J. & Strogatz, S. H. Collective dynamics of small-world networks. Nature 393, 440–442 (1998).
Barabási, A.-L. & Albert, R. Emergence of scaling in random networks. Science 286, 509–512 (1999).
Boccaletti, S., Latora, V., Moreno, Y., Chavez, M. & Hwang, D.-U. Complex networks: structure and dynamics. Physics Reports 424, 175–308 (2006).
Kivelä, M. et al. Multilayer networks. Journal of Complex Networks 2, 203–271 (2014).
De Domenico, M., Granell, C., Porter, M. A. & Arenas, A. The physics of spreading processes in multilayer networks. Nature Physics 12, 901–906 (2016).
Guimerà, R. & Amaral, L. A. N. Functional cartography of complex metabolic networks. Nature 433, 895–900 (2005).
Palla, G., Derényi, I., Farkas, I. & Vicsek, T. Uncovering the overlapping community structure of complex networks in nature and society. Nature 435, 814–818 (2005).
Song, C., Havlin, S. & Makse, H. A. Self-similarity of complex networks. Nature 433, 392–395 (2005).
Colizza, V., Flammini, A., Serrano, M. A. & Vespignani, A. Detecting rich-club ordering in complex networks. Nature Physics 2, 110–115 (2006).
Boguñá, M., Krioukov, D. & Claffy, K. C. Navigability of complex networks. Nature Physics 5, 74–80 (2009).
Vespignani, A. Modelling dynamical processes in complex socio-technical systems. Nature Physics 8, 32–39 (2012).
Liu, Y.-Y., Slotine, J.-J. & Barabási, A.-L. Controllability of complex networks. Nature 473, 167–173 (2011).
Albert, R., Jeong, H. & Barabási, A.-L. Error and attack tolerance of complex networks. Nature 406, 378–382 (2000).
Callaway, D. S., Newman, M. E., Strogatz, S. H. & Watts, D. J. Network robustness and fragility: percolation on random graphs. Physical Review Letters 85, 5468 (2000).
Buldyrev, S. V., Parshani, R., Paul, G., Stanley, H. E. & Havlin, S. Catastrophic cascade of failures in interdependent networks. Nature 464, 1025–1028 (2010).
Gao, J., Buldyrev, S. V., Stanley, H. E. & Havlin, S. Networks formed from interdependent networks. Nature Physics 8, 40–48 (2012).
Radicchi, F. & Arenas, A. Abrupt transition in the structural formation of interconnected networks. Nature Physics 9, 717–720 (2013).
De Domenico, M., Solé-Ribalta, A., Gómez, S. & Arenas, A. Navigability of interconnected networks under random failures. Proceedings of the National Academy of Sciences 111, 8351–8356 (2014).
De Domenico, M. & Biamonte, J. Spectral entropies as information-theoretic tools for complex network comparison. Physical Review X 6, 041062 (2016). This paper develops an information theory approach to study complex networks and builds on spectral methods found in quantum statistical mechanics.
Cuquet, M. & Calsamiglia, J. Entanglement percolation in quantum complex networks. Physical Review Letters 103, 240503 (2009).
Perseguers, S., Lewenstein, M., Acín, A. & Cirac, J. Quantum random networks. Nature Physics 6, 539–543 (2010).
Cirac, J. I., Zoller, P., Kimble, H. J. & Mabuchi, H. Quantum state transfer and entanglement distribution among distant nodes in a quantum network. Physical Review Letters 78, 3221 (1997).
Chanelière, T. et al. Storage and retrieval of single photons transmitted between remote quantum memories. Nature 438, 833–836 (2005).
Wilk, T., Webster, S. C., Kuhn, A. & Rempe, G. Single-atom single-photon quantum interface. Science 317, 488–490 (2007).
Politi, A., Cryan, M. J., Rarity, J. G., Yu, S. & O'Brien, J. L. Silica-on-silicon waveguide quantum circuits. Science 320, 646–649 (2008).
Ritter, S. et al. An elementary quantum network of single atoms in optical cavities. Nature 484, 195–200 (2012).
Aspuru-Guzik, A. & Walther, P. Photonic quantum simulators. Nature Physics 8, 285–291 (2012).
De Domenico, M., Nicosia, V., Arenas, A. & Latora, V. Structural reducibility of multilayer networks. Nature Communications 6, 6864 (2015).
Schieber, T. A. et al. Quantification of network structural dissimilarities. Nature Communications 8, 13928 (2017).
Ambjørn, J., Jurkiewicz, J. & Loll, R. Emergence of a 4D world from causal quantum gravity. Physical Review Letters 93, 131301 (2004).
Levin, M. A. & Wen, X.-G. String-net condensation: a physical mechanism for topological phases. Physical Review B 71, 045110 (2005).
Konopka, T., Markopoulou, F. & Severini, S. Quantum graphity: a model of emergent locality. Physical Review D 77, 104029 (2008).
Rovelli, C. & Speziale, S. Geometry of loop quantum gravity on a graph. Physical Review D 82, 044018 (2010).
Vinokur, V. M. et al. Superinsulator and quantum synchronization. Nature 452, 613–615 (2008).
Emerson, J., Weinstein, Y. S., Saraceno, M., Lloyd, S. & Cory, D. G. Pseudo-random unitary operators for quantum information processing. Science 302, 2098–2100 (2003).
Brown, W. G. & Viola, L. Convergence rates for arbitrary statistical moments of random quantum circuits. Physical Review Letters 104, 250501 (2010).
Bianconi, G. & Barabási, A.-L. Bose–Einstein condensation in complex networks. Physical Review Letters 86, 5632 (2001). This paper discovers that certain complex network models are related to quantum statistics.
Reichardt, J. & Bornholdt, S. Detecting fuzzy community structures in complex networks with a Potts model. Physical Review Letters 93, 218701 (2004).
Garlaschelli, D. & Loffredo, M. I. Generalized Bose–Fermi statistics and structural correlations in weighted networks. Physical Review Letters 102, 038701 (2009).
Dorogovtsev, S. N., Goltsev, A. V. & Mendes, J. F. Critical phenomena in complex networks. Reviews of Modern Physics 80, 1275 (2008).
Newman, M. E. J. The structure and function of complex networks. SIAM Review 45, 167–256 (2003).
Greenberger, D. M., Horne, M. A. & Zeilinger, A. Going beyond Bell's theorem. In Bell's Theorem, Quantum Theory and Conceptions of the Universe, 69–72 (Springer, 1989).
Cardillo, A., Galve, F., Zueco, D. & Gómez-Gardeñes, J. Information sharing in quantum complex networks. Physical Review A 87, 052312 (2013).
Carvacho, G. et al. Experimental violation of local causality in a quantum network. Nature Communications 8, 14775 (2017).
Ambainis, A. Quantum walks and their algorithmic applications. International Journal of Quantum Information 1, 507–518 (2003).
Shenvi, N., Kempe, J. & Whaley, K. B. Quantum random-walk search algorithm. Physical Review A 67, 052307 (2003).
Rossi, L., Torsello, A. & Hancock, E. R. Measuring graph similarity through continuous-time quantum walks and the quantum Jensen-Shannon divergence. Physical Review E 91, 022815 (2015).
Wong, T. G. & Meyer, D. A. Irreconcilable difference between quantum walks and adiabatic quantum computing. arXiv:1603.05423 (2016).
Chakraborty, S., Novo, L., Di Giorgio, S. & Omar, Y. Optimal quantum spatial search on random temporal networks. arXiv:1701.04392 (2017).
Childs, A., Farhi, E. & Gutmann, S. An example of the difference between quantum and classical random walks.
journalQuantum Information Processing volume1, pages35–43 (year2002).Whitfield2010 authorWhitfield, J. D., authorRodríguez-Rosario, C. A. & authorAspuru-Guzik, A. titleQuantum stochastic walks: A generalization of classical random walks and quantum walks. journalPhysical Review A volume81, pages022323 (year2010). <http://arxiv.org/abs/0905.2942 http://link.aps.org/doi/10.1103/PhysRevA.81.022323>. 0905.2942.zimboras2013quantum authorZimboras, Z. et al. titleQuantum transport enhancement by time-reversal symmetry breaking. journalSci. Rep. volume3, pages2361 (year2013).2016PhRvL.116j0501C authorChakraborty, S., authorNovo, L., authorAmbainis, A. & authorOmar, Y. titleSpatial Search by Quantum Walk is Optimal for Almost all Graphs. journalPhysical Review Letters volume116, pages100501 (year2016). 1508.01327.2016arXiv161203281M authorMasuda, N., authorPorter, M. A. & authorLambiotte, R. titleRandom walks and diffusion on networks. journalArXiv e-prints(year2016). 1612.03281.noh2004random authorNoh, J. D. & authorRieger, H. titleRandom walks on complex networks. journalPhysical Review Letters volume92, pages118701 (year2004).burda2009localization authorBurda, Z., authorDuda, J., authorLuck, J. & authorWaclaw, B. titleLocalization of the maximal entropy random walk. journalPhysical Review Letters volume102, pages160602 (year2009).2016PhRvE..93b2304M authorMülken, O., authorDolgushev, M. & authorGaliceanu, M. titleComplex quantum networks: From universal breakdown to optimal transport. journalPhysical Review E volume93, pages022304 (year2016). 1511.00910.cameron2014universal authorCameron, S. et al. titleUniversal state transfer on graphs. journalLinear Algebra and its Applications volume455, pages115–142 (year2014).2016t160600992T authorTödtli, B. et al. titleContinuous-Time Quantum Walks on Directed Bipartite Graphs. journalArXiv e-prints(year2016). 1606.00992.baezbook authorBaez, J. C. & authorBiamonte, J. titleQuantum Techniques for Stochastic Mechanics. journalArXiv e-prints(year2012). 1209.3632.Note1 noteA dumping or teleportation factor is often included in the computation in order to assure the Perron-Frobenius theorem satisfactibility.Garnerone2012google authorGarnerone, S., authorZanardi, P. & authorLidar, D. A. titleAdiabatic quantum algorithm for search engine ranking. journalPhysical Review Lett. volume108, pages230506 (year2012).breuer2007theory authorBreuer, H. & authorPetruccione, F. titleThe Theory of Open Quantum Systems (publisherOUP Oxford, year2007). <https://books.google.com.mt/books?id=DkcJPwAACAAJ>.PhysRevA.91.042108 authorSinkovicz, P., authorKurucz, Z., authorKiss, T. & authorAsbóth, J. K. titleQuantized recurrence time in unital iterated open quantum dynamics. journalPhysical Review A volume91, pages042108 (year2015). <http://link.aps.org/doi/10.1103/PhysRevA.91.042108>.2014PhRvB..90l5138M authorManzano, D. & authorHurtado, P. I. titleSymmetry and the thermodynamics of currents in open quantum systems. journalPhysical Review B volume90, pages125138 (year2014). 1310.7370.szegedy2004quantum authorSzegedy, M. titleQuantum speed-up of markov chain based algorithms. In booktitleFoundations of Computer Science, 2004. Proceedings. 45th Annual IEEE Symposium on, pages32–41 (organizationIEEE, year2004).javarone2013quantum authorJavarone, M. A. & authorArmano, G. titleQuantum–classical transitions in complex networks. journalJournal of Statistical Mechanics volume2013, pagesP04019 (year2013).bianconi2016network authorBianconi, G. & authorRahmede, C. 
titleNetwork geometry with flavor: From complexity to quantum geometry. journalPhysical Review E volume93, pages032315 (year2016).bianconi2015interdisciplinary authorBianconi, G. titleInterdisciplinary and physics challenges of network theory. journalEPL volume111, pages56001 (year2015).pincus1991approximate authorPincus, S. M. titleApproximate entropy as a measure of system complexity. journalProceedings of the National Academy of Sciences volume88, pages2297–2301 (year1991).costa2002multiscale authorCosta, M., authorGoldberger, A. L. & authorPeng, C.-K. titleMultiscale entropy analysis of complex physiologic time series. journalPhysical Review Letters volume89, pages068102 (year2002).braunstein2006laplacian authorBraunstein, S. L., authorGhosh, S. & authorSeverini, S. titleThe laplacian of a graph as a density matrix: a basic combinatorial approach to separability of mixed states. journalAnnals of Combinatorics volume10, pages291–317 (year2006).anand2011shannon authorAnand, K., authorBianconi, G. & authorSeverini, S. titleShannon and von neumann entropy of random networks with heterogeneous expected degree. journalPhysical Review E volume83, pages036109 (year2011).dedomenico2013mathematical authorDe Domenico, M. et al. titleMathematical formulation of multilayer networks. journalPhysical Review X volume3, pages041022 (year2013).boccaletti2014structure authorBoccaletti, S. et al. titleThe structure and dynamics of multilayer networks. journalPhysics Reports volume544, pages1–122 (year2014).fanuel2016magnetic authorFanuel, M., authorAlaiz, C. & authorSuykens, J. titleMagnetic eigenmaps for community detection in directed networks. journalPhysical Review E volume95, pages022302 (year2016).fanuel2016visualization authorFanuel, M., authorAlaíz, C. M., authorÁngela Fernández & authorSuykens, J. A. titleMagnetic eigenmaps for the visualization of directed networks. journalApplied and Computational Harmonic Analysis pages– (year2017). <http://www.sciencedirect.com/science/article/pii/S1063520317300052>.bentley2016multilayer authorBentley, B. et al. titleThe multilayer connectome of caenorhabditis elegans. journalPLOS Computational Biology volume12, pagese1005283 (year2016).de2016mapping authorDe Domenico, M., authorSasai, S. & authorArenas, A. titleMapping multiplex hubs in human functional brain networks. journalFrontiers in Neuroscience volume10, pages326 (year2016).ding2014dynamics authorDing, T. & authorSchloss, P. D. titleDynamics and associations of microbial community types across the human body. journalNature volume509, pages357 (year2014).albert2002statistical authorAlbert, R. & authorBarabási, A.-L. titleStatistical mechanics of complex networks. journalRev. Mod. Phys. volume74, pages47–97 (year2002).caldarelli2002fitness authorCaldarelli, G., authorCapocci, A., authorDe Los Rios, P. & authorMuñoz, M. A. titleScale-free networks from varying vertex intrinsic fitness. journalPhysical Review Lett. volume89, pages258702 (year2002). <http://link.aps.org/doi/10.1103/PhysRevLett.89.258702>.newman2012communities authorNewman, M. E. titleCommunities, modules and large-scale structure in networks. journalNature Physics volume8, pages25–31 (year2012).fortunato2010community authorFortunato, S. titleCommunity detection in graphs. journalPhysics Reports volume486, pages75–174 (year2010).horn2001clustering authorHorn, D. titleClustering via hilbert space. journalPhysica A: Statistical Mechanics and its Applications volume302, pages70 – 79 (year2001). 
<http://www.sciencedirect.com/science/article/pii/S0378437101004423>. noteProc. Int. Workshop on Frontiers in the Physics of Complex Systems.weinstein2009dynamic authorWeinstein, M. & authorHorn, D. titleDynamic quantum clustering: A method for visual exploration of structures in data. journalPhysical Review E volume80, pages066117 (year2009). <http://link.aps.org/doi/10.1103/PhysRevE.80.066117>.wittek2013quantumclustering authorWittek, P. titleHigh-performance dynamic quantum clustering on graphics processors. journalJournal of Computational Physics volume233, pages262 – 271 (year2013). <http://www.sciencedirect.com/science/article/pii/S0021999112005165>.tsomokos2011 authorTsomokos, D. I. titleQuantum walks on complex networks with connection instabilities and community structure. journalPhysical Review A volume83, pages052315 (year2011). <http://link.aps.org/doi/10.1103/PhysRevA.83.052315>.bennett2000quantum authorBennett, C. H. & authorDiVincenzo, D. P. titleQuantum information and computation. journalNature volume404, pages247–255 (year2000).markov2014limits authorMarkov, I. L. titleLimits on fundamental limits to computation. journalNature volume512, pages147–154 (year2014).scheffer2012anticipating authorScheffer, M. et al. titleAnticipating critical transitions. journalScience volume338, pages344–348 (year2012).gao2016universal authorGao, J., authorBarzel, B. & authorBarabási, A.-L. titleUniversal resilience patterns in complex networks. journalNature volume530, pages307–312 (year2016).motter2004cascade authorMotter, A. E. titleCascade control and defense in complex networks. journalPhysical Review Letters volume93, pages098701 (year2004).
http://arxiv.org/abs/1702.08459v4
{ "authors": [ "Jacob Biamonte", "Mauro Faccin", "Manlio De Domenico" ], "categories": [ "quant-ph", "cond-mat.dis-nn", "cs.CY", "cs.SI", "physics.soc-ph" ], "primary_category": "quant-ph", "published": "20170227190003", "title": "Complex Networks from Classical to Quantum" }
Progenitors of binary black hole mergers detected by LIGO

Konstantin Postnov & Alexander Kuranov
Sternberg Astronomical Institute, Moscow M.V. Lomonosov State University, 13, Universitetskij pr., 119234 Moscow, Russia; email: pk@sai.msu.ru

In: The lives and death-throes of massive stars, J.J. Eldridge, ed. (2017)

Possible formation mechanisms of massive close binary black holes that can merge in the Hubble time to produce powerful gravitational wave bursts detected during the advanced LIGO O1 science run are briefly discussed. The pathways include the evolution from field low-metallicity massive binaries, the dynamical formation in globular clusters, and primordial black holes. Low effective black hole spins inferred for the LIGO GW150914 and LVT151012 events are discussed. Population synthesis calculations of the expected spin and chirp mass distributions from the standard field massive binary formation channel are presented for different metallicities (from zero-metal Population III stars up to solar metal abundance). We conclude that merging binary black holes can contain systems from different formation channels, discrimination between which can be made with increasing statistics of mass and spin measurements from ongoing and future gravitational wave observations.

§ INTRODUCTION AND HISTORICAL REMARKS

The epochal discovery of the first gravitational wave source GW150914 from a coalescing binary black hole (BH) system <cit.> not only heralded the beginning of the gravitational wave astronomy era, but also stimulated a wealth of works on fundamental physical and astrophysical aspects of the formation and evolution of binary BHs. The LIGO detection of GW150914 and of the second robust binary BH merging event GW151226 <cit.> enables BH masses and spins before the merging, the luminosity distance to the sources, and the binary BH merging rate in the Universe to be estimated <cit.>. Astrophysical implications of these measurements were discussed, e.g., in <cit.>. This discovery of gravitational waves from coalescing binary BHs was long awaited. Evolution of massive binary systems was elaborated in the 1970s to explain a rich variety of newly discovered galactic X-ray binaries <cit.>. Formation of two relativistic compact remnants (neutron stars (NSs) or black holes) naturally followed from the binary evolution scenario <cit.>. At the dawn of the LIGO Project, Tutukov and Yungelson <cit.> calculated, using the standard assumptions of massive binary evolution, the expected galactic merging rate of binary NSs and BHs. They pointed out that although the galactic merging rate of binary NSs is much larger than that of binary BHs, their detection rates by gravitational-wave interferometers can be comparable due to the strong dependence of the characteristic GW amplitude h_c on the total mass M=M_1+M_2 of the coalescing binaries, h_c ∼ M^5/2. A few years later, independent population synthesis calculations by the Scenario Machine code were reported in a series of papers <cit.>.
They showed that in a wide range of possible BH formation parameters (masses, kick velocities) and under standard assumptions of massive star evolution, the detection rate of binary BH mergings should be much higher than that of binary NSs, and the first LIGO event should most likely be a binary BH merging. Interestingly, the mean BH masses known at that time from dynamical measurements in galactic BH X-ray binaries were about 10 M_⊙, which forced (cautiously) the authors of <cit.> to fix the parameter k_BH=M_BH/M_c, where M_c is the mass of the star before the collapse, around ∼ 0.3 (see Fig. 4 in that paper) in order to produce the chirp mass of coalescing binary BHs around 15 M_⊙. Taking k_BH=1, one immediately obtains BH masses around 30-40 M_⊙, which seemed outrageously high at that time. Starting from the end of the 1990s, various groups have used different population synthesis codes to calculate the merging rates of double compact objects (see especially many papers by the Polish group based on the StarTrack code <cit.>), yielding a wide range of possible BH-BH merging rates (see, e.g., Table 6 in <cit.>). Clearly, the degeneracy of binary evolution and BH formation parameters has been so high <cit.> that only real observations could narrow the wide parameter range.

§ STANDARD SCENARIO OF BINARY BH FORMATION

The standard scenario of double BH formation from field stars is based on the well-recognized evolution of single massive stars <cit.>. To produce a massive BH with M ≃ 10 M_⊙ at the end of evolution, the progenitor star should have a large mass and a low mass-loss rate. The mass-loss rate is strongly dependent on the metallicity, which plays the key role in determining the final mass of the stellar remnant (see <cit.> and N. Yusof's contribution in this conference). The metallicity effects were included in the population synthesis calculations <cit.>, and the most massive BHs were found to be produced by the low-metallicity progenitors. Here early metal-free Population III stars provide an extreme example; see calculations by <cit.>. After the discovery of GW150914, several independent population synthesis calculations were performed to explain the observed masses of binary BH in GW150914 and the inferred binary BH merging rate ∼ 9-240 Gpc^-3 yr^-1 <cit.> <cit.>. In addition to the metallicity that affects the intrinsic evolution of the binary components, the most important uncertainty in the binary evolution is the efficiency of the common envelope (CE) stage, which is required to form a compact double BH binary merging in the Hubble time. The common envelope stage remains a highly debatable issue. For example, in recent hydro simulations <cit.> a low CE efficiency was found, while successful CE calculations were reported by other groups (see, e.g., N. Ivanova's contribution at this conference). Another recent study <cit.> argues that it is possible to reconcile the BH formation rate through the CE channel by taking into account the stability of mass transfer in massive binaries in the Hertzsprung gap stage, which drastically reduces the otherwise predicted overproduction of the binary BH merging rate in some population synthesis calculations. Also, the so-called stable 'isotropic re-emission' mass transfer mode can be realized in high-mass X-ray binaries with massive BHs, thus helping to avoid the merging of the binary system components in the common envelope <cit.>.
This stable mass transfer mode can explain the surprising stability of the kinematic characteristics observed in the galactic microquasar SS433 <cit.>. Of course, many more empirical constraints on, and hydro simulations of, common envelope formation and properties are required, but the formation channel with a common envelope for binary BHs with properties similar to GW150914 remains quite plausible.

§ OTHER SCENARIOS

To avoid the ill-understood common envelope stage, several alternative scenarios of binary BH formation from massive stars were proposed. For example, in short-period massive binary systems chemically homogeneous evolution due to rotational mixing can be realized. The stars remain compact until the core collapse, and a close binary BH system is formed without a common envelope stage <cit.>. In this scenario, a pair of nearly equal massive BHs can be formed with a merging rate comparable to that empirically inferred from the first LIGO observations. Another possible way to form a massive binary BH system is through dynamical interactions in dense stellar systems (globular clusters). This scenario was earlier considered by <cit.>. In the core of a dense globular cluster, stellar-mass BHs form multiple systems, and BH binaries are dynamically ejected from the cluster. This mechanism was shown to be quite efficient in producing 30+30 M_⊙ merging binary BHs <cit.>, and binary BHs formed in this way can provide a substantial fraction of all binary BH mergings in the local Universe <cit.>. Finally, there can be more exotic channels of binary BH formation. For example, primordial black holes (PBHs) formed in the early Universe can form pairs which could be efficient sources of gravitational waves <cit.>. After the discovery of GW150914, interest in binary PBHs has been renewed <cit.>. Stellar-mass PBHs can form a substantial part of dark matter in the Universe <cit.>. The PBHs formed at the radiation-dominated stage can form pairs like GW150914 with a merging rate compatible with the empirical LIGO results, while being only a small fraction of all dark matter <cit.>. A different class of PBHs with a universal log-normal mass spectrum, produced in the frame of a modified Affleck-Dine supersymmetric baryogenesis <cit.>, was shown to be able to match the observed properties of GW150914 <cit.>.

§ LOW SPINS OF BH IN GW150914 AND LVT151012 EVENTS

In the framework of general relativity, a BH is fully characterized by its mass M and dimensionless angular momentum a=J/M^2 (in geometrical units G=c=1); the possible BH electric charge is negligible in real astrophysical conditions. The LIGO observations enable measurements of both masses of the coalescing BH components, M_1 and M_2, and of the chirp mass that determines the strength of the gravitational wave signal, ℳ=(M_1 M_2)^3/5/(M_1+M_2)^1/5. From the analysis of waveforms at the inspiral stage, individual BH spins before the merging are poorly constrained, but their mass-weighted total angular momentum parallel to the orbital angular momentum, χ_eff, can be estimated with good accuracy <cit.> [The parameter χ_eff=(M_1χ_1+M_2χ_2)/M, where χ_i=a_i cosθ_i, with θ_i being the angle between the angular momentum of the i-th BH and the orbital angular momentum of the binary system.]. The O1 LIGO detections suggest that the most massive GW150914 and (less certainly) LVT151012 have very low χ_eff ≃ 0. This observational fact has important evolutionary implications (see <cit.>).
It suggests a very slow rotation of the BH progenitors, which by itself strongly constrains, for example, the chemically homogeneous pathways mentioned above, in which tidally induced rotation of the close binary components plays the key role. Massive stars are observed to be rapid rotators. No significant angular momentum loss is expected during their evolution with the low mass-loss rate by stellar wind and at the pre-collapse stage, as required to produce massive BHs <cit.>. Note that low effective spin values can imply either small intrinsic BH spins, a ∼ 0, or unusual orientations of the BH spins with respect to the orbital angular momentum at the inspiral stage. The last case can well be reconciled with the dynamical formation scenario <cit.>, where the BH spins are not expected to be correlated with the orbital angular momentum. In the PBH scenario, BH spins must be zero, as there is no vorticity in primordial cosmological perturbations. Therefore, the mass-spin distribution of BHs can serve as a sensitive tool to discriminate between different astrophysical formation channels of coalescing massive binary BHs. To estimate the spin distribution of BH remnants in binaries, it is necessary to know how to treat the spin evolution of the stellar core, which is ill-understood and strongly model-dependent. One possible approach is to match theoretical predictions of the core rotation with the observed period distribution of young neutron stars observed as radio pulsars <cit.>. Initially, a star is assumed to rotate rigidly, but after the main sequence the star can be separated into two parts, the core and the envelope, with some effective coupling between them. The coupling between the core and envelope rotation can be mediated by magnetic forces, internal gravity waves (see <cit.> and J. Fuller's talk at this conference), etc. The validity of such an approach was checked by direct MESA calculations of the rotational evolution of a 15 M_⊙ star <cit.>. It was found that the observed period distribution of young pulsars can be reproduced if the effective coupling time between the core and envelope is τ_c=5× 10^5 years (see Fig. 1 in <cit.>). Below we shall assume that this time is also applicable to the evolution of very massive stars leaving behind BH remnants. Each angular momentum of the main-sequence components of the initial binary is assumed to be arbitrarily oriented in space, its absolute value being connected to the initial stellar mass using the empirical relation between the equatorial rotation velocity of a star and its mass, v_rot=330 M_0^3.3/(15+M_0^3.45) km s^-1 (here M_0 is in solar units). It was assumed that the rotation of the stellar envelope gets synchronized with the orbital motion on the characteristic synchronization time t_sync, and the process of tidal synchronization was treated as in the BSE code <cit.>. Due to the intrinsic misalignment of the spin vectors of the stars with the binary orbital angular momentum L̂, we separately treated the core-envelope coupling for the spin components parallel and perpendicular to L̂ (a minimal numerical illustration of the empirical ingredients used here is sketched below).
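As a rough numerical illustration of the two empirical ingredients just quoted, the χ_eff parameter and the v_rot(M_0) relation, the following Python snippet evaluates both; the function names and the example values are ours and are not part of the population synthesis code.

    import numpy as np

    def v_rot(M0):
        """Empirical equatorial rotation velocity (km/s) of a main-sequence
        star of initial mass M0 (solar units):
        v_rot = 330 M0^3.3 / (15 + M0^3.45)."""
        return 330.0 * M0**3.3 / (15.0 + M0**3.45)

    def chi_eff(M1, M2, a1, a2, theta1, theta2):
        """Mass-weighted effective spin chi_eff = (M1 chi1 + M2 chi2)/(M1+M2),
        with chi_i = a_i cos(theta_i), theta_i the spin-orbit misalignment."""
        return (M1 * a1 * np.cos(theta1) + M2 * a2 * np.cos(theta2)) / (M1 + M2)

    # Example with hypothetical values: a GW150914-like pair with modest spins
    # strongly misaligned with the orbital angular momentum gives chi_eff near 0.
    print(v_rot(36.0))                                          # ~190 km/s
    print(chi_eff(36.0, 29.0, 0.3, 0.3, np.pi / 3, np.pi / 2))  # ~0.08

The example shows how either intrinsically small spins or large misalignment angles drive χ_eff towards zero, which is the degeneracy discussed above.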
At evolutionary stages prior to the compact remnant formation, for each binary component we assumed that due to tidal interactions the parallel component of the stellar envelope spin J_|| gets synchronized with the orbital motion on the characteristic time t_sync, while the normal spin component J_⊥ of the stellar envelope decreases due to the tidal interaction in the binary system on the same characteristic time scale, which leads to the secular evolution of the spin-orbit misalignment. The parallel component of the envelope spin also evolves due to the core-envelope interaction with the characteristic time τ_c. These processes were added to the updated BSE population synthesis code. With these additions, the population synthesis of typically 100000 binaries per run has been carried out for different parameters of binary evolution (the common envelope stage efficiency α_CE, stellar metallicities, etc.). No generic BH kick was assumed. The results of calculations of BH spin distributions for different stellar metallicities and for the standard CE efficiency parameter α_CE=1 are shown in Fig. <ref>. Evolution of zero-metallicity (primordial Population III) stars was parametrized as in <cit.>. Fig. <ref> shows the plot of coalescing binary BHs on the ℳ-χ_eff plane for different metallicities. BH spin misalignments with the orbital angular momentum in coalescing binary BHs for different stellar metallicities are presented in Fig. <ref>. A detailed analysis of these simulations will be published elsewhere (Postnov & Kuranov, in preparation), but the main conclusions can be drawn from Figs. 1-3. It is seen (expectedly) from Fig. 1 that the effective spin χ_eff of binary BHs from field massive stars (the standard formation scenario) is distributed in a wide range, but the ℳ-χ_eff plot (Fig. 2) suggests that large chirp masses can hardly have χ_eff ≃ 0. This (model) result can signal a potential difficulty in explaining the most massive merging BH binaries by this formation channel only. Fig. 3 suggests that, even in the absence of BH kicks as assumed in the present calculations, the BH spin misalignments can be quite high even for field binaries.

§ CONCLUSIONS

Presently, there are different astrophysical pathways of producing massive binary BHs that merge in the Hubble time. They can be formed from low-metallicity massive field stars or primordial Pop III remnants, can be results of dynamical evolution in dense stellar clusters, or can even be primordial black holes. It is not excluded that all channels contribute to the observed binary BH population. For example, the discovery of very massive Schwarzschild BHs would be difficult to reconcile with the standard massive binary evolution, but can be naturally explained in the PBH scenario <cit.>. As of the time of writing, another two event candidates were reported by the LIGO collaboration from the analysis of 12 days of joint operation of the two LIGO interferometers during the O2 run. With the current LIGO sensitivity, the detection horizon of binary BHs with masses around 30 M_⊙ reaches 700 Mpc. So far the statistics of the binary BH merging rate as a function of BH mass, as inferred from the three reported LIGO O1 events, is consistent with a power-law dependence, dR/dM ∼ M^-2.5 <cit.>, which does not contradict the general power-law behavior of the stellar mass function. Clearly, more statistics of BH masses and spins inferred from binary BH mergings is required to distinguish between the possible binary BH populations which can exist in the Universe.

Acknowledgements.
KP acknowledges the support from RSF grant 16-12-10519.

[Abadie et al. (2010)] Abadie, J. et al. 2010, Classical and Quantum Gravity, 27, 173001
[Abbott et al. (2016a)] Abbott, B.P. et al. 2016a, ApJ, 818, L22
[Abbott et al. (2016b)] Abbott, B.P. et al. 2016b, Physical Review X, 6, 041015
[Abbott et al. (2016c)] Abbott, B.P. et al. 2016c, Physical Review Letters, 116, 241103
[Abbott et al. (2016d)] Abbott, B.P. et al. 2016d, Physical Review Letters, 116, 061102
[Abbott et al. (2016e)] Abbott, B.P. et al. 2016e, ApJ, 833, L1
[Belczynski et al. (2016)] Belczynski, K., Holz, D.E., Bulik, T., & O'Shaughnessy, R. 2016, Nature, 534, 512
[Belczynski et al. (2002)] Belczynski, K., Kalogera, V., & Bulik, T. 2002, ApJ, 572, 407
[Bird et al. (2016)] Bird, S. et al. 2016, Physical Review Letters, 116, 201301
[Blinnikov et al. (2016)] Blinnikov, S., Dolgov, A., Porayko, N.K., & Postnov, K. 2016, JCAP, 11, 036
[Carr et al. (2016)] Carr, B., Kühnel, F., & Sandstad, M. 2016, Phys. Rev. D, 94, 083504
[Davydov et al. (2008)] Davydov, V.V., Esipov, V.F., & Cherepashchuk, A.M. 2008, Astronomy Reports, 52, 487
[de Mink & Mandel (2016)] de Mink, S.E. & Mandel, I. 2016, MNRAS, 460, 3545
[Dolgov & Silk (1993)] Dolgov, A. & Silk, J. 1993, Phys. Rev. D, 47, 4244
[Dolgov et al. (2009)] Dolgov, A.D., Kawasaki, M., & Kevlishvili, N. 2009, Nuclear Physics B, 807, 229
[Dominik et al. (2012)] Dominik, M. et al. 2012, ApJ, 759, 52
[Dominik et al. (2013)] Dominik, M. et al. 2013, ApJ, 779, 72
[Eldridge & Stanway (2016)] Eldridge, J.J. & Stanway, E.R. 2016, MNRAS, 462, 3302
[Eroshenko (2016)] Eroshenko, Y.N. 2016, ArXiv e-prints
[Flannery & van den Heuvel (1975)] Flannery, B.P. & van den Heuvel, E.P.J. 1975, A&A, 39, 61
[Fuller et al. (2015)] Fuller, J., Cantiello, M., Lecoanet, D., & Quataert, E. 2015, ApJ, 810, 101
[Hartwig et al. (2016)] Hartwig, T. et al. 2016, MNRAS, 460, L74
[Hotokezaka & Piran (2017)] Hotokezaka, K. & Piran, T. 2017, ArXiv e-prints
[Hurley et al. (2002)] Hurley, J.R., Tout, C.A., & Pols, O.R. 2002, MNRAS, 329, 897
[Kinugawa et al. (2014)] Kinugawa, T., Inayoshi, K., Hotokezaka, K., Nakauchi, D., & Nakamura, T. 2014, MNRAS, 442, 2963
[Kushnir et al. (2016)] Kushnir, D., Zaldarriaga, M., Kollmeier, J.A., & Waldman, R. 2016, MNRAS, 462, 844
[Lipunov et al. (1997a)] Lipunov, V.M., Postnov, K.A., & Prokhorov, M.E. 1997a, Astronomy Letters, 23, 492
[Lipunov et al. (1997b)] Lipunov, V.M., Postnov, K.A., & Prokhorov, M.E. 1997b, New Astronomy, 2, 43
[Lipunov et al. (1997c)] Lipunov, V.M., Postnov, K.A., & Prokhorov, M.E. 1997c, MNRAS, 288, 245
[Lipunov et al. (2017)] Lipunov, V.M. et al. 2017, New Astronomy, 51, 122
[Mandel & de Mink (2016)] Mandel, I. & de Mink, S.E. 2016, MNRAS, 458, 2634
[Marchant et al. (2016)] Marchant, P., Langer, N., Podsiadlowski, P., Tauris, T.M., & Moriya, T.J. 2016, A&A, 588, A50
[Nakamura et al. (1997)] Nakamura, T., Sasaki, M., Tanaka, T., & Thorne, K.S. 1997, ApJ, 487, L139
[Ohlmann et al. (2016)] Ohlmann, S.T., Röpke, F.K., Pakmor, R., & Springel, V. 2016, ApJ, 816, L9
[Pavlovskii et al. (2017)] Pavlovskii, K., Ivanova, N., Belczynski, K., & Van, K.X. 2017, MNRAS, 465, 2092
[Postnov et al. (2016)] Postnov, K.A., Kuranov, A.G., Kolesnikov, D.A., Popov, S.B., & Porayko, N.K. 2016, MNRAS, 463, 1642
[Postnov & Yungelson (2014)] Postnov, K.A. & Yungelson, L.R. 2014, Living Reviews in Relativity, 17, 3
[Rodriguez et al. (2016a)] Rodriguez, C.L., Chatterjee, S., & Rasio, F.A. 2016a, Phys. Rev. D, 93, 084029
[Rodriguez et al. (2016b)] Rodriguez, C.L., Haster, C.J., Chatterjee, S., Kalogera, V., & Rasio, F.A. 2016b, ApJ, 824, L8
[Sasaki et al. (2016)] Sasaki, M., Suyama, T., Tanaka, T., & Yokoyama, S. 2016, Physical Review Letters, 117, 061101
[Sigurdsson & Hernquist (1993)] Sigurdsson, S. & Hernquist, L. 1993, Nature, 364, 423
[Spera et al. (2015)] Spera, M., Mapelli, M., & Bressan, A. 2015, MNRAS, 451, 4086
[The LIGO Scientific Collaboration et al. (2016)] The LIGO Scientific Collaboration et al. 2016, ArXiv e-prints
[Tutukov & Yungelson (1973)] Tutukov, A. & Yungelson, L. 1973, Nauchnye Informatsii, 27, 70
[Tutukov et al. (1973)] Tutukov, A., Yungelson, L., & Klayman, A. 1973, Nauchnye Informatsii, 27, 3
[Tutukov & Yungelson (1993)] Tutukov, A.V. & Yungelson, L.R. 1993, MNRAS, 260, 675
[van den Heuvel & Heise (1972)] van den Heuvel, E.P.J. & Heise, J. 1972, Nature Physical Science, 239, 67
[van den Heuvel et al. (2017)] van den Heuvel, E.P.J., Portegies Zwart, S.F., & de Mink, S.E. 2017, ArXiv e-prints
[Woosley et al. (2002)] Woosley, S.E., Heger, A., & Weaver, T.A. 2002, Reviews of Modern Physics, 74, 1015
http://arxiv.org/abs/1702.08056v1
{ "authors": [ "Konstantin Postnov", "Alexandre Kuranov" ], "categories": [ "astro-ph.HE" ], "primary_category": "astro-ph.HE", "published": "20170226172647", "title": "Progenitors of binary black hole mergers detected by LIGO" }
takehisa.hasegawa.sci@vc.ibaraki.ac.jp
Department of Mathematics and Informatics, Ibaraki University, 2-1-1 Bunkyo, Mito 310-8512, Japan

nemoto@statphys.sci.hokudai.ac.jp
Department of Physics, Hokkaido University, Kita 10 Nishi 8, Kita-ku, Sapporo, Hokkaido 060-0810, Japan

This study focuses on investigating the manner in which a prompt quarantine measure suppresses epidemics in networks. A simple and ideal quarantine measure is considered in which an individual is detected with a probability immediately after it becomes infected, and the detected one and its neighbors are promptly isolated. The efficiency of this quarantine in suppressing a susceptible-infected-removed (SIR) model is tested in random graphs and uncorrelated scale-free networks. Monte Carlo simulations are used to show that the prompt quarantine measure outperforms random and acquaintance preventive vaccination schemes in terms of reducing the number of infected individuals. The epidemic threshold for the SIR model is analytically derived under the quarantine measure, and the theoretical findings indicate that prompt executions of quarantines are highly effective in containing epidemics. Even if infected individuals are detected with a very low probability, the SIR model under a prompt quarantine measure has finite epidemic thresholds in fat-tailed scale-free networks, in which an infected individual can always cause an outbreak of a finite relative size without any measure. The numerical simulations also demonstrate that the present quarantine measure is effective in suppressing epidemics in real networks.

PACS numbers: 89.75.Hc, 87.23.Ge, 05.70.Fh, 64.60.aq

Efficiency of prompt quarantine measures on a susceptible-infected-removed model in networks
Takehisa Hasegawa and Koji Nemoto
December 30, 2023
============================================================================================

§ INTRODUCTION

Recently, several studies were devoted to examining the spread of epidemics on networks, in which nodes represent individuals and edges represent their social or sexual relationships through which an infectious disease spreads (see the review <cit.> and references therein). Theoretical studies of epidemiological models demonstrated that infectious diseases can spread very easily in highly heterogeneous networks <cit.>. Specifically, two fundamental epidemic models, namely the susceptible-infected-removed (SIR) model <cit.> and the susceptible-infected-susceptible model <cit.>, exhibit outbreaks of finite relative sizes with an infinitesimal infection rate if the underlying network is fat-tailed scale-free such that the degree distribution obeys p_k ∝ k^-γ with γ ≤ 3 <cit.>.

In order to contain epidemics, several control measures were proposed that utilize network information. Epidemics can be suppressed by effective vaccination schemes such as the target vaccination <cit.>, the acquaintance vaccination <cit.>, the PageRank-based vaccination <cit.>, and the graph partitioning vaccination <cit.>. Theoretically, the above vaccination schemes succeed in containing epidemics even when a network is highly heterogeneous, although these schemes are preventive measures in which it is necessary to complete vaccinations prior to the appearance of an infectious disease in a network. With respect to postoutbreak strategies, previous studies examined local control measures in which susceptible individuals who were in contact with an infected individual are vaccinated or isolated <cit.>. Dybiec et al.
<cit.> considered a spatial epidemic model in a situation in which individuals can be infectious prior to exhibiting symptoms (and therefore prior to detection), and a local control measure is probabilistically applied in a neighborhood centered around a detected infectious individual. The results indicated the optimal radius of such a control neighborhood for containing epidemics in terms of the economic costs associated with disease and treatment. Takeuchi and Yamamoto <cit.> studied a ring vaccination in which susceptible individuals who came in contact with infected ones were probabilistically vaccinated. The findings revealed that the ring vaccination scheme reduced the infection rate, and the number of vaccinated nodes became considerably smaller than in the preventive strategies. However, the basic reproduction number (and thus the epidemic threshold) remained equal to that of random preventive vaccination, which fails to contain epidemics in a highly heterogeneous network unless almost all individuals are vaccinated. There are also studies investigating dynamic reactions of individuals to the spread of epidemics <cit.>, such as behavioral responses of individuals by reducing their contact rates <cit.> based on the number of infected neighbors, or by rewiring connections (i.e., disconnecting their connections to infected neighbors and reconnecting to others) <cit.>.

In order to clarify the extent to which an ideal quarantine measure suppresses epidemics, the present study considers a simple case in which an individual is detected with a probability immediately after it becomes infected, and the detected one and its neighbors are promptly quarantined. The efficiency of the prompt quarantine measure in suppressing SIR epidemics in typical networks is numerically and analytically investigated in terms of the mean outbreak size, the epidemic threshold, and the occurrence probability of global outbreaks. The prompt quarantine measure is highly effective in containing epidemics, and it can theoretically eradicate epidemics in highly heterogeneous networks even when infected individuals are detected with a very small probability. The numerical simulations also indicate that the quarantine measure is effective in real networks.

§ MODEL

A discrete-time SIR model in a network is considered. For a given network with N nodes, each node is in one of the following three states: susceptible (S), infected (I), or removed (R). Any S node can be infected by contact with adjacent I nodes. An I node infects each of its S neighbors independently with probability T and then spontaneously becomes R. A node that changes to the state R loses its capability to infect other nodes and does not change its state any further. The dynamics of the whole system is as follows (a minimal simulation sketch follows the steps):

(1) Randomly select a node as a seed. As an initial configuration, all nodes except the seed are set to S, and the seed is set to I.

(2) Randomly select an I node i. Compile a new list of the S neighbors of node i. Randomly select a node from the list and change its state from S to I with probability T. Repeat this procedure until the list is empty, and then change the state of node i to R.

(3) Continue step (2) until I nodes cease to exist; that is, each node belongs to either the S or R state in a final configuration.
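The following Python code is our own minimal illustration of steps (1)-(3) for a single outbreak; the dictionary-of-lists adjacency representation and all names are our choices, not part of the paper's simulation code.

    import random

    def sir_outbreak(adj, T, rng=random.Random(1)):
        """One realization of steps (1)-(3). adj maps each node to a list
        of its neighbors; T is the transmission probability. Returns the
        set of R nodes in the final configuration."""
        state = {v: 'S' for v in adj}
        seed = rng.choice(list(adj))        # step (1): random seed
        state[seed] = 'I'
        infected = [seed]
        while infected:                     # step (3): until no I nodes remain
            i = infected.pop(rng.randrange(len(infected)))  # step (2): random I node
            s_neighbors = [j for j in adj[i] if state[j] == 'S']
            rng.shuffle(s_neighbors)        # each S neighbor is tried once
            for j in s_neighbors:
                if rng.random() < T:
                    state[j] = 'I'
                    infected.append(j)
            state[i] = 'R'
        return {v for v in adj if state[v] == 'R'}

Averaging len(sir_outbreak(adj, T))/N over many runs and graph realizations estimates the order parameter r introduced next.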
The SIR model placed on a network has an epidemic threshold T_c such that an epidemic commencing from a seed terminates at an early stage for T<T_c, whereas a seed can cause a global outbreak (an outbreak of a finite relative size) for T>T_c. The order parameter r, defined as the mean fraction of R nodes in final configurations, satisfies r=0 for T ≤ T_c and r>0 for T > T_c in the limit N →∞. The epidemic threshold depends on the structure of the underlying network. For uncorrelated networks with degree distribution p_k, the local tree approximation gives T_c = ⟨ k ⟩/⟨ k(k-1) ⟩ <cit.>, where ⟨·⟩ represents the average of a quantity weighted by p_k. This indicates that a global outbreak occurs considerably more easily on heterogeneous networks than on homogeneous ones: T_c=0 for fat-tailed scale-free networks (SFNs) with p_k ∝ k^-γ and 2 < γ ≤ 3, whereas T_c = 1/⟨ k ⟩ > 0 for random graphs (RGs), whose degree distribution obeys p_k ≈ ⟨ k ⟩^k e^-⟨ k ⟩/k! with the same mean degree as that of the SFNs.

Next, a prompt quarantine measure is introduced into the SIR model. The quarantine measure assumes that a node can be detected (for example, by public health authorities) with a detection probability f immediately after it becomes infected, and that the detected node and its neighbors (except nodes already removed or quarantined) are promptly isolated. The detected and quarantined nodes lose the capability to infect others and to be further infected. It is also assumed that nodes already infected are cured by appropriate treatments when they are isolated. In order to incorporate this quarantine measure, an extended SIR model is considered by introducing the following additional states: detected (D) and quarantined (Q). The complete dynamics is modified by replacing step (2) with the following step (2'); a simulation sketch of this step is given after the model description.

(2') Randomly select an I node i. Compile a new list of the S neighbors of node i. Randomly select a node j from the list (Fig. <ref>(a)). With probability T, the disease is transmitted from node i to node j, i.e., the state of j is changed to I (Fig. <ref>(b)). Immediately after that, change the state of j to D with probability f (Fig. <ref>(c)). If node j becomes D, then change the state of its S and I neighbors to Q (Fig. <ref>(d)) and go to step (3). If node j is not D, repeat the procedure until the list is empty (Fig. <ref>(e)), and subsequently change the state of node i to R.

It should be noted that an I node attempts to infect each of its S neighbors, but this process stops immediately when one of those neighbors becomes D.
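Under the same conventions as the sketch above, the modified dynamics with step (2') can be written as follows; the handling of the infected list is our own bookkeeping choice for this illustration. In particular, I nodes quarantined while waiting for their turn must be skipped when drawn.

    def sir_quarantine_outbreak(adj, T, f, rng=random.Random(1)):
        """One realization of the SIR dynamics with step (2'): a newly
        infected node is detected with probability f, whereupon it becomes
        D and its S and I neighbors become Q. Returns the final states."""
        state = {v: 'S' for v in adj}
        seed = rng.choice(list(adj))
        state[seed] = 'I'
        infected = [seed]
        while infected:
            i = infected.pop(rng.randrange(len(infected)))
            if state[i] != 'I':            # i was quarantined in the meantime
                continue
            s_neighbors = [j for j in adj[i] if state[j] == 'S']
            rng.shuffle(s_neighbors)
            detected = False
            for j in s_neighbors:
                if rng.random() < T:       # transmission i -> j
                    if rng.random() < f:   # j is promptly detected
                        state[j] = 'D'
                        for q in adj[j]:   # isolate j's S and I neighbors,
                            if state[q] in ('S', 'I'):   # including i itself
                                state[q] = 'Q'
                        detected = True
                        break              # i's remaining transmissions stop
                    state[j] = 'I'
                    infected.append(j)
            if not detected:
                state[i] = 'R'
        return state                       # r counts only the R nodes

Note that only undetected infection chains produce R nodes, which is why the order parameter r excludes D nodes and Q nodes that had already been infected, as discussed below.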
§ RESULTS

§.§ Order Parameter

To test the efficiency of the quarantine measure in suppressing epidemics, Monte Carlo simulations are performed for the SIR model with the quarantine measure in the following two typical networks: the uncorrelated SFNs with p_k ∝ k^-2.7 (k ≥ k_min=2) that are realized by the configuration model <cit.>, and the RGs with the same mean degree as the SFNs, i.e., ⟨ k ⟩ ≈ 3.844. The number of nodes is N=10^5. The average of quantities at a given f and T is taken over 10^3 trials × 10^2 graph realizations. The detection probability f is set to f=0.01 and 0.2. Similar simulations are also executed without a control measure, with a random vaccination scheme, and with the acquaintance vaccination scheme. In the random vaccination scheme, a fraction of nodes to be vaccinated are randomly selected. In the acquaintance vaccination scheme, a random neighbor of a random node is repeatedly selected for vaccination. In both schemes, nodes are vaccinated prior to the start of an outbreak, and the vaccinated nodes possess perfect immunity such that they never change their state. The fraction of vaccinated nodes is parametrized by f to compare the quarantine and the vaccinations; note, however, that the actual fraction of D and Q nodes under the quarantine measure does not correspond to f.

Figure <ref> plots the mean fraction of R nodes, r, as a function of T. For the RGs (Fig. <ref>(a)), the quarantine measure outperforms the vaccination schemes in terms of reducing the number of R nodes. The same holds for the SFNs (Fig. <ref>(b)). In the SFNs, there are hubs with numerous neighbors, through which many chances of becoming infected and of infecting other nodes exist. Under the quarantine measure, hubs cannot leverage their spreading abilities because they are easily isolated: a hub is quarantined as soon as just one of its numerous neighbors is detected. The fraction of nodes infected at least once may be adopted as the order parameter for the quarantine measure instead of the fraction of R nodes; the difference is that the order parameter r does not include the D nodes and those Q nodes that were already infected when they were isolated. Nevertheless, the superiority of the quarantine measure is almost unchanged even when such an order parameter is adopted (not shown).

Figure <ref>(a) plots the f dependence of r for the SFNs when T is large (T=0.5 and 1.0). The quarantine measure succeeds in reducing the outbreak size when compared with the other vaccination schemes. Figure <ref>(b) plots the mean number of isolated nodes (nodes in the D or Q state) as a function of f. When f is not too small, the number of isolated nodes is smaller than the number of vaccinated nodes because epidemics can be detected at an early stage and eradicated by isolations. Specifically, the quarantine measure can contain epidemics even with T=1.0 if f>f_c ≃ 0.4314, where f_c is given by Eq. (<ref>) as derived below.

§.§ Epidemic Threshold, Occurrence Probability of Global Outbreaks, and Phase Diagram

The epidemic threshold and the occurrence probability of global outbreaks are derived by using a generating function formalism <cit.>. An infinitely large uncorrelated network with degree distribution p_k is assumed. The generating function G_0(x) for the degree distribution p_k is defined as

G_0(x) = ∑_k=k_min^∞ p_k x^k.

Consider a node reached by following a randomly selected edge. This node has k-1 other neighbors, termed the excess degree, with probability q_k-1 = k p_k/⟨ k ⟩. The generating function G_1(x) for the excess degree distribution q_k is given as

G_1(x) = ∑_k=k_min^∞ q_k-1 x^k-1 = ∑_k=k_min^∞ (k p_k/⟨ k ⟩) x^k-1.

Next, consider an early stage of an outbreak under the quarantine measure. When an I node is adjacent to an S neighbor, the state of the neighbor remains S with probability 1-T, becomes I with probability (1-f)T, and becomes D with probability fT. An I node with k neighbors is changed to Q, and its subsequent transmissions are not performed, when one of its neighbors becomes D. During transmissions between an I node and its k S neighbors, the probability that the k'-th neighbor (1 ≤ k' ≤ k) becomes D is fT(1-T+(1-f)T)^k'-1, and the probability that no neighbors become D is (1-T+(1-f)T)^k.
Therefore, the generating function F_0(x) for the probability distribution of the number of newly infected neighbors of a randomly chosen I node is

F_0(x) = ∑_k=k_min^∞ p_k [ ∑_k'=1^k fT (1-T+(1-f)Tx)^{k'-1} + (1-T+(1-f)Tx)^k ]
       = fT [1-G_0(1-T+(1-f)Tx)]/[1-(1-T+(1-f)Tx)] + G_0(1-T+(1-f)Tx).

Similarly, the generating function F_1(x) for the probability distribution of the number of newly infected neighbors of an I node reached by following a randomly chosen edge is

F_1(x) = ∑_k=k_min^∞ (k p_k/⟨ k ⟩) [ ∑_k'=1^{k-1} fT (1-T+(1-f)Tx)^{k'-1} + (1-T+(1-f)Tx)^{k-1} ]
       = fT [1-G_1(1-T+(1-f)Tx)]/[1-(1-T+(1-f)Tx)] + G_1(1-T+(1-f)Tx).

The infections spread only if the mean offspring number F_1'(1) exceeds one, and thus the epidemic threshold T_c(f) is given by the condition

F_1'(1)=1, i.e., (1-f)/f [1-G_1(1-fT_c)] = 1.

In the limit f → 0, Eq. (<ref>) reduces to the known result for the original SIR model,

T_c(0) = 1/G_1'(1) = ⟨ k ⟩/⟨ k(k-1) ⟩.

The probability that a seed induces a global outbreak for T>T_c(f) is derived as follows. Let P_s denote the probability that a seed induces an outbreak in which s nodes were once infected, and Q_s the probability that a node infected by another node causes the infections of s nodes. The generating functions for P_s and Q_s, H_0(x)=∑_s P_s x^s and H_1(x)=∑_s Q_s x^s, satisfy the recursive relations

H_0(x) = x F_0(H_1(x)) and H_1(x) = x F_1(H_1(x)).

Furthermore, H_0(1)=∑_s P_s is the probability that an epidemic beginning from a seed terminates with finite infections, so that the occurrence probability of a global outbreak is expressed as

1-H_0(1) = 1-F_0(v),

where v is the solution of v=F_1(v).

To check the aforementioned estimate, the N dependence of the order parameter r is considered for the RG (Fig. <ref>(a)) and the SFN (Fig. <ref>(b)). Monte Carlo simulations confirm that r of N nodes approaches zero for T<T_c with increasing N. Figure <ref> plots the probability of global outbreaks given by Eq. (<ref>). In the Monte Carlo simulations, the fraction of samples in which the fraction of R nodes exceeds 1% is regarded as the occurrence probability of global outbreaks. For both the RGs and the SFNs, the analytical results coincide well with the numerical results.

Evaluating Eq. (<ref>) yields the phase boundary in the (f,T) plane, as shown in Fig. <ref>(a) for the RG and Fig. <ref>(b) for the SFN, respectively. The present quarantine measure is highly effective in increasing the epidemic threshold. Specifically, the quarantine measure increases T_c(f) in the fat-tailed SFNs from zero to a positive value even when the detection probability is small. Expanding Eq. (<ref>) for f ≪ 1 (as shown in Appendix <ref>) results in T_c(f) of uncorrelated SFNs with p_k ∝ k^-γ (k ≥ k_min):

T_c(f) ≃ α_γ f^{(3-γ)/(γ-2)}       for 2<γ<3,
T_c(f) ≃ α_3 |log f|^{-1}          for γ=3,
T_c(f) ≃ T_c(0) + β'_γ f^{γ-3}     for 3<γ<4,
T_c(f) ≃ T_c(0) + β_4 f|log f|     for γ=4,
T_c(f) ≃ T_c(0) + β_γ f            for 4<γ,

where α_γ and β_γ (β'_γ) denote constants that depend on γ, and T_c(0) is given by Eq. (<ref>). Equation (<ref>) shows that T_c(f)>0 if f>0 even in the fat-tailed SFNs with γ ≤ 3, and that the deviation T_c(f)-T_c(0) for small f ≪ 1 obeys a power law of f whose exponent depends on the degree exponent γ of the underlying network.
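As a numerical illustration of the threshold condition, the following sketch solves (1-f)/f [1-G_1(1-fT_c)] = 1 by root bracketing for a Poisson degree distribution, for which G_1(x) = e^{⟨k⟩(x-1)} in closed form (an RG stand-in for the SFN case; the function names and bracketing tolerances are our choices).

    import numpy as np
    from scipy.optimize import brentq

    def G1_poisson(x, kmean):
        """G_1(x) = exp(kmean (x-1)) for a Poisson (RG) degree distribution."""
        return np.exp(kmean * (x - 1.0))

    def Tc_quarantine(f, kmean):
        """Solve (1-f)/f * (1 - G_1(1 - f*T)) = 1 for T in (0, 1].
        Returns inf when even T=1 cannot sustain an outbreak (f > f_c)."""
        g = lambda T: (1.0 - f) / f * (1.0 - G1_poisson(1.0 - f * T, kmean)) - 1.0
        return brentq(g, 1e-12, 1.0) if g(1.0) > 0.0 else np.inf

    kmean = 3.844                      # mean degree used in the simulations
    print(Tc_quarantine(1e-6, kmean))  # ~0.260 = 1/<k>, recovering T_c(0)
    print(Tc_quarantine(0.2, kmean))   # ~0.374, threshold raised by quarantine

    # Detection probability above which epidemics are contained even at T=1:
    f_c = brentq(lambda f: (1 - f) / f * (1 - G1_poisson(1 - f, kmean)) - 1,
                 1e-6, 0.999)
    print(f_c)                         # ~0.45 for <k> = 3.844

The f → 0 limit recovers T_c(0)=1/⟨k⟩ of the RG, and the f_c computed in the last lines is the RG analog of the f_c ≃ 0.4314 quoted above for the SFN.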
§.§ Case of Real Networks

The above numerical and analytical results on the efficiency of the present quarantine measure were obtained for uncorrelated networks. It should be noted that uncorrelated networks do not possess certain important properties of realistic contact networks, such as a high clustering coefficient, assortativity, and community structure. However, prompt isolations can effectively contain epidemics in more realistic networks as well. Figures <ref>(a) and <ref>(b) show the Monte Carlo results for two real networks: the sexual contact network between Brazilian prostitutes and sex buyers <cit.>, and the friendship network of Gowalla users (Gowalla is a location-based social networking website where users share their locations by checking in) <cit.>. Since the data of the sexual contact network collected by Rocha et al. <cit.> constitute a time-ordered list, we consider a time-integrated network, where multiple edges between a node pair are accepted. The mean outbreak size is effectively reduced by the quarantine measure when compared with those of the random and acquaintance vaccinations. Thus, the present quarantine measure is expected to be effective in real contact networks.

§ DISCUSSION

This study involved investigating the manner in which a prompt quarantine measure suppresses epidemics in networks. The proposed simple and ideal quarantine measure assumed that an individual is detected with detection probability f immediately after it becomes infected, and the detected one and its neighbors are promptly isolated. The efficiency of the proposed measure in suppressing the SIR model in the RGs and the uncorrelated SFNs was numerically tested. Monte Carlo simulations indicated that the quarantine measure outperformed the random and acquaintance vaccination schemes with respect to reducing the number of R nodes. The generating function formalism for uncorrelated networks was used to obtain the occurrence probability of global outbreaks and the epidemic threshold T_c. The equation determining T_c was expanded to show that the epidemic threshold increases to a positive value, even in fat-tailed SFNs, given a nonzero detection probability. We also showed that the proposed quarantine measure is effective in real contact networks.

The present study assumed an idealized situation in which quarantines can be executed without delay. In practice, there are time lags among one's infection, detection, and quarantine, due to a number of factors (e.g., the time lag to detection by authorities and the time lag to isolation of infected individuals and their neighbors). Realistic epidemiological studies must take such delays in quarantine measures into account. Peak et al. <cit.> investigated, taking delays into account, the effectiveness of quarantine and symptom monitoring in containing epidemics, with disease dynamics parametrized by seven case-study diseases. They showed that the effectiveness of symptom monitoring and quarantine depends critically on the properties of the infectious disease, such as the latent period, infectious period, and transmissibility. Theoretical studies have also been devoted to the effectiveness of different delayed isolations. Pereira and Young <cit.> studied the effectiveness of delayed isolations of infected nodes (not including their neighbors) in controlling susceptible-infected-susceptible epidemics, showing that the disease is (not) effectively controlled if the delay in isolating infected nodes is shorter (longer) than a certain critical value.
Very recently, Strona and Castellano <cit.> considered the SIR model with a quarantine measure, having a delay in the early stage of epidemics, and found the rapid decay in its efficiency; if the implementation is not prompt enough, then the quarantines become highly inefficient. For our case, the effectiveness of quarantines is expected to be weakened when a delay among infection, detection, and quarantine is incorporated. For example, the model can be extended to have a delay time t_ delay for the execution of a quarantine after a node becomes “detected”. In the simplest setting, an infected node i with degree k_i can try to infect further k_ add=min(t_ delay, k_i-k_ D) neighbors after its k_ D-th neighbor becomes detected. Monte Carlo simulations for such cases show that the performance of quarantine strategy actually becomes worse with increasing delay time t_ delay (not shown). The epidemic threshold also decreases as t_ delay increases and reaches the threshold for the random vaccination with the same value of f when t_ delay becomes larger than the largest degree k_ max [After a short consideration, one finds the generating functions F_0(x) and F_1(x) for the case of t_ delay≥ k_ max should be G_0(1-(1-f)T+(1-f)Tx) and G_1(1-(1-f)T+(1-f)Tx), respectively. Then, the epidemic threshold is given from the equation, F_1'(1)=(1-f)T_c ⟨ k(k-1) ⟩/⟨ k ⟩=1, i.e., T_c=(1-f)^-1⟨ k ⟩/⟨ k(k-1) ⟩, which is the epidemic threshold under the random vaccination of f.]. Further investigation of the effect of delayed quarantines is needed, and in order to incorporate delay time properly it should be discussed by using continuous-time infectious disease models. The epidemic model used in the present study corresponds to the discrete-time SIR model. It is naturally expected that the results can be qualitatively applied in the case of a continuous-time SIR model. It will be an interesting future work to investigate the continuous-time SIR model with delayed quarantines, although the results for our prompt quarantine measure highlight the importance of the speed necessary in detecting and quarantining.§ ACKNOWLEDGEMENTS T.H. thanks to Taro Takaguchi for helpful comments. T.H. acknowledges financial support from JSPS (Japan) KAKENHI Grant Numbers JP15K17716, JP16H03939, and JP26310203. T.H. and K.N. acknowledge financial support from JSPS (Japan) KAKENHI Grant Number JP16K05507.31 natexlab#1#1bibnamefont#1#1bibfnamefont#1#1citenamefont#1#1url<#>1urlprefixURL[Pastor-Satorras et al.(2015)Pastor-Satorras, Castellano, Van Mieghem, and Vespignani]pastor2014epidemic authorR. Pastor-Satorras, authorC. Castellano, authorP. Van Mieghem, and authorA. Vespignani, journalReviews of Modern Physics volume87, pages925 (year2015).[Newman(2003)]newman2003structure authorM. E. J. Newman, journalSIAM review volume45, pages167 (year2003).[Barrat et al.(2008)Barrat, Barthélemy, and Vespignani]barrat2008dynamical authorA. Barrat, authorM. Barthélemy, and authorA. Vespignani, titleDynamical processes on complex networks (publisherCambridge University Press, Cambridge, U.K., year2008).[Kermack and McKendrick(1927)]kermack1927contribution authorW. O. Kermack and authorA. G. McKendrick, journalProceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences volume115, pages700 (year1927).[Anderson and May(1992)]anderson1992infectious authorR. M. Anderson and authorR. M. 
May, titleInfectious Diseases of Humans: Dynamics and Control (publisherOxford University Press, Oxford, U.K., year1992).[Pastor-Satorras and Vespignani(2001)]pastor2001epidemic authorR. Pastor-Satorras and authorA. Vespignani, journalPhysical Review Letters volume86, pages3200 (year2001).[Moreno et al.(2002)Moreno, Pastor-Satorras, and Vespignani]moreno2002epidemic authorY. Moreno, authorR. Pastor-Satorras, and authorA. Vespignani, journalThe European Physical Journal B-Condensed Matter and Complex Systems volume26, pages521 (year2002).[Holme(2004)]holme2004efficient authorP. Holme, journalEPL volume68, pages908 (year2004).[Cohen et al.(2003)Cohen, Havlin, and ben-Avraham]cohen2003efficient authorR. Cohen, authorS. Havlin, and authorD. ben-Avraham, journalPhysical Review Letters volume91, pages247901 (year2003).[Gallos et al.(2007)Gallos, Liljeros, Argyrakis, Bunde, and Havlin]gallos2007improving authorL. K. Gallos, authorF. Liljeros, authorP. Argyrakis, authorA. Bunde, and authorS. Havlin, journalPhysical Review E volume75, pages045104 (year2007).[Miller and Hyman(2007)]miller2007effective authorJ. C. Miller and authorJ. M. Hyman, journalPhysica A: Statistical Mechanics and its Applications volume386, pages780 (year2007).[Chen et al.(2008)Chen, Paul, Havlin, Liljeros, and Stanley]chen2008finding authorY. Chen, authorG. Paul, authorS. Havlin, authorF. Liljeros, and authorH. E. Stanley, journalPhysical Review Letters volume101, pages058701 (year2008).[Dybiec et al.(2004)Dybiec, Kleczkowski, and Gilligan]dybiec2004controlling authorB. Dybiec, authorA. Kleczkowski, and authorC. Gilligan, journalPhysical Review E volume70, pages066145 (year2004).[Dybiec et al.(2005)Dybiec, Kleczkowski, and Gilligan]dybiec2005optimising authorB. Dybiec, authorA. Kleczkowski, and authorC. A. Gilligan, journalActa Physica Polonica B volume36, pages1509 (year2005).[Takeuchi and Yamamoto(2006)]takeuchi2006effectiveness authorF. Takeuchi and authorK. Yamamoto, journalJournal of Theoretical Biology volume243, pages39 (year2006).[Shaban et al.(2008)Shaban, Andersson, Svensson, and Britton]shaban2008networks authorN. Shaban, authorM. Andersson, authorÅ. Svensson, and authorT. Britton, journalMathematical Biosciences volume216, pages1 (year2008).[Oleś et al.(2012)Oleś, Gudowska-Nowak, and Kleczkowski]oles2012understanding authorK. Oleś, authorE. Gudowska-Nowak, and authorA. Kleczkowski, journalPloS One volume7, pagese36026 (year2012).[Karp et al.(2014)Karp, Dybiec, and Kleczkowski]karp2014improving authorP. Karp, authorB. Dybiec, and authorA. Kleczkowski, journalInternational Journal of Modern Physics C volume25, pages1350106 (year2014).[Xu et al.(2014)Xu, Zu, Zheng, Zhang, Xu, and Liu]xu2014comparative authorZ. Xu, authorZ. Zu, authorT. Zheng, authorW. Zhang, authorQ. Xu, and authorJ. Liu, journalPloS One volume9, pagese95911 (year2014).[Bagnoli et al.(2007)Bagnoli, Lio, and Sguanci]bagnoli2007risk authorF. Bagnoli, authorP. Lio, and authorL. Sguanci, journalPhysical Review E volume76, pages061904 (year2007).[Lagorio et al.(2011)Lagorio, Dickison, Vazquez, Braunstein, Macri, Migueles, Havlin, and Stanley]lagorio2011quarantine authorC. Lagorio, authorM. Dickison, authorF. Vazquez, authorL. A. Braunstein, authorP. A. Macri, authorM. V. Migueles, authorS. Havlin, and authorH. E. Stanley, journalPhysical Review E volume83, pages026102 (year2011).[Sahneh et al.(2012)Sahneh, Chowdhury, and Scoglio]sahneh2012existence authorF. D. Sahneh, authorF. N. Chowdhury, and authorC. M. 
Scoglio, journalScientific Reports volume2, pages632 (year2012).[Wu et al.(2012)Wu, Fu, Small, and Xu]wu2012impact authorQ. Wu, authorX. Fu, authorM. Small, and authorX.-J. Xu, journalChaos: An Interdisciplinary Journal of Nonlinear Science volume22, pages013101 (year2012).[Ruan et al.(2012)Ruan, Tang, and Liu]ruan2012epidemic authorZ. Ruan, authorM. Tang, and authorZ. Liu, journalPhysical Review E volume86, pages036117 (year2012).[Zhang et al.(2014)Zhang, Xie, Tang, and Lai]zhang2014suppression authorH.-F. Zhang, authorJ.-R. Xie, authorM. Tang, and authorY.-C. Lai, journalChaos: An Interdisciplinary Journal of Nonlinear Science volume24, pages043106 (year2014).[Newman(2002)]newman2002spread authorM. E. J. Newman, journalPhysical Review E volume66, pages016128 (year2002).[Rocha et al.(2011)Rocha, Liljeros, and Holme]rocha2011simulated authorL. E. Rocha, authorF. Liljeros, and authorP. Holme, journalPLoS Computational Biology volume7, pagese1001109 (year2011).[Cho et al.(2011)Cho, Myers, and Leskovec]cho2011friendship authorE. Cho, authorS. A. Myers, and authorJ. Leskovec, in booktitleProceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (organizationACM, year2011), pp. pages1082–1090.[Peak et al.(2017)Peak, Childs, Grad, and Buckee]peak2017comparing authorC. M. Peak, authorL. M. Childs, authorY. H. Grad, and authorC. O. Buckee, journalProceedings of the National Academy of Sciences volume114, pages4023 (year2017).[Pereira and Young(2015)]pereira2015control authorT. Pereira and authorL.-S. Young, journalPhysical Review E volume92, pages022822 (year2015).[Strona and Castellano(2017)]strona2017rapid authorG. Strona and authorC. Castellano, journalarXiv preprint arXiv:1706.06321(year2017).§ DERIVATION OF POWER-LAW BEHAVIORS FOR T_CThe epidemic threshold T_c of uncorrelated SFN with p_k ∝ k^-γ (k ≥ k_ min) is considered when f ≪ 1. For the purposes of convenience, it is assumed that k_ min=1. In this case, the generating function for the excess degree distribution G_1(x) isG_1(x)=∑_k=1^∞kp_k/⟨ k⟩x^k-1=1/ζ(γ-1)∑_k=1^∞x^k-1/k^γ-1,where ζ(s) denotes the Riemann ζ function, ζ(x)=∑_k=1^∞ k^-x.To discuss the f dependence of T_c for f ≪ 1, a few properties of G_1(x) are first listed. A function with an integral representation ϕ_s(x)=∫_0^∞u^s-1/^u-xụ=Γ(s)∑_k=1^∞x^k-1/k^sis introduced such that G_1(x) is expressed asG_1(x)=ϕ_γ-1(x)/ζ(γ-1)Γ(γ-1).The function ϕ_s(x) is defined for |x|<1 and s>0 and is related to the polylogarithm function,_s(x)=∑_k=1^∞ x^k k^-s, as follows: xϕ_s(x)=Γ(s)_s(x). The Taylor expansion of ϕ_s(x) is considered. The nth derivative with respect to x, denoted as ϕ_s^(n)(x), is expressed as follows: ϕ_s^(n)(x)=^̣n ϕ_s(x)/x̣^n=n!∫_0^∞u^s-1/(^u-x)^n+1ụ.It should be noted that ϕ_s^(n)(1)=lim_x→ 1-ϕ_s^(n)(x) exists as long as s>n+1. The relation 1/^u-x-δ=1/^u-x+δ/^u-x1/^u-x-δ =∑_m=0^n-1δ^m/(^u-x)^m+1+δ^n/(^u-x)^n1/^u-x-δis used to obtain the Taylor expansion formula for ϕ_s(x) asϕ_s(x+δ)= ∑_m=0^n-1δ^m/m!ϕ_s^(m)(x)+R_s^(n)(x,δ),R_s^(n)(x,δ)= ∫_0^∞u^s-1δ^n/(^u-x)^n(^u-x-δ)ụ.By setting δ=-ϵ<0 and taking the limit x→ 1-, the above expansion givesϕ_s(1-ϵ)= ∑_m=0^n-1(-ϵ)^m/m!ϕ_s^(m)(1)+(-1)^nr_s^(n)(ϵ),r_s^(n)(ϵ)= ∫_0^∞u^s-1ϵ^n/(^u-1)^n(^u-1+ϵ)ụ,which is valid as long as s>n. Now we change the integral variable in the r.h.s. 
as u=ϵ v:r_s^(n)(ϵ)=∫_0^∞v^s-1ϵ^s+n/(^ϵ v-1)^n(^ϵ v-1+ϵ)ṿ.When the integer n satisfies n<s<n+1, it is possible to evaluate the ϵ dependence by taking the small ϵ limit as follows:lim_ϵ→ 0+r_s^(n)(ϵ)/ϵ^s-1=∫_0^∞v^s-n-1/v+1ṿ.The integral of the r.h.s. exists, and it can be concluded that the following expression is applicable:r_s^(n)(ϵ)∼ϵ^s-1,ϵ→0+.In the marginal case s=n+1, the expression (<ref>) yieldsr_n+1^(n)(ϵ)∼ϵ^n|logϵ|,ϵ→0+.The above expression provides the evaluation of G_1(1-fT_c) for small f ≪1. This is expressed asG_1(1-ϵ)≃ 1-a'_γϵ^γ-2, 2<γ<3,1-a_3|logϵ|,γ=3,1-a_γϵ + b'_γϵ^γ-23<γ<4,1-a_4ϵ + b_4ϵ|logϵ|,γ=4,1-a_γϵ + b_γϵ^2 4<γ,and this leads us to the solution of Eq. (<ref>) as Eq. (<ref>).
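The leading small-ε behavior just derived can be checked directly. The sketch below (an illustration with assumed values: k_min=1, γ=2.5, and a finite cutoff replacing the infinite sum) evaluates G_1(1-ε) numerically and confirms that (1-G_1(1-ε))/ε^γ-2 settles near a constant as ε decreases, consistent with the 2<γ<3 case of Eq. (<ref>).

```python
# A small numerical check (assumed gamma and cutoff, not the paper's code)
# of the expansion 1 - G_1(1-eps) ~ a'_gamma * eps^(gamma-2) for 2<gamma<3,
# with p_k proportional to k^(-gamma), k >= 1.
import numpy as np

gamma, kmax = 2.5, 10**6
k = np.arange(1, kmax + 1, dtype=float)
w = k**(1.0 - gamma)            # weights k*p_k/<k>, up to normalization
w /= w.sum()                    # excess-degree distribution (truncated)

def G1(x):
    return np.sum(w * x**(k - 1.0))

for eps in [1e-1, 1e-2, 1e-3]:
    print(f"eps = {eps:.0e}:  (1 - G_1(1-eps))/eps^(gamma-2) = "
          f"{(1.0 - G1(1.0 - eps)) / eps**(gamma - 2.0):.4f}")
# the printed ratios should approach a constant as eps -> 0
```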
{ "authors": [ "Takehisa Hasegawa", "Koji Nemoto" ], "categories": [ "physics.soc-ph" ], "primary_category": "physics.soc-ph", "published": "20170227093804", "title": "Efficiency of prompt quarantine measures on a susceptible-infected-removed model in networks" }
Local Synchronization of Sampled-Data Systems on Lie Groups

Philip James McCarthy Christopher Nielsen
==========================================================

We present a smooth distributed nonlinear control law for local synchronization of identical driftless kinematic agents on a Cartesian product of matrix Lie groups with a connected communication graph. If the agents are initialized sufficiently close to one another, then synchronization is achieved exponentially fast. We first analyze the special case of commutative Lie groups and show that, in exponential coordinates, the closed-loop dynamics are linear. We characterize all equilibria of the network and, in the case of an unweighted, complete graph, characterize the settling time and conditions for deadbeat performance. Using the Baker-Campbell-Hausdorff theorem, we show that, in a neighbourhood of the identity element, all results generalize to arbitrary matrix Lie groups.

§ INTRODUCTION

The sampled-data setup is ubiquitous in applied control. In the LTI case, the plant may be exactly discretized, and a discrete-time controller can be designed such that closed-loop stability is achieved for non-pathological sampling periods. Such stability guarantees cannot generally be enforced for nonlinear plants, as nonlinear ODEs generally do not have closed-form solutions. The standard approach to nonlinear sampled-data control design, emulation, is to approximate the discretized plant dynamics. If the sampling period is sufficiently small, then the actual closed-loop system is stable. This technique has two key shortcomings <cit.>: 1) it may not be possible for a given approximate discretization method, e.g., Euler's method; 2) it relies on fast sampling, which may not be possible due to hardware limitations.

The limitations of emulation do not necessarily pose a problem for the class of systems on matrix Lie groups, which are nonlinear, yet have dynamics that yield exact closed-form solutions <cit.>, thereby enabling direct design. To our knowledge, sampled-data control of systems on Lie groups has not yet been explored in the literature. However, the closely related class of bilinear systems has been studied in the discrete-time <cit.> and sampled-data settings <cit.>.

Many engineering systems are modelled on Lie groups. The motion of robots in a plane is modelled on SE(2) <cit.>, and their motion in space, such as that of UAVs, is modelled on SE(3) <cit.>. Quantum systems evolve on the unitary groups 𝖴(n) <cit.> and 𝖲𝖴(n) <cit.>. Some circuits can be modelled using Lie groups <cit.>, while oscillator networks <cit.> evolve on SO(2).

Synchronization of networks on SE(3) was achieved using passivity in <cit.>. Synchronization under sampling was studied for a network of Kuramoto-like oscillators in <cit.>, harmonic oscillators with a time-varying period in <cit.>, and path-following nonlinear agents in <cit.>, but the analyses in these works were not conducted from a Lie-theoretic perspective. The Kuramoto network model was extended from SO(2) to SO(n) in <cit.>. A framework for coordinated motion on Lie groups was developed in <cit.>, where the synchronization problem that we consider is a special case of what the authors call bi-invariant coordination. In <cit.>, linear consensus algorithms were applied to systems on Lie groups in continuous time.
The most salient difference between the current paper and <cit.> is the consideration of the sampled-data setup. Further, in contrast to <cit.>, we take a global perspective, and explore the geometry of the problem to much greater depth.Lie groups are not vector spaces, but their structure facilitates analysis and control design in global coordinates. The Lie structure has been leveraged, for example, for motion tracking on 3 <cit.>, and the control of UAV <cit.> and spacecraft <cit.> orientation on 3. We too take a global perspective in our control design and, when possible, analysis. We present a control law that achieves synchronization for a network of identical agents on any matrix Lie group with driftless dynamics with a connected communication graph. The current paper generalizes and extends the results of <cit.>, which considered only unweighted graphs with agents on one-parameter Lie subgroups. The controller requires that each agent have access to its relative state with respect to each of its neighbours. For example, on 3, relative position and orientation can be attained using machine vision <cit.>. We examine the special case where the error dynamics evolve on a Cartesian product of one-parameter subgroups – a “generalized cylinder” – and the general case. We prove that in both cases, that if the agents are initialized sufficiently close to one another, that synchronization is achieved exponentially fast. For a generalized cylinder, we characterize the performance in the case of an unweighted, complete graph.§.§ Notation and Terminology If N ∈, then _N 1, …, N. Given a matrix M ∈^n × n, M^⊤ is its (non-Hermitian) transpose, and λ_max(M) and λ_min(M) are its its eigenvalues of greatest and least magnitude, respectively. If x ∈^n, then x is its Euclidean norm; if M ∈^n × n, then M is its induced Euclidean norm. Let 1_n ∈^n and 0_n ∈^n denote the column vector of ones and zeros, respectively. Let 0_m × n∈^m × n denote the matrix of zeros. Let ^- denote the set of nonpositive real numbers. Given an equivalence relation ∼ on a set R, and an element x ∈ R, let [x] ∈ R/∼ be the coset containing x.Weighted, directed graphs are used to model communication constraints between agents. A graph 𝒢 is a triple (𝒱, ℰ,w) consisting of a finite set of vertices 𝒱 = _N, a set of edges ℰ⊆𝒱×𝒱, and a weight function w : ℰ→ [0, 1] ⊂. The weight w_ij w((i,j)) is nonzero only if (i, j) ∈ℰ. If agent i has access to its relative state with respect to agent j, then (i,j) ∈ℰ. Define vertex i's neighbour set as 𝒩_i {j ∈_N: (i,j) ∈ℰ}. We assume that 𝒢 has no self-loops. If 𝒢 is unweighted, then for all i ≠ j ∈_N, w_ij∈{0,1}. Associated with 𝒢 is the Laplacian L ∈^N × N, defined elementwise as L_ij = {[-w_ij,i ≠ j,; ∑_j ∈𝒩_iw_ij,i = j. ]. The ith row of the Laplacian L is denoted ℓ_i. § SAMPLED-DATA SYNCHRONIZATION PROBLEMWe consider a network of N controlled agents, each modelled by the differential equation Ẋ_i = X_i(∑_j=1^mB_i,ju_i,j),i ∈_N. Here X_i ∈ where ⊂n, is an m-dimensional connected matrix Lie group over the complex fieldwhich includes, as a special case, real matrix Lie groups. The matrices B_i,j are elements of the Lie algebra , which is a vector space over a fieldequal to eitheror , associated with , and u_i(u_i,1, …, u_i,m) ∈𝔽^m is the control input. Note that the Lie algebra of a complex Lie groupmay in fact be a real vector space. 
For example, the Lie algebra 𝔰𝔲(2) of the complex Lie group 2 is a vector space over the field of reals despite its vectors being matrices with possibly complex entries. Equation (<ref>) is a kinematic model of a system evolving on a matrix Lie group . Each agent is assumed to be fully actuated in the sense that (∀ i ∈_N)_𝔽B_i,1, …, B_i,m = . Under this assumption, without loss of generality, we take the system (<ref>) to be driftless since the inputs u_i, i ∈_N, can be chosen to cancel any drift vector field. We are interested in the sampled-data control of this multi-agent system in which each agent's control law is implemented on an embedded computer, which we explicitly model using the setup in Figure <ref>. The blocks H and S in Figure <ref> are, respectively, the ideal hold and sample operators. Sample and hold are, respectively, idealized models of A/D and D/A conversion. The following assumption is made throughout this paper. All sample and hold blocks operate at the same period T > 0 and the blocks are synchronized for the multi-agent system (<ref>). ◂ Under Assumption <ref>, letting X_i[k]X_i(kT) and u_i[k]u_i(kT), the discretized dynamics of each agent are given by X_i^+ = X_iexp(T∑_j=1^mB_i,ju_i,j),i ∈_N which is an exact discretization of (<ref>). For each i ∈_N, define Ω_ i ∑_j=1^mB_i,ju_i,j∈. Then the discrete-time dynamics can be compactly expressed as X_i^+ = X_iexp(TΩ_i),i ∈_N. §.§ The Synchronization ProblemGiven a network of N agents with kinematic dynamics (<ref>), we define the error quantities E_ij X_i^-1X_j, i,j ∈_N. Observe that E_ij = I if, and only if, X_i = X_j. The error matrix E_ij is called left-invariant <cit.>, since for all X ∈, (XX_i)^-1(XX_j) = X_i^-1X_j. The class of systems considered hasfor its state space, which is generally not a vector space, so we do not use X_i - X_j as a measure of error.Local Synchronization on Matrix Lie Groups Given a network of N agents with continuous-time dynamics (<ref>), sampling period T > 0 and an unweighted, connected communication graph 𝒢 = (𝒱, ℰ), find, if possible, distributed control laws Ω_i, i ∈_N, such that for all initial errors in a neighbourhood of the identity in ^N, for all i,j ∈_N, E_ij→ I as t →∞. By a distributed control law we mean that for each agent i, the control signal Ω_i can depend on E_ij only if (i, j) ∈ℰ. In this paper we propose the distributed feedback control law Ω_i 1/Tlog((∏_j ∈𝒩_iE_ij^_ij)^1/K) where K ∈ is a gain and the matrix logarithm need not be the principal logarithm. The control law (<ref>) does not require agent i to know agent j's state X_j, nor its own state X_i, but instead requires knowledge of the relative state E_ij. The expression (<ref>) is well-defined so long as the product ∏_j ∈𝒩_iE_ij has no eigenvalues in ^-, as discussed in Section <ref>. This control law is expressed in global coordinates but we only prove local exponential stability of the synchronized state. When the control law (<ref>) is well-defined, the closed-loop discrete-time dynamics are X_i^+ = X_i(∏_j ∈𝒩_iE_ij^_ij)^1/K,i ∈_N and the synchronization error dynamics areE_ij^+= (X_i^+)^-1X_j^+ = (∏_p ∈𝒩_iE_ip^_ip)^-1/KX_i^-1X_j(∏_q ∈𝒩_jE_jq^_jq)^1/K= (∏_p ∈𝒩_iE_ip^_ip)^-1/KE_ij(∏_q ∈𝒩_jE_jq^_jq)^1/K. The order of multiplication in (<ref>) need not be common to all agents or even constant. 
▴ The control law (<ref>) is motivated by exponential coordinates for Lie groups, classical consensus algorithms in ^n, and the notion of Riemannian mean of rotations on 3, which on a one-parameter subgroup thereof can be explicitly computed as ∏_i=1^NR_i^1/N <cit.>. A key advantage of direct design over emulation, is that stability can be guaranteed at the sampling instants. As mentioned in the Introduction, on 3, the relative error E_ij can be computed using machine vision, where the speed of sampling is limited by the frame rate of the camera, for example, 25 Hz <cit.>. This limits the feasibility of emulation-based design. However, direct design does not guarantee good performance between sampling instants. But in the specific case of the plant and problem discussed in this paper, achieving synchronization at the sampling instants implies synchronization between the sampling instants. If E_ij[k] = X_i[k]^-1X_j[k] asymptotically approaches I as k →∞, then E_ij(t) = X_i(t)^-1 X_j(t) asymptotically approaches I as t →∞, where X_i(t) and X_j(t) evolve according to (<ref>).If E_ij[k] → I, then the proposed control law (<ref>) satisfies Ω_i[k] →0_n × n. Let 0 < δ < T. Then lim_k →∞E_ij(kT + δ) = lim_k →∞exp(δΩ_i[k])^-1E_ij[k]exp(δΩ_j[k])= lim_k →∞exp(δΩ_i[k])^-1lim_k →∞E_ij[k]lim_k →∞exp(δΩ_j[k])= I^3 = I. Since δ is arbitrary, this implies that E_ij(t) → I. Proposition <ref> means that asymptotically stabilizing the set where E_ij = I, for all i,j ∈_N, at the sample instances is sufficient for solving the synchronization problem. Thus, we can conduct all analysis in the discrete-time setting and do not rely on T being sufficiently small.Our main result is the following theorem, which we prove in Section <ref>. For any Lie groupwith connected communication graph 𝒢, if the gain K of each agent's controller (<ref>) satisfies (<ref>), then the equilibrium {E_ij = I : i,j ∈_N} is locally uniformly exponentially stable.§ PRELIMINARIES §.§ Functions of matrices For every nonsingular matrix X ∈^n × n there are (infinitely many) A ∈^n × n such that exp(A) = X, see <cit.>. Every such matrix A is a non-primary logarithm of X, which we denote by log(X). If, in addition to being nonsingular, the matrix X has no eigenvalues in ^-, then it has a (unique) principal logarithm. Let X ∈^n × n have no eigenvalues in ^-. There is a unique logarithm A ∈^n × n of X, all of whose eigenvalues lie in the strip {z : -π < Im(z) < π}. If X ∈^n × n, then A ∈^n × n. The unique matrix A from Theorem <ref> is called the principal logarithm of X and is denoted (X). Unlike complex numbers, it is not possible to express log(X) as a function of (X) for arbitrary non-singular matrices. If X - I < 1, then(X) = ∑_k=1^∞(-1)^k-1/k (X - I)^k. Any matrix logarithm is a right inverse of the matrix exponential, but not necessarily a left inverse. On a matrix Lie group, the principal logarithmis a left inverse, but only in a neighbourhood of the identity. Choose r > 0 such that (<ref>) converges on X ∈ : X = exp(A), A < r, e.g., r = (2) is a valid choice with any Lie group. Larger values of r may be possible for specific Lie groups. The set U X ∈: X = exp(A), A ∈, A < r is an open neighbourhood of I inin the group topology in which : U → provides an inverse.Borrowing from the definition of complex powers of scalars <cit.> and the form of the square root of a matrix on a Lie group <cit.>, we define the Kth root of a matrix in the following way. Let X ∈^n × n have no eigenvalues in ^-. 
Given K ∈, the principal Kth root of X is X^1/Kexp(1/K(X)).If X ∈, then X^1/K∈, due to the Lie correspondence : →. If X^1/K is well-defined, then for K ∈, (X^1/K)^K = exp(∑_i = 1^K1/K(X)) = exp((X)) = A. Thus, in this case, X = X^1/KX^1/K⋯ X^1/K (K times), which is the intuitive notion of a Kth root. The somewhat indirect definition (<ref>) allows for Kth roots for K ∈. ▴ Throughout this paper, we use an important algebraic property of the logarithm of a matrix power. If X ∈^n × n has no eigenvalues in ^-, then for α∈ [-1,1], we have (X^α) = α(X). §.§ Exponential Coordinates and One-Parameter SubgroupsGiven a Lie group , a one-parameter subgroup is a continuous morphism of groups ϕ : →. Although this terminology is standard, it is technically the image of the map ϕ that is a subgroup of . The subgroup ϕ() ⊂ is a one-dimensional manifold and there exists a unique H ∈ such that ϕ(θ) = exp(θ H) for all θ∈ <cit.>.To generalize the concept of one-parameter subgroups to higher dimensional manifolds, we consider generalized cylinders. A generalized cylinder is an m-dimensional manifold that is diffeomorphic to 𝕋^k ×^m - k. Such a diffeomorphism exists if and only if there exist m commutative and everywhere-linearly-independent vector fields on the manifold <cit.>. If the manifold is a Lie group , then this simplifies to its Lie algebrahaving a commutative basis.Letbe such a manifold and fix a commutative basis ℋ{H_1,…,H_m} for its Lie algebra . Consider the one-parameter groups ϕ_i : → associated with each H_i. The image of ϕ(t_1,…,t_m) ϕ_1(t_1)ϕ_2(t_2)⋯ϕ_m(t_m) is . Without loss of generality, let ϕ_i, i ∈_k, 0 ≤ k ≤ m have nonzero kernel, and let ϕ_i, i ∈{k + 1,…,m} have zero kernel.Fixing such a basis ℋ, themap induces local coordinates on ∩ U. Given X ∈, by commutativity of H_1,…,H_m, X= exp(t_1 H_1)⋯exp(t_m H_m) = exp(t_1 H_1 + ⋯ + t_m H_m). If X ∈∩ U, then (X) = t_1 H_1 + ⋯ + t_m H_m. Then, by linear independence of H_1,…,H_m, t_1,…,t_m can be uniquely determined, yielding local coordinates (t_1,…,t_m) ∈^m. Thus, a Lie groupcan be locally identified with an open subset of the vector space ^m containing the origin. Note that, by commutativity of , these local coordinates coincide with the familiar exponential coordinates of both the first and second kind. §.§ Properties of the composed flowThe map ϕ : ^m →, defined in the previous section, is critical to our analysis throughout this paper. In this section, we establish important properties of ϕ whenis a generalized cylinder, we then show that these properties hold approximately for any Lie group in a neighbourhood of the identity.By definition, ϕ is surjective onto its image, but it is not necessarily injective. Let p : ^m →^m/(ϕ) be the projection of ^m onto the quotient space ^m/(ϕ). There exists a unique isomorphism of groups ϕ' such that the following diagram commutes <cit.>. ^m [drr]_ϕ[rr]^p ^m/(ϕ) [d]^ϕ' The bijection ϕ' yields alternative global coordinates on the quotient group ^m/(ϕ); it will be used in our characterization of equilibria. Whenis a generalized cylinder, the map ϕ has several important properties. Ifis a generalized cylinder, then ϕ : ^m → is a morphism of groups.It is clear that ϕ(0_m) = exp(0)⋯exp(0) = I.Let _i = (_i^(1),…,_i^(m)) ∈^m and _j = (_j^(1),…,_j^(m)) ∈^m, where ϕ(_i) = X_i and ϕ(_j) = X_j. By commutativity of H_1,…,H_m, ϕ(_i + _j)= exp((t_i^(1) + t_j^(1))H_1)⋯exp((t_i^(m) + t_j^(m))H_m) = exp(t_i^(1)H_1)⋯exp(t_i^(m)H_m)exp(t_j^(1)H_1)⋯exp(t_j^(m)H_m) = ϕ(_i)ϕ(_j). Letbe a generalized cylinder. 
If K > 0 and ϕ() ∈ U, then ϕ(/K) = ϕ()^1/K.By the commutativity of H_1,…,H_m, ϕ(/K)= exp(t_1/KH_1)⋯exp(t_m/KH_m) = exp(1/K(t_1H_1 + ⋯ + t_mH_m)) = exp(1/K(ϕ())) = ϕ()^1/K. § EQUILIBRIA ON GENERALIZED CYLINDERS Since we consider driftless kinematic models, the system is at equilibrium if, and only if, every agent's input is zero, i.e., for all i ∈_N, Ω_i = 0_n × n. We show that all equilibria are isolated and exhibit the same stability properties.Hereinafter, we use the notation _ijϕ^-1(E_ij) and [ _11^⊤ _12^⊤ ⋯ _1N^⊤ ]^⊤∈^Nm. Define _𝕋 and _ as the projections under p ofonto 𝕋^Nk and ^N(m - k), respectively. If the controller (<ref>) is well-defined, then the equilibria of (<ref>) on a generalized cylinder are characterized by [1/K(L ⊗ I_k)_𝕋] = [0_Nk], (L ⊗ I_m-k)_ = 0_N(m-k), [_11] = [0_m].The sampled dynamics of each agent are given by (<ref>). Therefore, the system is at equilibrium if, and only if, for all i ∈_N, exp(TΩ_i) = I. Ifis a generalized cylinder, then, by commutativity and Definition <ref>, this condition becomes I = exp(TΩ_i) = (∏_j ∈𝒩_iE_ij^_ij)^1/K = ∏_j ∈𝒩_iE_1i^-_ij/KE_1j^_ij/K. In the global coordinates admitted by ϕ' we have ϕ'^-1(I)= ∑_j ∈𝒩_i(ϕ'^-1(E_1i^-_ij/K) + ϕ'^-1(E_1j^_ij/K)) [0_m]= ∑_j ∈𝒩_i([_ij/K_1j] - [_ij/K_1i]) = -[1/K(ℓ_i ⊗ I_m)]. We “stack" the inputs for all agents i, yielding the equation [1/K(L ⊗ I_m)] = [0_Nm], which can be rewritten as [1/K[ L ⊗ I_k 0_Nk × N(m - k); 0_N(m - k) × Nk L ⊗ I_m - k ][ _𝕋;_ ]] = [[ 0_Nk; 0_N(m - k) ]]. By assumption, (ϕ_i) = {0} for i ∈{k+1,…,m}, so the condition [1/K(L ⊗ I_m - k)] = [0_N(m - k)] simplifies to equality on ^N(m - k), rather than congruence on a quotient space. Lastly, since _11 is the error of agent 1 with itself, [_11] = [0_m].On a generalized cylinder, all equilibria are isolated.By assumption, (ϕ_i) = {0}, for all i ∈{k+1,…,m}. Thus /(ϕ_i) ≅. Thus, for j ∈_N, [_1j^(i)] = [0] simplifies to _1j^(i) = 0.By assumption, (ϕ_i) ≠{0}, for all i ∈_k. The map ϕ_i can be viewed as a flow, thus, by <cit.>, there exists a d_i > 0 such that for every r_i ∈ [0], we have r_i = q_id_i for some q_i ∈. Thus, for all j ∈_N, if _1j^(i), _1j^(i)∈ [0], _1j^(i)≠_1j^(i), then |_1j^(i) - _1j^(i)| ≥ d_i.On a generalized cylinder, every equilibrium has the same stability properties as the identity.Let {Ξ_i1,…,Ξ_iN}∈^N be an equilibrium. Define E̅_ijΞ_ij^-1E_ij. Then E̅_ij = I if and only if E̅_ij = Ξ_ij. The error dynamics (<ref>) can be expressed in terms of E̅_ij: Ξ_ijE̅_ij^+= (∏_p ∈𝒩_i(Ξ_1iE̅_1i)^-_ip(Ξ_1pE̅_1p)^_ip)^-1/KΞ_ijE̅_ij×(∏_q ∈𝒩_j(Ξ_1jE̅_1j)^-_jq(Ξ_1qE̅_1q)^_jq)^1/K E̅_ij^+= (∏_p ∈𝒩_iE̅_1i^-_ipE̅_1p^_ip)^-1/KE̅_ij(∏_q ∈𝒩_jE̅_1j^-_jqE̅_1q^_jq)^1/K(∏_p ∈𝒩_iΞ_1i^-_ipΞ_1p^_ip)^-1/K(∏_q ∈𝒩_jΞ_1j^-_jqΞ_1q^_jq)^1/K= (∏_p ∈𝒩_iE̅_ip^_ip)^-1/KE̅_ij(∏_q ∈𝒩_jE̅_jq^_jq)^1/K(∏_p ∈𝒩_iΞ_ip^_ip)^-1/K(∏_q ∈𝒩_jΞ_jq^_jq)^1/K. By (<ref>), the Ξ_ij product terms equal identity, thus the E̅_ij dynamics have the same form as the dynamics (<ref>) of E_ij, and therefore have the same qualitative behaviour. Proposition <ref> says that the dynamics near every equilibrium “look the same”. Thus, by analyzing only the equilibrium at identity, we characterize the behaviour near all equilibria. § SYNCHRONIZATION ON GENERALIZED CYLINDERSIn this section, we consider the case whereis a generalized cylinder. This means that X_i ∈, i ∈_N, which implies E_ij∈, for all i,j ∈_N. This is done only to simplify discussion. The results of this section hold under the weaker assumption that the errors E_ij lie on a generalized cylinder. 
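Before analyzing the closed-loop dynamics, the principal Kth root of Definition <ref>, on which the control law (<ref>) is built, can be illustrated numerically. The sketch below (our illustration with an assumed rotation, not the authors' code) verifies that (X^1/K)^K = X for a rotation matrix whose angle is small enough that the principal logarithm exists.

```python
# A short numerical illustration (a sketch, not the authors' code) of the
# principal K-th root X^(1/K) = exp(log(X)/K), here on SO(3); the rotation
# is assumed close enough to I that no eigenvalue lies on the closed
# negative real axis, so the principal logarithm is well-defined.
import numpy as np
from scipy.linalg import expm, logm

K = 5
A = np.array([[0.0, -0.3,  0.1],
              [0.3,  0.0, -0.2],
              [-0.1, 0.2,  0.0]])         # skew-symmetric, so expm(A) is in SO(3)
X = expm(A)

Xroot = expm(logm(X) / K)                 # principal K-th root
err = np.linalg.norm(np.linalg.matrix_power(Xroot, K) - X)
print("|| (X^(1/K))^K - X || =", err)     # near machine precision
```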
Any generalized cylinder , on which (<ref>) is well-defined for all forward time, is positively invariant for (<ref>).Let k ∈ be arbitrary and suppose that for all i, j ∈_N, E_ij[k] ∈. Let q ∈_N be arbitrary. By elementary group theory ∏_j ∈𝒩_q E_qj[k] ∈ and therefore, by hypothesis, its Kth root is well-defined. Thus, by definition of log, exp(TΩ_q[k])∈. Since E_ij^+ = exp(-TΩ_i)E_ijexp(TΩ_j),we have E_ij[k+1] ∈. Induction on the time index proves positive invariance of . Using the exponential coordinates from Section <ref> we identify each relative error E_ij∈∩ U with its exponential coordinates _ij∈^m. We henceforth impose that the synchronization errors and their products over neighbour sets be close to the identity. For all i,j ∈_N, we have E_ij∈ U and ∏_j ∈𝒩_iE_ij^w_ij∈ U. ◂ Letbe a generalized cylinder and suppose E_ij[0] ∈, i,j ∈_N.By Proposition <ref>, for all k ≥ 0, E_ij[k] ∈.It follows from its definition that ϕ is a local diffeomorphism in a neighbourhood of the identity element. We first apply the identity that for all i,j ∈_N, E_ij = E_1i^-1E_1j to (<ref>): E_ij^+ = (∏_p ∈𝒩_iE_1i^-w_ipE_1p^w_ip)^-1/KE_ij(∏_q ∈𝒩_jE_1j^-w_jqE_1q^w_jq)^1/K. Applying Proposition <ref> and Lemma <ref> to (<ref>), we have _ij^+= _ij - 1/K∑_p ∈𝒩_i_ip(_1p - _1i) + 1/K∑_q ∈𝒩_j_jq(_1q - _1j) = _ij - 1/K(∑_p ∈𝒩_i_ip_1p - (∑_p ∈𝒩_i_ip)_1i) + 1/K(∑_q ∈𝒩_j_jq_1q - (∑_q ∈𝒩_j_jq)_1j) = _ij + 1/K(ℓ_i ⊗ I_m) - 1/K(ℓ_j ⊗ I_m)= _ij + 1/K((ℓ_i - ℓ_j) ⊗ I_m). Setting i = 1 and “stacking" the last line for all j, we obtain ^+ = ((I_N + 1/K(1_Nℓ_1 - L)) ⊗ I_m) . Thus, the local error dynamics are linear. It is interesting to note that the form of (<ref>) implies that the dynamics on each one-parameter subgroup are decoupled. The eigenvalues of the state matrix in (<ref>) are the m-times-repeated eigenvalues of I + (1_Nℓ_1 - L)/K <cit.>.The linear dynamics (<ref>) are (exponentially) stable if and only if the matrix I + (1_Nℓ_1 - L)/K is Schur. We must therefore establish conditions on the gain K such that all eigenvalues of I + (1_Nℓ_1 - L)/K are in the open unit disc.The Laplacian L of the graph 𝒢 is positive semidefinite, with a zero eigenvalue of algebraic multiplicity equal to the number of connected components in 𝒢 <cit.>; the eigenvector associated with the 0 eigenvalue is 1_N.The spectrum of 1_Nℓ_1 - L equals σ(-L).Let J be the Jordan form of L and let V ∈^N × N be the nonsingular matrix such that J = V^-1LV, where the first column V_1 is in the span of 1_N. We have V^-1(1_Nℓ_1 - L)V = V^-11_Nℓ_1 V - J. Since V_1 is in the span of 1_N and V^-1V = I, we have (V^-11_N)_i = 0 for all i ≠ 1. Also because V_1 is in the span of 1_N, we have (ℓ_1V)_1 = 0. Therefore, V^-11_Nℓ_1 V is strictly upper triangular. Therefore, the eigenvalues of (<ref>) are its diagonal elements, which are the diagonal elements of -J, which are the negatives of the eigenvalues of L.The spectrum of I + (1_Nℓ_1 - L)/K is the image of 1 - σ(L)/K.The result follows from Lemma <ref> and applying the Spectral Mapping Theorem <cit.> using the function f : →, f(x) = 1 - x/K. Since the graph is assumed to be connected, L has a simple eigenvalue at 0. By Lemma <ref>, this eigenvalue gets mapped to 1 in the spectrum of I + (1_Nℓ_1 - L)/K.Let λ be an eigenvalue of L and define the function f(x) = 1 - x/K as in the proof of Lemma <ref>. Applying this function to λ we have f(λ) = 1 - |λ|/K^j∠λ = (1 - |λ|/Kcos(∠λ)) - j|λ|/Ksin(∠λ) For stability, we require f(λ) to be in the open unit disc. 
The squared magnitude of f(λ) is |f(λ)|^2= (1 - |λ|/Kcos(∠λ))^2 + (|λ|/Ksin(∠λ))^2 = (|λ|/K)^2 - 2|λ|/Kcos(∠λ) + 1 Then |f(λ)|^2 < 1 if, and only if (|λ|/K)^2 - 2|λ|/Kcos(∠λ) < 0. Since we have already addressed the simple eigenvalue at 0, we assume that λ≠ 0. Therefore, dividing by |λ| we seek conditions on K such that, for all λ∈σ(L)\0, 2cos(∠λ) > |λ|/K. This is equivalent to the condition (∀λ∈σ(L)\0)K > |λ|/2cos(∠λ) =|λ|^2/2Re(λ). If (<ref>) holds, then all eigenvalues of I + (1_Nℓ_1 - L)/K, except the single eigenvalue at 1, are in the open unit disc.The next result provides a lower bound on the controller gain K, as a function of the number of agents N, using the properties of the eigenvalues of the Laplacian of a directed graph <cit.>. If K > K_min(N), where K_min(N) {[N/2 N ≤ 9,; 1/8^2(π/2N)(π/N) 10 ≤ N ≤ 18,;N - 1N ≥ 19, ]. then I + (1_Nℓ_1 - L)/K has a single eigenvalue at 1 and all others in the open unit disc.See Appendix. The results of <cit.> allow us to find a tighter bound on K than the Gershgorin Disc Theorem, which is used, for example, in <cit.>. If 𝒢 is symmetric, then σ(L) ⊂ [0,N]. Thus (<ref>) in Lemma <ref> simplifies to K_min(N) = N/2. ▴ By Lemma <ref>, there is no K for which I + (1_Nℓ_1 - L)/K is Schur. However, this does not preclude stability of (<ref>), because the eigenvalue of 1 corresponds to the dynamics of _11, the error of agent 1 with itself, which is identically zero. Letbe a generalized cylinder. If the gain K of each agent's controller (<ref>) satisfies (<ref>), then the equilibrium = 0_Nm of (<ref>) is locally exponentially stableSince E_11(t) = X_1^-1(t)X_1(t) ≡ I, it follows immediately that _11(t) ≡0_m. Therefore the (N - 1)m dimensional subspace 𝒱∈^Nm: _11 = 0_m is invariant under the dynamics (<ref>). As a result, we have σ((I_N + (1_Nℓ_1 - L)/K) ⊗ I_m) = σ((I_N + (1_Nℓ_1 - L)/K) ⊗ I_m | 𝒱) ⊔{1,…,1_m times}, where (I_N + (1_Nℓ_1 - L)/K) ⊗ I_m | 𝒱 is the restriction to the subspace 𝒱. If the gain K of each agent's controller (<ref>) satisfies (<ref>), then by Lemma <ref> (I_N + (1_Nℓ_1 - L)/K) ⊗ I_m | 𝒱 is Schur.By Proposition <ref>, analogous results hold for all equilibria. We emphasize that Theorem <ref> does not rely on Jacobian linearization of the nonlinear dynamics E_ij^+. The system in exponential coordinates evolves according to linear dynamics. § PERFORMANCE WITH AN UNWEIGHTED COMPLETE GRAPH ON A GENERALIZED CYLINDERWe define the ε settling time of error E_ij to be the smallest k∈ such that, for all k ≥k, E_ij[k] = E_ij[0]^α, where |α| ≤ε, 0 < ε < 1. If 𝒢 is complete, then the ε settling time, where ε∈ (0,1), is T_s = ⌈ε/(|K - N|/K)⌉.E_ij^+= (E_ij∏_p ∈ℕ_N ∖{i,j}E_ip)^-1/KE_ij(E_ij^-1∏_p ∈ℕ_N ∖{i,j}E_jp)^1/K = E_ij^K - 2/K(∏_p ∈ℕ_N ∖{i,j}E_ij^-1)^1/K= E_ij^K - 2/KE_ij^-N - 2/K= E_ij^K - N/K Thus E_ij[k] = E_ij[0]^(K - N/K)^k. Therefore, the ε settling time is computed thus |K - N/K|^k = εk = ε/(|K - N|/K) Since k is a time-step, we round up to the nearest integer. The derivative of the settling time with respect to K is ∂ T_s/∂ K = (ε)(|K - N|^2 + K(N - K))/K|K - N|^2((|K - N|/K))^2 = (ε)N(N - K)/K|K - N|^2((|K - N|/K))^2. If K > N, then (<ref>) is positive, so increasing K, i.e., reducing the gain 1/K, delays synchronization, which agrees with intuition. But, interestingly, if K < N, then (<ref>) is negative, so increasing K hastens synchronization. Although (<ref>) is undefined at K = N, these observations suggest that K = N is the minimizer of T_s. 
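These observations are easy to reproduce numerically. The sketch below (with assumed values of N and ε) compares the settling-time expression of Proposition <ref>, written out as T_s = ⌈ln ε / ln(|K-N|/K)⌉, against direct iteration of the complete-graph error recursion, which in exponential coordinates reads ξ^+ = ((K-N)/K)ξ.

```python
# A sketch (illustrative N and eps are assumed) comparing the settling-time
# formula T_s = ceil(ln(eps)/ln(|K-N|/K)) with direct iteration of the
# complete-graph error recursion xi^+ = ((K-N)/K) * xi.
import math

N, eps = 10, 1e-3

def settling_time(K):
    return math.ceil(math.log(eps) / math.log(abs(K - N) / K))

def iterate_steps(K):
    xi, steps = 1.0, 0          # xi tracks the exponent alpha in E_ij[0]^alpha
    while abs(xi) > eps:
        xi *= (K - N) / K
        steps += 1
    return steps

for K in [6.0, 8.0, 12.0, 20.0, 40.0]:   # all satisfy K > K_min(N) = N/2
    print(f"K = {K:5.1f}:  formula T_s = {settling_time(K):3d},"
          f"  iterated = {iterate_steps(K):3d}")
```

The two counts agree, and the smallest settling times occur for K near N, consistent with the deadbeat case below.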
If 𝒢 is complete and K = N, then synchronization is achieved at time-step k = 1.Setting K = N in (<ref>), we have E_ij^+ = I.§ GENERAL LIE GROUPS For our purposes, the only difference between a generalized cylinder and any other Lie group is commutativity. Commutativity is the key property yielding Proposition <ref> and Lemma <ref>, from which all subsequent results follow. We now show that in a neighbourhood of the identity of any Lie group , commutativity holds approximately. Which has the very important implication that all our results for generalized cylinders hold mutatis mutandis on any Lie group in a neighbourhood of the identity. In particular, we obtain Theorem <ref>.The Baker-Campbell-Hausdorff (BCH) formula relates the product of two elements on the Lie groupto an analytic function of their principal logarithms. If A,B ∈, then the BCH formula has the series representation: (exp(A)exp(B))= A + B + 1/2[A,B] + 1/12[A,[A,B]] - 1/12[B,[A,B]] + ⋯, where the remaining terms are nested brackets of increasing order <cit.>. We will use (<ref>) to derive a linear approximation of the error dynamics on an arbitrary Lie groupnear the identity, or equivalently, near the origin on the associated Lie algebra . The linearization of the BCH formula at the origin ofis (exp(A)exp(B)) ≈ A + B.All nonlinear terms in (<ref>) are of the form [A,ad_A^k(B)] and [B,ad_B^k(A)], k ∈_≥ 0. Direct computation verifies ∂/∂ A[A,ad_A^k(B)] = [A,∂ad_A^k(B)/∂ A]. Thus, if (<ref>) is linearized at the origin, then all nonlinear terms vanish, and (exp(A)exp(B)) ≈ A + B.Near the identity, we have exp(A + B) ≈exp(A)exp(B) ≈exp(B)exp(A). Thus, commutativity is satisfied approximately in a neighbourhood of the identity of any Lie group . Therefore, all our results for generalized cylinders apply mutatis mutandis to arbitrary matrix Lie groups. Given _i and _j in a sufficiently small neighbourhood of zero, we have ϕ(_i + _j) ≈ϕ(_i)ϕ(_j).Forsufficiently small, ϕ(/K) ≈ϕ(t)^1/K. We state the analogues of key results for generalized cylinders. Their proofs, as well as the proof of Theorem <ref>, are straightforward applications of Corollaries <ref> and <ref>. The equilibrium {E_ij = I : i,j ∈_N} is isolated.Every equilibrium has the same stability properties as the identity. To illustrate how the proofs of these analogues differ, we present the difference between the proofs of Propositions <ref> and <ref>. The proof differs from that of Proposition <ref> in only one line of the arithmetic. By Corollaries <ref> and <ref>, E̅_ij^+≈(∏_p ∈𝒩_iE̅_1i^-_ipE̅_1p^_ip)^-1/KE̅_ij(∏_q ∈𝒩_jE̅_1j^-_jqE̅_1q^_jq)^1/K(∏_p ∈𝒩_iΞ_1i^-_ipΞ_1p^_ip)^-1/K(∏_q ∈𝒩_jΞ_1j^-_jqΞ_1q^_jq)^1/K. The rest of the proof is identical.§ SIMULATIONS §.§ Comparison with Kuramoto Network on SO(2)The Lie group 2 is one dimensional, thus, it is a one-parameter subgroup of n for any n ≥ 2. 2 is the group of rotations in the plane, which can be interpreted locally as a position on the circumference of a circle. Given an element R ∈2, its local coordinate ∈ is often called the “phase” or “angle”. The Kuramoto oscillator is a popular model of synchronization of networks of oscillators. We can view a Kuramoto network of N agents as a control system, where agent i has phase θ_i ∈ with dynamics θ̇_i = u_i,u_i = -∑_j ∈𝒩_ia_ijsin(θ_i - θ_j), where a_ij∈ is the coupling strength between agents i and j. System (<ref>) can be modelled as a system on a Lie group in the form of (<ref>), where R_i = ϕ(θ) = [cos(θ) -sin(θ);sin(θ)cos(θ) ], Ṙ_i = R_i[0 -1;10 ]u_i. 
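In phase coordinates, both schemes reduce to scalar updates, which makes the comparison easy to reproduce. The sketch below (parameter values and random initial phases are assumed for illustration) iterates the naively sampled Kuramoto coupling alongside the proposed update, which on SO(2) reduces to θ_i^+ = θ_i + (1/K)·wrap(Σ_j(θ_j-θ_i)), where wrap(·) denotes the principal angle, i.e., the scalar form of the principal logarithm on SO(2).

```python
# A phase-coordinate sketch (assumed parameters and initial phases) of the
# sampled Kuramoto coupling versus the proposed controller on SO(2) with a
# complete unweighted graph.
import numpy as np

rng = np.random.default_rng(0)
N, K, T, steps = 3, 2.0, 0.8, 50
theta0 = rng.uniform(-1.0, 1.0, N)

def wrap(x):                                  # principal angle in (-pi, pi]
    return np.angle(np.exp(1j * x))

def spread(th):                               # largest pairwise phase error
    return np.abs(wrap(th[:, None] - th[None, :])).max()

th_kur, th_new = theta0.copy(), theta0.copy()
for _ in range(steps):
    th_kur = th_kur - T * np.sin(th_kur[:, None] - th_kur[None, :]).sum(axis=1)
    th_new = th_new + wrap(th_new.sum() - N * th_new) / K   # complete graph

print("sampled Kuramoto spread  :", spread(th_kur))   # fails at T = 0.8
print("proposed update spread   :", spread(th_new))   # tends to zero
```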
We simulate using N = 3 and a_ij = 1 for all i,j ∈_N. It can be shown that with this choice of parameters, that (<ref>) achieves phase synchronization <cit.>. Sampling with period T = 0.1, we see in Figure <ref> that synchronization is preserved under sampling. But in Figure <ref>, we see that sampling with period T = 0.8, that sampling destroys synchronization.We simulate this network again using the proposed controller with K = 2 and T = 0.8.As seen in Figure <ref>, synchronization is achieved at T = 0.8, whereas it was lost using the naïvely discretized Kuramoto coupling.§.§ Deadbeat Performance on SO(2) To illustrate Proposition <ref>, we simulate a network with a complete connectivity graph on 2 with K = N = 40, T = 1 and initial phases evenly spaced from -π/(N + 1) to π/(N + 1): θ_i-π/41 + i2π/1599. As seen in Figure <ref>, all error phases are driven to zero in a single time-step.§.§ Network on SU(2) We simulate a network on 2 to demonstrate Theorem <ref> on a complex, non-commutative Lie group. We simulate a network with N = 6, K = 3.5, and graph Laplacian L = [0.5 -0.1 -0.1 -0.1 -0.1 -0.1;00.8 -0.2 -0.2 -0.2 -0.2;000.9 -0.3 -0.3 -0.3;0000.8 -0.4 -0.4;00000.5 -0.5;000000;] The Pauli matrices constitute the canonical basis of 𝔰𝔲(2): σ_1 = [ 0 j; j 0 ], σ_2 = [0 -1;10 ], σ_3 = [j0;0 -j ]. We use the Pauli matrices to generate the initial conditions: U_i(0) exp(a_iσ_1 + b_iσ_2 + c_iσ_3), where a_i-0.32 + i0.6/(N-1), b_i-0.06 +i0.3/(N - 1), c_i-0.42 + i0.6/(N - 1).For visualization, we plot the Euclidean norms of E_1j - I, j ∈{2,…,N}. As seen in Figure <ref>, the errors tend to identity, thus synchronization is achieved. § FUTURE RESEARCHFuture work includes extending our results to agents with dynamic models and relaxing the assumption that the agents are fully actuated. The latter could first be addressed by assuming that the Lie algebra generated by the input vector fields equals . It would also be of interest to extend our results to time-varying connectivity graphs.IEEEtran§ APPENDIX Write λ∈ in Cartesian form λσ + jω, σ, ω∈. The eigenvalues of the Laplacian of a directed graph lie in the closed interior of the region in , whose boundary is defined the parametrized curves: c_i(σ) σ + jω_i(σ), i ∈_5, and their complex conjugates c̅_i, i ∈{2,3,4} <cit.>, where the ω_i are defined by the loci:* (σ - 1)^2 + ω_1(σ)^2 = (N - 1)^2,* ω_2(σ) = (π/N)σ,* ω_3(σ) = 1/2(π/2N),* ω_4(σ) = (π/N)(N - σ),* (σ + 1 - N)^2 + ω_5(σ)^2 = (N - 1)^2. If N = 2, then this region reduces to the interval [0, N], so K > N/2 implies |f(λ)| < 1 for all λ∈σ(L)∖{0}. If N = 3, then this region reduces to the rhombus with vertices 0, N, and ± jN/2√(3). For 4 ≤ N ≤ 18, the region is a hexagon, as illustrated in Figure <ref>. If N ≥ 19, then the region appears as in Figure <ref>.To lower bound K using only the number of agents N, we maximize the lower bound on K in (<ref>), denoted by g = 0.5|λ|^2/Re(λ) = 0.5(σ^2 + ω^2)/σ, over this region. Since the region is a compact set, the maximum of g is attained at either a critical point or at a point on the boundary of this set. The differential of g is dg= [ ∂ g/∂σ ∂ g/∂ω ]= [ 4σ^2 - 2(σ^2 + ω^2)/4σ^2ω/σ ]= [ 1/2(1 - ω^2/σ^2)ω/σ ], which vanishes nowhere, thus g has no critical points. Therefore, g attains its maximum at a point on the boundary. We parametrize the boundary of the region by σ, and maximize g on this compact, one-dimensional set. 
Since the Laplacian is a real matrix, its eigenvalues appear in complex conjugate pairs, so it suffices to consider the upper half complex plane.Let σ_ij denote the value of σ at which locus i intersects locus j. Solving ω_i(σ) = ω_j(σ) for σ, we find:* σ_35 = N - 1 - √((N - 1)^2 - (1/2(π/2N))^2),* σ_13 = 1 + √((N - 1)^2 - (1/2(π/2N))^2),* σ_14 = (N - 1)cos(2π/N) + 1 or N,* σ_25 = (N - 1)(1 - cos(2π/N)) or 0,* σ_23 = 1/2(1 + (π/N)),* σ_34 = N - 1/2(1 + (π/N)),* σ_24 = N/2. This boundary is illustrated for 4 ≤ N ≤ 18 in Figure <ref>, and N ≥ 19 in Figure <ref>.The Lie derivatives of g in the direction of c_i are:* L_c_1g(σ) = -N(N - 2)/2σ^2,* L_c_2g(σ) = 1/2(1 + ^2(π/N)),* L_c_3g(σ) = 1/2 - ^2(π/2N)/8σ^2),* L_c_4g(σ) = ^2(π/N)/2 - N^2^2(π/N)/2 σ^2,* L_c_5g(σ) = 0. Let σ_i^⋆ denote the value of σ at which L_c_ig vanishes, which are the critical points of the restriction of g to the boundary. We have* L_c_1g(σ) = 0 if and only if N = 0 or N = 2,* L_c_2g(σ) ≠ 0 for all σ∈, N ∈,* σ_3^⋆ = 1/2(π/2N),* σ_4^⋆ = Ncos(π/N),* L_c_5g(σ) =0 identically. We determine whether σ_i^⋆ is a local maximum or minimum by examining the second Lie derivative of g evaluated at σ_i^⋆: L^2_c_ig(σ) = dL_c_ig(σ)dc_i/dσ= [ ∂^2g/∂σ^2 + ∂^2g/∂ω∂σdω_i/dσ + ∂ g/∂ωd^2ω_i/dσ^2 ∂^2g/∂σ∂ω + ∂^2g/∂ω^2dω_i/dσ ][ 1; dω_i/dσ ]= ω^2/σ^3 - 2ω/σ^2dω_i/dσ + ω/σd^2ω_i/dσ^2 + 1/σ(dω_i/dσ)^2. For i = 3 and i = 4 we have * L^2_c_3g(σ) = ^2(π/2N)/(4σ^3) > 0 for all σ≥ 0, N ≥ 2, * L^2_c_4g(σ) = N^2^2(π/2)/σ^3 >0 for all σ≥ 0, N ≥ 2. Thus, the critical points on loci 3 and 4 are minima, thus the maxima of g on these loci restricted to the boundary are attained at the intersection points, which we now characterize.Let g_i ∈_≥ 0 be the value of g at c_i(σ_i^⋆), let g_ij∈_≥ 0 be the value of g at c_i(σ_ij) = c_j(σ_ij), and let g_N be the value of g at σ = N, ω = 0. Since the value of g is constant on locus 5, we do not consider the intersection points of locus 5 with loci 2 or 3. We have:* g_5 = N - 1,* g_13 = 1 + N(N - 2)/2(1 + √((N - 1)^2 - (1/2(π/N))^2)),* g_14 = 1 + N(N - 2)/2(1 + (N - 1)cos(2π/N)),* g_23 = 1/8^2(π/2N)(π/N),* g_34 = ^2(π/2 N) + (2N - 1 - (π/N))^2/4(2N - 1 - (π/N)),* g_24 = N(^2(π/N)+ 1)/4,* g_N = N/2. Finally, we identify the maximum value of g on the boundary. For N = 3, the boundary is defined by only loci 2 and 4. N = 3 is also the only case in which loci 2 and 4 intersect. In can be shown numerically that for N = 3, that g_N maximizes g. It can be verified numerically that if 4 ≤ N ≤ 9, then g_N is the maximum of g, and if 10 ≤ N ≤ 18, then g_23 is the maximum of g, which proves the first two cases in (<ref>).For N ≥ 19, we now establish that max{g_5, g_13, g_14} = g_5. Notice that g_13 and g_14 can be expressed: g_13 = 1 + N(N - 2)/2σ_13,g_14 = 1 + N(N - 2)/2σ_14. From their definitions, g_5 ≥ g_13 if and only if N - 1 ≥ 1 + N(N - 2)/2σ_13σ_13≥N/2. Similarly, we find that g_5 ≥ g_14 if and only if σ_14 > N/2. By the geometry of the region as discussed in <cit.> and illustrated in Figure <ref>, these inequalities hold for all N ≥ 19.
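The two clean cases of the bound K_min(N) lend themselves to a quick numerical spot check. The sketch below (a Monte Carlo check over randomly weighted digraphs with weights in [0,1], as in the graph model of this paper; it is illustrative, not exhaustive) confirms that g(λ)=|λ|^2/(2Re(λ)), evaluated over the nonzero Laplacian spectrum, stays below N/2 for N ≤ 9 and below N-1 for N ≥ 19.

```python
# A Monte Carlo spot check (random weighted digraphs; illustrative only) that
# g(lambda) = |lambda|^2 / (2 Re(lambda)) over the nonzero Laplacian spectrum
# stays below K_min(N): N/2 for N <= 9 and N - 1 for N >= 19.
import numpy as np

rng = np.random.default_rng(42)
for N, bound in [(5, 2.5), (8, 4.0), (25, 24.0), (40, 39.0)]:
    worst = 0.0
    for _ in range(200):
        W = rng.uniform(0.0, 1.0, (N, N))      # edge weights in [0, 1]
        np.fill_diagonal(W, 0.0)               # no self-loops
        L = np.diag(W.sum(axis=1)) - W         # digraph Laplacian
        lam = np.linalg.eigvals(L)
        lam = lam[np.abs(lam) > 1e-9]          # drop the zero eigenvalue
        worst = max(worst, float(np.max(np.abs(lam)**2 / (2.0 * lam.real))))
    print(f"N = {N:2d}:  max g over samples = {worst:8.3f} <= K_min = {bound:.1f}")
```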
{ "authors": [ "Philip James McCarthy", "Christopher Nielsen" ], "categories": [ "cs.SY", "math.OC" ], "primary_category": "cs.SY", "published": "20170227205346", "title": "Local Synchronization of Sampled-Data Systems on Lie Groups" }
A Unified Approach for Drawdown (Drawup) of Time-Homogeneous Markov Processes

David Landriault[Department of Statistics and Actuarial Science, University of Waterloo, Waterloo, ON, N2L 3G1, Canada (dlandria@uwaterloo.ca)] Bin Li[Corresponding author: Department of Statistics and Actuarial Science, University of Waterloo, Waterloo, ON, N2L 3G1, Canada (bin.li@uwaterloo.ca)] Hongzhong Zhang[Department of IEOR, Columbia University, New York, NY, 10027, USA (hz2244@columbia.edu)]

December 30, 2023
=============================================================================

Drawdown (resp. drawup) of a stochastic process, also referred to as the reflected process at its supremum (resp. infimum), has wide applications in many areas including financial risk management, actuarial mathematics and statistics. In this paper, for general time-homogeneous Markov processes, we study the joint law of the first passage time of the drawdown (resp. drawup) process, its overshoot, and the maximum of the underlying process at this first passage time. By using short-time pathwise analysis, under some mild regularity conditions, the joint law of the three drawdown quantities is shown to be the unique solution to an integral equation which is expressed in terms of fundamental two-sided exit quantities of the underlying process. Explicit forms for this joint law are found when the Markov process has only one-sided jumps or is a Lévy process (possibly with two-sided jumps). The proposed methodology provides a unified approach to study various drawdown quantities for the general class of time-homogeneous Markov processes.

Keywords: Drawdown; Integral equation; Reflected process; Time-homogeneous Markov process

MSC(2000): Primary 60G07; Secondary 60G40

§ INTRODUCTION

We consider a time-homogeneous, real-valued, non-explosive, càdlàg Markov process X=(X_t)_t≥0 with state space ℝ [The state space can sometimes be relaxed to an open interval of ℝ (e.g., (0,+∞) for geometric Brownian motions). It is also possible to treat some general state spaces with complex boundary behaviors. However, for simplicity, we choose ℝ as the state space of X in this paper.] defined on a filtered probability space (Ω,ℱ,F=(ℱ_t)_t≥ 0,ℙ) with a complete and right-continuous filtration. Throughout, we silently assume that X satisfies the strong Markov property (see Sections III.8 and III.9 of Rogers and Williams <cit.>), and exclude Markov processes with monotone paths. The first passage time of X above (below) a level x∈ℝ is denoted by T_x^+(-)=inf{t≥0:X_t>(<)x}, with the common convention that inf∅=∞. The drawdown process of X (also known as the reflected process of X at its supremum) is denoted by Y=(Y_t)_t≥0 with Y_t=M_t-X_t, where M_t=sup_0≤ s≤ tX_s. Let τ_a=inf{t>0:Y_t>a} be the first time the magnitude of the drawdown exceeds a given threshold a>0. Note that {sup_0≤ s≤ tY_s>a}={τ_a≤ t} ℙ-a.s. Hence, the distributional study of the maximum drawdown of X is equivalent to the study of the stopping time τ_a. Similarly, the drawup process of X is defined as Ŷ_t=X_t-m_t for t≥0, where m_t=inf_0≤ s≤ tX_s.
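To fix ideas, these path functionals are simple to sample. The sketch below (a discretized drifted Brownian motion with assumed parameters; an illustration only, not part of the methodology developed in this paper) draws Monte Carlo samples of the triplet (τ_a, M_τ_a, Y_τ_a) studied below; for continuous paths the overshoot Y_τ_a-a vanishes up to discretization error. The same code applied to -X samples the corresponding drawup quantities.

```python
# A Monte Carlo sketch (discretized drifted Brownian motion with assumed
# parameters) of the running maximum M, the drawdown Y = M - X, and the
# first passage time tau_a of the drawdown above a level a.
import numpy as np

rng = np.random.default_rng(7)
mu, sigma, a = 0.05, 1.0, 1.0
dt, horizon = 1e-3, 50.0
n = int(horizon / dt)

def sample_drawdown():
    x = np.cumsum(mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(n))
    m = np.maximum.accumulate(x)             # running maximum M_t
    y = m - x                                # drawdown Y_t
    hit = np.argmax(y > a)                   # first index with Y > a
    if y[hit] <= a:
        return None                          # tau_a not reached by the horizon
    return hit * dt, m[hit], y[hit]

draws = [d for d in (sample_drawdown() for _ in range(2000)) if d]
tau, M, Y = map(np.array, zip(*draws))
print("mean tau_a      :", tau.mean())
print("mean M_{tau_a}  :", M.mean())
print("mean overshoot  :", (Y - a).mean())   # ~0 for continuous paths
```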
However, given that the drawup of X can be investigated via the drawdown of -X, we exclusively focus on the drawdown process Y in this paper.Applications of drawdowns can be found in many areas. For instance, drawdowns are widely used by mutual funds and commodity trading advisers to quantify downside risks. Interested readers are referred to Schuhmacher and Eling <cit.> for a review of drawdown-based performance measures. An extensive body of literature exists on the assessment and mitigation of drawdown risks (e.g., Grossman and Zhou <cit.>, Carr et al. <cit.>, Cherny and Obloj <cit.>, and Zhang et al. <cit.>). Drawdowns are also closely related to many problems in mathematical finance, actuarial science and statistics such as the pricing of Russian options (e.g., Shepp and Shiryaev <cit.>, Asmussen et al. <cit.> and Avram et al. <cit.>), De Finetti's dividend problem (e.g., Avram et al. <cit.> and Loeffen <cit.>), loss-carry-forward taxation models (e.g., Kyprianou and Zhou <cit.> and Li et al. <cit.>), and change-point detection methods (e.g., Poor and Hadjiliadis <cit.>). More specifically, in De Finetti's dividend problem under a fixed dividend barrier a>0, the underlying surplus process with dividend payments is a process obtained from reflecting X at a fixed barrier a (the reflected process' dynamics may be different than the drawdown process Y when the underlying process X is not spatial homogeneous). However, the distributional study of ruin quantities in De Finetti's dividend problem can be transformed to the study of drawdown quantities for the underlying surplus process; see Kyprianou and Palmowski <cit.> for a more detailed discussion. Similarly, ruin problems in loss-carry-forward taxation models can also be transformed to a generalized drawdown problem for classical models without taxation, where the generalized drawdown process is defined in the form of Y_t=γ(M_t)-X_t for some measurable function γ(·).The distributional study of drawdown quantities is not only of theoretical interest, but also plays a fundamental role in the aforementioned applications. Early distributional studies on drawdowns date back to Taylor <cit.> on the joint Laplace transform of τ_a and M_τ_a for Brownian motions. This result was later generalized by Lehoczky <cit.> to time-homogeneous diffusion processes. Douady et al. <cit.> and Magdon et al. <cit.> derived infinite series expansions for the distribution of τ_a for a standard Brownian motion and a drifted Brownian motion, respectively. For spectrally negative Lévy processes, Mijatovic and Pistorius <cit.> obtained a sextuple formula for the joint Laplace transform of τ_a and the last reset time of the maximum prior to τ_a, together with the joint distribution of the running maximum, the running minimum, and the overshoot of Y at τ_a. For some studies on the joint law of drawdown and drawup of spectrally negative Lévy processes or diffusion processes, please refer to Pistorius <cit.>, Pospisil et al. <cit.>, Zhang and Hadjiliadis <cit.>, and Zhang <cit.>.As mentioned above, Lévy processes[Most often, one-sided Lévy processes (an exception to this is Baurdoux <cit.> for general Lévy processes)] and time-homogeneous diffusion processes are two main classes of Markov processes for which various drawdown problems have been extensively studied. The treatment of these two classes of Markov processes has typically been considered distinctly in the literature. 
For Lévy processes, Itô's excursion theory is a powerful approach to handle drawdown problems (e.g., Avram et al. <cit.>, Pistorius <cit.>, and Mijatovic and Pistorius <cit.>). However, the excursion-theoretic approach is somewhat specific to the underlying model, and additional care is required when a more general class of Markov processes is considered. On the other hand, for time-homogeneous diffusion processes, Lehoczky <cit.> introduced an ingenious approach which has recently been generalized by many researchers (e.g., Zhou <cit.>, Li et al. <cit.>, and Zhang <cit.>). Here again, Lehoczky's approach relies on the continuity of the sample path of the underlying model, and hence is not applicable for processes with upward jumps. Also, other general methodologies (such as the martingale approach in, e.g., Asmussen <cit.> and the occupation density approach in, e.g., Ivanovs and Palmowski <cit.>) are well documented in the literature but they strongly depend on the specific structure of the underlying process. To the best of our knowledge, no unified treatment of drawdowns (drawups) for general Markov processes has been proposed in the literature.In this paper, we propose a general and unified approach to study the joint law of (τ_a,M_τ_a,Y_τ_a) for time-homogeneous Markov processes with possibly two-sided jumps. Under mild regularity conditions, the joint law is expressed as the solution to an integral equation which involves two-sided exit quantities of the underlying process X. The uniqueness of the integral equation for the joint law is also investigated. In particular, the joint law possesses explicit forms when X has only one-sided jumps or is a Lévy process (possibly with two-sided jumps). In general, our main result reduces the drawdown problem to fundamental two-sided exit quantities.The main idea of our proposed approach is briefly summarized below. By analyzing the evolution of sample paths over a short time period following time 0 and using renewal arguments, we first establish tight upper and lower bounds for the joint law of (τ_a,M_τ_a,Y_τ_a) in terms of the two-sided exit quantities. Then, under mild regularity conditions, we use a Fatou's lemma with varying measures to show that the upper and lower bounds converge when the length of the time interval approaches 0. This leads to an integro-differential equation satisfied by the desired joint law. Finally, we reduce the integro-differential equation to an integral equation. When X is a spectrally negative Markov process or a general Lévy process, the integral equation can be solved and the joint law of (τ_a,M_τ_a ,Y_τ_a) is hence explicitly expressed in terms of two-sided exit quantities.The rest of the paper is organized as follows. In Section 2, we introduce some fundamental two-sided exit quantities and present several preliminary results. In Section 3, we derive the joint law of (τ_a,Y_τ_a,M_τ_a ) for general time-homogeneous Markov processes. Several Markov processes for which the proposed regularity conditions are met are further discussed. Some numerical examples are investigated in more detail in Section 4. Some technical proofs are postponed to Appendix.§ PRELIMINARY For ease of notation, we adopt the following conventions throughout the paper. We denote by ℙ_x the law of X given X_0=x∈ ℝ and write ℙ≡ℙ_0 for brevity. 
We write u∧ v=min{u,v}, ℝ_+=[0,∞), and ∫_x^y·dz for an integral on the open interval z∈(x,y).

For q,s≥0, u≤ x≤ v and z>0, we introduce the following two-sided exit quantities of X:

B_1^(q)(x;u,v) :=𝔼_x[e^-qT_v^+1_{T_v^+<∞, T_v^+<T_u^-, X_T_v^+=v}],

B_2^(q)(x,dz;u,v) :=𝔼_x[e^-qT_v^+1_{T_v^+<∞, T_v^+<T_u^-, X_T_v^+-v∈dz}],

C^(q,s)(x;u,v) :=𝔼_x[e^-qT_u^--s(u-X_T_u^-)1_{T_u^-<∞, T_u^-<T_v^+}].

We also define the joint Laplace transform

B^(q,s)(x;u,v):=𝔼_x[e^-qT_v^+-s(X_T_v^+-v)1_{T_v^+<∞, T_v^+<T_u^-}]=B_1^(q)(x;u,v)+B_2^(q,s)(x;u,v),

where B_2^(q,s)(x;u,v):=∫_0^∞e^-szB_2^(q)(x,dz;u,v).

The following pathwise inequalities are central to the construction of tight bounds for the joint law of the triplet (τ_a,M_τ_a,Y_τ_a). For q,s≥0, x∈ℝ and ε∈(0,a), we have ℙ_x-a.s.

1_{T_x+ε^+<∞, T_x+ε^+<T_x+ε-a^-}≤1_{T_x+ε^+<∞, T_x+ε^+<τ_a}≤1_{T_x+ε^+<∞, T_x+ε^+<T_x-a^-},

and

e^-qτ_a-s(Y_τ_a-a)1_{τ_a<∞, τ_a<T_x+ε^+} ≥ e^-qT_x-a^--s(x-a-X_T_x-a^-)-sε1_{T_x-a^-<∞, T_x-a^-<T_x+ε^+},

e^-qτ_a-s(Y_τ_a-a)1_{τ_a<∞, τ_a<T_x+ε^+} ≤ e^-qT_x+ε-a^--s(x-a-X_T_x+ε-a^-)1_{T_x+ε-a^-<∞, T_x+ε-a^-<T_x+ε^+}.

By analyzing the sample paths of X, it is easy to see that ℙ_x{τ_a≤ T_x-a^-}=1, so

(T_x+ε^+<∞, T_x+ε^+<τ_a)=(T_x+ε^+<∞, T_x+ε^+<τ_a≤ T_x-a^-)⊂(T_x+ε^+<∞, T_x+ε^+<T_x-a^-) ℙ_x-a.s.

and similarly, ℙ_x-a.s.

(T_x+ε^+<∞, T_x+ε^+<T_x+ε-a^-)=(T_x+ε^+<∞, T_x+ε^+<T_x+ε-a^-, T_x+ε^+<τ_a)⊂(T_x+ε^+<∞, T_x+ε^+<τ_a),

which immediately implies (<ref>). On the other hand, by using the same argument, we have

(T_x-a^-<∞, T_x-a^-<T_x+ε^+)=(T_x-a^-<∞, τ_a≤ T_x-a^-<T_x+ε^+)⊂(τ_a<∞, τ_a<T_x+ε^+) ℙ_x-a.s.

and

(τ_a<∞, τ_a<T_x+ε^+)=(τ_a<∞, T_x+ε-a^-≤τ_a<T_x+ε^+)⊂(T_x+ε-a^-<∞, T_x+ε-a^-<T_x+ε^+) ℙ_x-a.s.

For any path ω∈(T_x-a^-<∞, T_x-a^-<T_x+ε^+), we know from (<ref>) that ω∈(T_x-a^-<∞, τ_a≤ T_x-a^-<T_x+ε^+). This implies M_τ_a(ω)≤ x+ε and X_τ_a(ω)≥ X_T_x-a^-(ω), which further entails that Y_τ_a(ω)=M_τ_a(ω)-X_τ_a(ω)≤ x+ε-X_T_x-a^-(ω). Therefore, by the above analysis and the second inequality of (<ref>),

e^-qT_x-a^--s(x+ε-X_T_x-a^-)1_{T_x-a^-<∞, T_x-a^-<T_x+ε^+}≤ e^-qτ_a-sY_τ_a1_{τ_a<∞, τ_a<T_x+ε^+} ℙ_x-a.s.,

which naturally leads to (<ref>).

Similarly, for any sample path ω∈(τ_a<∞, τ_a<T_x+ε^+), we know from (<ref>) that ω∈(τ_a<∞, T_x+ε-a^-≤τ_a<T_x+ε^+), which implies that x-X_T_x+ε-a^-(ω)≤ Y_T_x+ε-a^-(ω)≤ Y_τ_a(ω). Therefore, by the first inequality of (<ref>), we obtain

e^-qτ_a-sY_τ_a1_{τ_a<∞, τ_a<T_x+ε^+}≤ e^-qT_x+ε-a^--s(x-X_T_x+ε-a^-)1_{T_x+ε-a^-<∞, T_x+ε-a^-<T_x+ε^+} ℙ_x-a.s.

This implies the second inequality of (<ref>).

By Proposition <ref>, we easily obtain the following useful estimates. For q,s≥0, x∈ℝ, z>0 and ε∈(0,a),

B_1^(q)(x;x+ε-a,x+ε) ≤𝔼_x[e^-qT_x+ε^+1_{T_x+ε^+<∞, T_x+ε^+<τ_a, X_T_x+ε^+=x+ε}]≤ B_1^(q)(x;x-a,x+ε),

B_2^(q)(x,dz;x+ε-a,x+ε) ≤𝔼_x[e^-qT_x+ε^+1_{T_x+ε^+<∞, T_x+ε^+<τ_a, X_T_x+ε^+-x-ε∈dz}] ≤ B_2^(q)(x,dz;x-a,x+ε),

and

e^-sεC^(q,s)(x;x-a,x+ε)≤𝔼_x[e^-qτ_a-s(Y_τ_a-a)1_{τ_a<∞, τ_a<T_x+ε^+}]≤ e^sεC^(q,s)(x;x+ε-a,x+ε).

It is not difficult to check that the results of Proposition <ref> and Corollary <ref> still hold if the first passage times and the drawdown times are only observed discretely or randomly (such as the Poisson observation framework in Albrecher et al. <cit.> for the latter). Further, an explicit relationship between Poisson-observed first passage times and Poisson-observed drawdown times (similar to Theorem <ref> below) can be found by exploiting the same approach as laid out in this paper. The later analysis involves the weak convergence of measures, which is recalled here.
Consider a metric space S with the Borel σ-algebra on it. We say a sequence of finite measures {μ_n}_n∈ℕ is weakly convergent to a finite measure μ as n→∞ if

lim_n→∞∫_Sϕ(z)dμ_n(z)=∫_Sϕ(z)dμ(z),

for any bounded and continuous function ϕ(·) on S.

In the next lemma, we show some forms of Fatou's lemma for varying measures under weak convergence. Similar results are proved in Feinberg et al. <cit.> for probability measures. For completeness, a proof for general finite measures is provided in the Appendix. Suppose that {μ_n}_n∈ℕ is a sequence of finite measures on S which is weakly convergent to a finite measure μ, and {ϕ_n}_n∈ℕ is a sequence of uniformly bounded and nonnegative functions on S. Then,

∫_S lim inf_n→∞, w→ zϕ_n(w)dμ(z)≤lim inf_n→∞∫_Sϕ_n(z)dμ_n(z),

and

∫_S lim sup_n→∞, w→ zϕ_n(w)dμ(z)≥lim sup_n→∞∫_Sϕ_n(z)dμ_n(z).

§ MAIN RESULTS

In this section, we study the joint law of (τ_a,M_τ_a,Y_τ_a) for a general Markov process with possibly two-sided jumps. The following assumptions on the two-sided exit quantities of X are assumed to hold, which are sufficient (but not necessary) conditions for the applicability of our proposed methodology. Weaker assumptions might be assumed for special Markov processes; see, for instance, Remark <ref> and Corollary <ref> below. For all q,s≥0, z>0 and x>X_0, we assume the following limits exist and identities hold:

(A1) b_a,1^(q)(x) :=lim_ε↓0 (1-B_1^(q)(x;x-a,x+ε))/ε =lim_ε↓0 (1-B_1^(q)(x;x+ε-a,x+ε))/ε =lim_ε↓0 (1-B_1^(q)(x-ε;x-a,x))/ε =lim_ε↓0 (1-B_1^(q)(x-ε;x-ε-a,x))/ε,

and ∫_x^y b_a,1^(q)(w)dw<∞ for any x,y∈ℝ;

(A2) b_a,2^(q,s)(x) :=lim_ε↓0 (1/ε)B_2^(q,s)(x;x-a,x+ε) =lim_ε↓0 (1/ε)B_2^(q,s)(x;x+ε-a,x+ε) =lim_ε↓0 (1/ε)B_2^(q,s)(x-ε;x-a,x) =lim_ε↓0 (1/ε)B_2^(q,s)(x-ε;x-ε-a,x),

and s⟼ b_a,2^(q,s)(x) is right continuous at s=0;

(A3) c_a^(q,s)(x) :=lim_ε↓0 C^(q,s)(x;x-a,x+ε)/ε =lim_ε↓0 C^(q,s)(x;x+ε-a,x+ε)/ε =lim_ε↓0 C^(q,s)(x-ε;x-a,x)/ε =lim_ε↓0 C^(q,s)(x-ε;x-ε-a,x)/ε.

Under Assumptions (A1) and (A2), it follows from (<ref>) that

b_a^(q,s)(x):=lim_ε↓0 (1-B^(q,s)(x;x-a,x+ε))/ε=b_a,1^(q)(x)-b_a,2^(q,s)(x).

Due to the general structure of X, it is difficult to refine Assumptions (A1)-(A3) unless a specific structure for X is given. A necessary condition for Assumptions (A1)-(A3) to hold is that

T_x^+=0 and X_T_x^+=x, ℙ_x-a.s. for all x∈ℝ.

In other words, X must be upward regular and creeping upward at every x.[See page 142 and page 197 of <cit.> for definitions of regularity and creeping for Lévy processes.] In the later part of this section, we provide some examples of Markov processes which satisfy Assumptions (A1)-(A3), including spectrally negative Lévy processes, linear diffusions, piecewise exponential Markov processes, and jump diffusions.

By Theorem 5.22 of Kallenberg <cit.> or Proposition 7.1 of Landriault et al. <cit.>, we know that Assumption (A2) implies that the measures (1/ε)B_2^(q)(x,dz;x-a,x+ε), (1/ε)B_2^(q)(x,dz;x+ε-a,x+ε), (1/ε)B_2^(q)(x-ε,dz;x-a,x) and (1/ε)B_2^(q)(x-ε,dz;x-ε-a,x) weakly converge to the same measure on ℝ_+, denoted as b_a,2^(q)(x,dz), such that ∫_ℝ_+e^-szb_a,2^(q)(x,dz)=b_a,2^(q,s)(x). We point out that it is possible that b_a,2^(q)(x,{0})>0, though the measure B_2^(q)(x,dz;u,v) is only defined on z∈(0,∞). We are now ready to present the main result of this paper related to the joint law of (τ_a,Y_τ_a,M_τ_a). Consider a general time-homogeneous Markov process X satisfying Assumptions (A1)-(A3).
For q,s≥0 and K∈ℝ, let

h(x)=𝔼_x[e^-qτ_a-s(Y_τ_a-a)1_{τ_a<∞, M_τ_a≤ K}], x≤ K.

Then h(·) is differentiable in x<K and solves the following integral equation

h(x)=∫_x^Ke^-∫_x^yb_a,1^(q)(w)dw(c_a^(q,s)(y)+∫_[0,K-y)h(y+z)b_a,2^(q)(y,dz))dy, x≤ K.

By the strong Markov property of X, for any X_0=x≤ y<K and 0<ε<(K-y)∧ a, we have

h(y) =𝔼_y[e^-qτ_a-s(Y_τ_a-a)1_{τ_a<∞, τ_a<T_y+ε^+}]+𝔼_y[e^-qT_y+ε^+1_{T_y+ε^+<∞, T_y+ε^+<τ_a, X_T_y+ε^+=y+ε}]h(y+ε)+∫_0^K-y-ε𝔼_y[e^-qT_y+ε^+1_{T_y+ε^+<∞, T_y+ε^+<τ_a, X_T_y+ε^+-y-ε∈dz}]h(y+ε+z).

By Corollary <ref>, it follows that

h(y+ε)-h(y) ≥-e^sεC^(q,s)(y;y+ε-a,y+ε)+(1-B_1^(q)(y;y-a,y+ε))h(y+ε)-∫_0^K-y-εh(y+ε+z)B_2^(q)(y,dz;y-a,y+ε),

and

h(y+ε)-h(y) ≤-e^-sεC^(q,s)(y;y-a,y+ε)+(1-B_1^(q)(y;y+ε-a,y+ε))h(y+ε)-∫_0^K-y-εh(y+ε+z)B_2^(q)(y,dz;y+ε-a,y+ε).

By Assumptions (A1)-(A3) and h(·)∈[0,1], it is clear that both the lower bound of h(y+ε)-h(y) in (<ref>) and the upper bound in (<ref>) vanish as ε↓0. Hence, h(y) is right continuous for y∈[x,K). Replacing y by y-ε in (<ref>) and (<ref>), and using Assumptions (A1)-(A3) again, it follows that h(y) is also left continuous for y∈(x,K] with h(K)=0. Therefore, h(y) is continuous for y∈[x,K] (right continuous at x and left continuous at K).

To further show the differentiability, we divide inequalities (<ref>) and (<ref>) by ε. It follows from Assumptions (A1)-(A3), Remark <ref>, Lemma <ref> and the continuity of h that

lim inf_ε↓0 (h(y+ε)-h(y))/ε ≥-c_a^(q,s)(y)+b_a,1^(q)(y)h(y)-lim sup_ε↓0 (1/ε)∫_0^K-y-εh(y+ε+z)B_2^(q)(y,dz;y-a,y+ε) ≥-c_a^(q,s)(y)+b_a,1^(q)(y)h(y)-∫_[0,K-y)h(y+z)b_a,2^(q)(y,dz),

and similarly,

lim sup_ε↓0 (h(y+ε)-h(y))/ε ≤-c_a^(q,s)(y)+b_a,1^(q)(y)h(y)-∫_[0,K-y)h(y+z)b_a,2^(q)(y,dz).

Since the two limits coincide, one concludes that h(y) is right differentiable for y∈(x,K). Moreover, by replacing y by y-ε in (<ref>) and (<ref>), and using similar arguments, we can show that h(y) is also left differentiable for y∈(x,K). Since the left and right derivatives coincide, we conclude that h(y) is differentiable for any y∈(x,K) and solves the following ordinary integro-differential equation (OIDE),

h^'(y)-b_a,1^(q)(y)h(y)=-c_a^(q,s)(y)-∫_[0,K-y)h(y+z)b_a,2^(q)(y,dz).

Multiplying both sides of (<ref>) by e^-∫_x^yb_a,1^(q)(w)dw, integrating the resulting equation (with respect to y) from x to K, and using h(K)=0, we arrive at (<ref>). This completes the proof of Theorem <ref>.

When the Markov process X is spectrally negative (i.e., with no upward jumps), the upward overshooting density b_a,2^(q)(x,dz) is trivially 0. Theorem <ref> then reduces to the following corollary. Consider a spectrally negative time-homogeneous Markov process X satisfying Assumptions (A1) and (A3). For q,s≥0 and K>0, we have

𝔼_x[e^-qτ_a-s(Y_τ_a-a)1_{τ_a<∞, M_τ_a≤ K}]=∫_x^Ke^-∫_x^yb_a,1^(q)(w)dwc_a^(q,s)(y)dy, x≤ K.

When X is a general Lévy process (possibly with two-sided jumps), we have the following result for the joint Laplace transform of the triplet (τ_a,Y_τ_a,M_τ_a). Note that Corollary <ref> should be compared to Theorem 4.1 of Baurdoux <cit.>, in which, under the Lévy framework, the resolvent density of Y is expressed in terms of the resolvent density of X using excursion theory. Consider a Lévy process X satisfying Assumptions (A1)-(A3). For q,s,δ≥0, we have[For Lévy processes, ℙ{τ_a<∞}=1 as long as X is not monotone.]

𝔼[e^-qτ_a-s(Y_τ_a-a)-δ M_τ_a] =c_a^(q,s)(0)/(δ+b_a^(q,δ)(0)).

By the spatial homogeneity of the Lévy process X, Eq.
(<ref>) at x=0 reduces to

h(0)=(c_a^(q,s)(0)/b_a,1^(q)(0))(1-e^-b_a,1^(q)(0)K)+∫_0^Ke^-b_a,1^(q)(0)y∫_[0,K-y)h(y+z)b_a,2^(q)(0,dz)dy.

Let

ĥ(0):=𝔼[e^-qτ_a-s(Y_τ_a-a)-δ M_τ_a]=𝔼[e^-qτ_a-s(Y_τ_a-a)1_{M_τ_a≤ e_δ}],

where e_δ is an independent exponential random variable with finite mean 1/δ>0. Multiplying both sides of (<ref>) by δ e^-δ K, integrating the resulting equation (with respect to K) from 0 to ∞, and using integration by parts, one obtains

ĥ(0) =c_a^(q,s)(0)/(δ+b_a,1^(q)(0))+∫_0^∞δ e^-δ K∫_0^Ke^-b_a,1^(q)(0)y∫_[0,K-y)h(y+z)b_a,2^(q)(0,dz)dydK
=c_a^(q,s)(0)/(δ+b_a,1^(q)(0))+∫_0^∞e^-b_a,1^(q)(0)ydy∫_ℝ_+b_a,2^(q)(0,dz)∫_z+y^∞δ e^-δ K𝔼[e^-qτ_a-s(Y_τ_a-a)1_{M_τ_a≤ K-y-z}]dK
=c_a^(q,s)(0)/(δ+b_a,1^(q)(0))+ĥ(0)∫_ℝ_+e^-δ zb_a,2^(q)(0,dz)/(δ+b_a,1^(q)(0)).

Solving for ĥ(0) and using (<ref>), it follows that

ĥ(0)=c_a^(q,s)(0)/(δ+b_a,1^(q)(0)-∫_ℝ_+e^-δ zb_a,2^(q)(0,dz))=c_a^(q,s)(0)/(δ+b_a^(q,δ)(0)).

It follows from the monotone convergence theorem that (<ref>) also holds for δ=0.

We point out that Assumptions (A1)-(A3) are not necessary to yield (<ref>) in the Lévy framework. In fact, by the spatial homogeneity of X, similar to (<ref>) and (<ref>), we have

e^-(s+δ)εC^(q,s)(0;-a,ε)/(1-e^-δεB^(q,δ)(0;ε-a,ε))≤𝔼[e^-qτ_a-s(Y_τ_a-a)-δ M_τ_a]≤ e^sεC^(q,s)(0;ε-a,ε)/(1-e^-δεB^(q,δ)(0;-a,ε)),

for any ε∈(0,a). Suppose that the following condition holds:

lim_ε↓0 C^(q,s)(0;-a,ε)/(1-e^-δεB^(q,δ)(0;ε-a,ε))=lim_ε↓0 C^(q,s)(0;ε-a,ε)/(1-e^-δεB^(q,δ)(0;-a,ε)):=D_a^(q,s,δ).

Then,

𝔼[e^-qτ_a-s(Y_τ_a-a)-δ M_τ_a] =D_a^(q,s,δ).

Theorem <ref> shows that the joint law 𝔼_x[e^-qτ_a-s(Y_τ_a-a)1_{M_τ_a≤ K}] is a solution to Eq. (<ref>). Furthermore, the following theorem shows that Eq. (<ref>) admits a unique solution. Suppose that Assumptions (A1)-(A3) hold. For q,s≥0 and K>0, Eq. (<ref>) admits a unique solution.

From Theorem <ref>, we know that h(x):=𝔼_x[e^-qτ_a-s(Y_τ_a-a)1_{τ_a<∞, M_τ_a≤ K}] is a solution of (<ref>). We also notice that any continuous solution to (<ref>) must vanish when x↑ K. For any fixed L∈(-∞,K), we define a metric space (𝔸_L,d_L), where 𝔸_L={f∈ C[L,K], f(K)=0} and the metric d_L(f,g)=sup_x∈[L,K]|f(x)-g(x)| for f,g∈𝔸_L. We then define a mapping ℒ on 𝔸_L by

ℒf(x)=∫_x^Ke^-∫_x^yb_a,1^(q)(w)dw(c_a^(q,s)(y)+∫_[0,K-y)f(y+z)b_a,2^(q)(y,dz))dy, x∈[L,K],

where f∈𝔸_L. It is clear that ℒ(𝔸_L)⊂𝔸_L.

Next we show that ℒ:𝔸_L→𝔸_L is a contraction mapping. By the definitions of the two-sided exit quantities, for any y∈ℝ, it follows that

C^(q,s)(y;y-a,y+ε)+∫_ℝ_+B_2^(q)(y,dz;y-a,y+ε)≤1-B_1^(q)(y;y-a,y+ε).

Dividing each term in (<ref>) by ε∈(0,a) and letting ε↓0, it follows from Assumptions (A1)-(A3) that

0≤ c_a^(q,s)(y)+∫_ℝ_+b_a,2^(q)(y,dz)≤ b_a,1^(q)(y), y∈ℝ.

By (<ref>), we have for any f,g∈𝔸_L,

d_L(ℒf,ℒg)≤sup_t∈[L,K]|f(t)-g(t)| sup_x∈[L,K]∫_x^Ke^-∫_x^yb_a,1^(q)(w)dw∫_ℝ_+b_a,2^(q)(y,dz)dy
≤ d_L(f,g)sup_L≤ x≤ K∫_x^Ke^-∫_x^yb_a,1^(q)(w)dwb_a,1^(q)(y)dy
≤ d_L(f,g)(1-e^-∫_L^Kb_a,1^(q)(w)dw).

Since ∫_L^Kb_a,1^(q)(w)dw<∞ by Assumption (A1), one concludes that ℒ:𝔸_L→𝔸_L is a contraction mapping. By the Banach fixed point theorem, there exists a unique fixed point in 𝔸_L. By a restriction of domain, it is easy to see that 𝔸_L_1⊂𝔸_L_2 for -∞<L_1<L_2<K. By the arbitrariness of L, the uniqueness holds for the space ∩_L<K𝔸_L. This completes the proof.

For the remainder of this section, we state several examples of Markov processes satisfying Assumptions (A1)-(A3). Note that the joint drawdown laws for Examples <ref> and <ref> were obtained by Mijatovic and Pistorius <cit.> and Lehoczky <cit.>, respectively (using different approaches).
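Beyond establishing uniqueness, the contraction property above also suggests a simple numerical scheme: starting from any f∈𝔸_L, the Picard iterates ℒ^n f converge geometrically to the solution of Eq. (<ref>). The following is a minimal sketch of such a solver (our illustration only, not part of the original analysis), assuming the kernels b_a,1^(q), c_a^(q,s) and a density β(y,z) of b_a,2^(q)(y,dz) are supplied as vectorized Python callables computed from the two-sided exit quantities of the model at hand.

import numpy as np

def solve_joint_law(b1, c, beta, L, K, m=400, tol=1e-10, max_iter=500):
    """Picard iteration h <- Lh for the integral equation of the theorem.
    b1, c : vectorized kernels y -> b_{a,1}^{(q)}(y) and y -> c_a^{(q,s)}(y).
    beta  : vectorized density (y, z) -> b_{a,2}^{(q)}(y, dz)/dz (hypothetical
            stand-in; use beta = lambda y, z: np.zeros_like(y + z) for
            spectrally negative models, where the overshoot measure is 0).
    Returns the grid and h(x) ~ E_x[e^{-q tau_a - s(Y_{tau_a}-a)}; M_{tau_a} <= K]."""
    y = np.linspace(L, K, m)
    dy = y[1] - y[0]
    b1v, cv = b1(y), c(y)
    # I(y) = int_L^y b_{a,1}(w) dw, by the trapezoid rule
    I = np.concatenate(([0.0], np.cumsum(0.5 * (b1v[1:] + b1v[:-1]) * dy)))
    # Bz[i, j] ~ beta(y_i, y_j - y_i); the inner integral runs over u = y_j >= y_i
    Bz = np.triu(beta(y[:, None], np.maximum(y[None, :] - y[:, None], 0.0)))
    h = np.zeros(m)
    for _ in range(max_iter):
        P = Bz * h[None, :]
        g = cv + 0.5 * dy * (P[:, 1:] + P[:, :-1]).sum(axis=1)   # c + int h d(b_{a,2})
        f = np.exp(-I) * g
        seg = 0.5 * (f[1:] + f[:-1]) * dy
        tail = np.concatenate((np.cumsum(seg[::-1])[::-1], [0.0]))  # int_x^K e^{-I} g
        h_new = np.exp(I) * tail     # h(x) = int_x^K e^{-(I(y)-I(x))} g(y) dy
        if np.max(np.abs(h_new - h)) < tol:
            break
        h = h_new
    return y, h

In the spectrally negative case (β≡0) the iteration converges after a single step to the closed-form expression of Corollary <ref>, which provides a convenient sanity check of the discretization.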
Assumption verifications for Examples <ref> and <ref> are postponed to the Appendix.

[Spectrally negative Lévy processes] Consider a spectrally negative Lévy process X. Let ψ(s):=(1/t)log𝔼[e^sX_t] (s≥0) be the Laplace exponent of X. Further, let W^(q): ℝ→[0,∞) be the well-known q-scale function of X; see, for instance, Chapter 8 of Kyprianou <cit.>. The second scale function is defined as Z^(q)(x)=1+q∫_0^xW^(q)(y)dy. Under some mild conditions (e.g., Lemma 2.4 of Kuznetsov et al. <cit.>), the scale functions are continuously differentiable, which further implies that Assumptions (A1) and (A3) hold with

b_a,1^(q)(0)=W^(q)'(a)/W^(q)(a) and c_a^(q,s)(0)=e^sa(Z_s^(p)(a)W_s^(p)'(a)-Z_s^(p)'(a)W_s^(p)(a))/W_s^(p)(a),

where p=q-ψ(s), and W_s^(p) (Z_s^(p)) is the (second) scale function of X under a new probability measure ℙ^s defined by the Radon-Nikodym derivative process dℙ^s/dℙ|_ℱ_t=e^sX_t-ψ(s)t for t≥0. Therefore, by Corollary <ref> and (<ref>), we have

𝔼[e^-qτ_a-s(Y_τ_a-a)-δ M_τ_a] =(e^saW^(q)(a)/(δ W^(q)(a)+W^(q)'(a)))·(Z_s^(p)(a)W_s^(p)'(a)-pW_s^(p)(a)^2)/W_s^(p)(a),

which is consistent with Theorem 3.1 of Landriault et al. <cit.> and Theorem 1 of Mijatovic and Pistorius <cit.>.

[Refracted Lévy processes] Consider a refracted spectrally negative Lévy process X of the form

X_t=U_t-λ∫_0^t1_{X_s>b}ds,

where λ≥0, b>0, and U is a spectrally negative Lévy process (see Kyprianou and Loeffen <cit.>). Let W^(q) (Z^(q)) be the (second) q-scale function of U, and 𝕎^(q) be the q-scale function of the process {U_t-λ t}_t≥0. Similar to Example <ref>, all the scale functions are continuously differentiable under mild conditions.

For simplicity, we only consider the quantity 𝔼_x[e^-qτ_a1_{τ_a<∞, M_τ_a≤ K}] with b>x-a (otherwise the problem reduces to Example <ref> for X_t=U_t-λ t). By Theorem 4 of Kyprianou and Loeffen <cit.>, one can verify that Assumptions (A1) and (A3) hold. For b>x, from (<ref>) with s=0, we have

b_a,1^(q)(x)=W^(q)'(a)/W^(q)(a) and c_a^(q,0)(x)=(Z^(q)(a)W^(q)'(a)-Z^(q)'(a)W^(q)(a))/W^(q)(a).

For x>b>x-a,

b_a,1^(q)(x)=((1+λ𝕎^(q)(0))W^(q)'(a)+λ∫_b-x+a^a𝕎^(q)'(a-y)W^(q)'(y)dy)/(W^(q)(a)+λ∫_b-x+a^a𝕎^(q)(a-y)W^(q)'(y)dy)

and

c_a^(q,0)(x)=k_a^(q)(x)/(W^(q)(a)+λ∫_b-x+a^a𝕎^(q)(a-y)W^(q)'(y)dy),

where

k_a^(q)(x) =(1+λ𝕎^(q)(0))(Z^(q)(a)W^(q)'(a)-qW^(q)(a)^2) +λ q(1+λ𝕎^(q)(0))∫_b-x+a^a𝕎^(q)(a-y)(W^(q)'(a)W^(q)(y)-W^(q)(a)W^(q)'(y))dy -λ q[W^(q)(a)+λ∫_b-x+a^a𝕎^(q)(a-y)W^(q)'(y)dy]∫_b-x+a^a𝕎^(q)'(a-y)W^(q)(y)dy +λ[Z^(q)(a)+λ q∫_b-x+a^a𝕎^(q)(a-y)W^(q)(y)dy]∫_b-x+a^a𝕎^(q)'(a-y)W^(q)'(y)dy.

By Corollary <ref>, we obtain

𝔼_x[e^-qτ_a1_{M_τ_a≤ K}] =∫_x^Ke^-∫_x^yb_a,1^(q)(w)dwc_a^(q,0)(y)dy, x≤ K,

which is a new result for the refracted Lévy process (<ref>).

[Linear diffusion processes] Consider a linear diffusion process X of the form

dX_t=μ(X_t)dt+σ(X_t)dW_t,

where (W_t)_t≥0 is a standard Brownian motion, and the drift term μ(·) and local volatility σ(·)>0 satisfy the usual Lipschitz continuity and linear growth conditions. As a special case of the jump diffusion process of Example <ref>, it will be shown later that Assumptions (A1) and (A3) hold for linear diffusion processes. By Corollary <ref>, we obtain

𝔼_x[e^-qτ_a1_{τ_a<∞, M_τ_a≤ K}] =∫_x^Ke^-∫_x^yb_a,1^(q)(w)dwc_a^(q,0)(y)dy, x≤ K,

which is consistent with Eq. (4) of Lehoczky <cit.>.

[Piecewise exponential Markov processes] Consider a piecewise exponential Markov process (PEMP) X of the form

dX_t=μ X_tdt+dZ_t,

where μ>0 is the drift coefficient and Z=(Z_t)_t≥0 is a compound Poisson process given by Z_t=∑_i=1^N_tJ_i.
Here, (N_t)_t≥0 is a Poisson process with intensity λ>0 and the J_i's are iid copies of a real-valued random variable J with cumulative distribution function F. We also assume the initial value X_0≥ a, which ensures that X_t≥0 for all t<τ_a. In this case, as discussed in Remark <ref>, X is upward regular and creeps upward before τ_a. The first passage times of X have been extensively studied in applied probability; see, e.g., Tsurui and Osaki <cit.> and Kella and Stadje <cit.>. For the PEMP (<ref>), semi-explicit expressions for the two-sided exit quantities B_1^(q)(·), B_2^(q)(·,·) and C^(q,s)(·) are given in Section 6 of Jacobsen and Jensen <cit.>. As will be shown in Section <ref>, Assumptions (A1)-(A3) and Theorem <ref> hold for the PEMP X with a continuous jump size distribution F.

[Jump diffusion] Consider a jump diffusion process X of the form

dX_t=μ(X_t)dt+σ(X_t)dW_t+∫_-∞^∞γ(X_t-,z)N(dt,dz),

where μ(·) and σ(·)>0 are functions on ℝ, (W_t)_t≥0 is a standard Brownian motion, γ(·,·) is a real-valued function on ℝ^2 modeling the jump size, and N(dt,dz) is an independent Poisson random measure on ℝ_+×ℝ with a finite intensity measure dt×ν(dz). For specific μ(·) and σ(·), the jump diffusion (<ref>) can be used to model the surplus process of an insurer with investment in risky assets; see, e.g., Gjessing and Paulsen <cit.> and Yuen et al. <cit.>. We assume the same conditions as Theorem 1.19 of Øksendal and Sulem-Bialobroda <cit.> so that (<ref>) admits a unique càdlàg adapted solution. Under this setup, we show in Section <ref> that Assumptions (A1)-(A3) and thus Theorem <ref> hold for the jump diffusion (<ref>).

§ NUMERICAL EXAMPLES

The main results of Section 3 rely on the analytic tractability of the two-sided exit quantities. To further illustrate their applicability, we now consider the numerical evaluation of the joint law of (Y_τ_a,M_τ_a) for two particular spatially inhomogeneous Markov processes with (positive) jumps through Theorem <ref>. For simplicity, we assume that the discount rate q=0 throughout this section.

§.§ PEMP

In this section, we consider the PEMP X in Example <ref> with μ=1, λ=3, and the generic jump size J with density

p(x) = (1/3)e^-x for x>0, and p(x) = (1/3)(e^x+2e^2x) for x<0.

We follow Section 6 of Jacobsen and Jensen <cit.> to first solve for the two-sided exit quantities. Define the integral kernel

ψ_0(z):=1/(z(z+1)(z-1)(z-2)), z∈ℂ,

and the linearly independent functions

g_1(x):=(1/(2π√(-1)))∫_Γ_1ψ_0(z)e^-xzdz=(1/6)e^-2x,
g_2(x):=(1/(2π√(-1)))∫_Γ_2ψ_0(z)e^-xzdz=-(1/2)e^-x,
g_3(x):=(1/(2π√(-1)))∫_Γ_3ψ_0(z)e^-xzdz=1/2,
g_4(x):=(1/(2π√(-1)))∫_Γ_4ψ_0(z)e^-xzdz=-(1/6)e^x,

for x>0, where Γ_i (i=1,2,3,4) is a small counterclockwise circle centered at the pole μ_i=3-i of ψ_0(z). Moreover, for 0<u<v, we consider the matrix-valued function (M_i,k(u,v))_1≤ i,k≤4 with rows

[ -(1/3)e^-2u(u+11/6),  (1/6)e^-2u,  (1/18)e^-2v,  g_1(v) ],
[ e^-u,  (1/2)e^-u(u+1/2),  -(1/4)e^-v,  g_2(v) ],
[ -1/2,  -1/2,  1/2,  g_3(v) ],
[ (1/9)e^u,  (1/12)e^u,  (1/6)e^v(v-11/6),  g_4(v) ],

where the matrix entries are chosen according to

M_i,k(u,v)=(μ_k/(2π√(-1)))∫_Γ_i(ψ_0(z)/(z-μ_k))e^-uzdz for 1≤ i≤4 and k=1,2,
M_i,3(u,v)=(|μ_4|/(2π√(-1)))∫_Γ_i(ψ_0(z)/(z-μ_4))e^-vzdz for 1≤ i≤4.

Let (N_k,j(u,v))_1≤ k,j≤4 be the inverse of (M_i,k(u,v))_1≤ i,k≤4. Combining Eq. (46) and a generalized Eq.
(48) of Jacobsen and Jensen <cit.> (with ζ=s≥0 and ρ≥0), we obtain the linear system of equations

(c_1,c_2,c_3,c_4)(M_i,k)=(-2C/(s+2), -C/(s+1), C̄/(ρ+1), f(v)),

where C and C̄ are constants specified later, and f(x) could stand for any of B_1^(0)(x;u,v), B_2^(0,ρ)(x;u,v), or C^(0,s)(x;u,v) and has the representation

f(x)=∑_i=1^4c_ig_i(x), x∈[u,v].

To solve for B_1^(0)(x;u,v), B_2^(0,ρ)(x;u,v), or C^(0,s)(x;u,v), we only need to solve (<ref>) with different assigned values of C, C̄, and f(v) according to Eq. (45) of Jacobsen and Jensen <cit.>. By letting C=C̄=0 and f(v)=1, we obtain

B_1^(0)(x;u,v)=∑_i=1^4N_4,i(u,v)g_i(x).

Similarly, by letting C=f(v)=0 and C̄=1, for ρ≥0, we obtain

B_2^(0,ρ)(x;u,v)=(1/(1+ρ))∑_i=1^4N_3,i(u,v)g_i(x).

A Laplace inversion with respect to ρ yields, for z>0,

B_2^(0)(x,dz;u,v)=e^-z∑_i=1^4N_3,i(u,v)g_i(x)dz.

By letting C=1 and C̄=f(v)=0, for s≥0, we obtain

C^(0,s)(x;u,v)=∑_i=1^4((-2/(s+2))N_1,i(u,v)+(-1/(s+1))N_2,i(u,v))g_i(x).

By the definitions, we have

b_a,1^(0)(x) =-∑_i=1^4D_4,i(x-a,x)g_i(x),
b_a,2^(0)(x,dz) =e^-z(∑_i=1^4D_3,i(x-a,x)g_i(x))dz,
c_a^(0,s)(x) =∑_i=1^4((-2/(s+2))D_1,i(x-a,x)+(-1/(s+1))D_2,i(x-a,x))g_i(x),

where we denote D_k,j(u,v):=∂/∂ v N_k,j(u,v).

In Figure <ref> below, we numerically solve the integral equation (<ref>).

§.§ A jump diffusion model

In this section, we consider a generalized PEMP (X_t)_t≥0 with diffusion whose dynamics are governed by

dX_t=X_tdt+√(2)dW_t+dZ_t, t>0,

where the initial value X_0=x∈ℝ, (W_t)_t≥0 is a standard Brownian motion, and (Z_t)_t≥0 is an independent compound Poisson process with a unit jump intensity and a unit-mean exponential jump distribution. The two-sided exit quantities of this generalized PEMP can also be solved using the approach described in Sections 6 and 7 of Jacobsen and Jensen <cit.>.

We define an integral kernel

ψ_1(z)=e^{z^2/2}/(z(z+1)), z∈ℂ.

Let Γ_i (i=1,2) be small counterclockwise circles around the simple poles μ_1=0 and μ_2=-1, respectively, and define the linearly independent functions

g_1(x) :=(1/(2π√(-1)))∫_Γ_1ψ_1(z)e^-xzdz=1,
g_2(x) :=(1/(2π√(-1)))∫_Γ_2ψ_1(z)e^-xzdz=-e^{x+1/2},

for x∈ℝ. To find another linearly independent partial eigenfunction, we consider the vertical line Γ_3={1+t√(-1), t∈ℝ} and define

g_3(x):=(1/(2π√(-1)))∫_Γ_3ψ_1(z)e^-xzdz.

Next we derive an explicit expression for g_3(x). We know from (<ref>) that lim_x→∞g_3(x)=0 and g_3 is continuously differentiable with

g_3^'(x)=-(1/(2π√(-1)))∫_Γ_3(e^{z^2/2}/(z+1))e^-xzdz.

Notice that the bilateral Laplace transforms (e.g., Chapter VI of <cit.>) of a standard normal random variable U_1 and an independent unit-mean exponential random variable U_2 are given respectively by

∫_-∞^∞e^-zy·(1/√(2π))e^{-y^2/2}dy=e^{z^2/2}, ∫_0^∞e^-zy·e^-ydy=1/(z+1),

for all complex z such that ℜ(z)≥0. Hence, the bilateral Laplace transform of the density function of U_1+U_2, i.e.,

∫_0^∞(1/√(2π))e^{-(x-y)^2/2}e^-ydy,

is given by e^{z^2/2}/(z+1) for all complex z such that ℜ(z)≥0. Since the right-hand side of (<ref>) is just the Bromwich integral for the inversion of the bilateral Laplace transform -e^{z^2/2}/(z+1), evaluated at -x, we deduce that

g_3^'(x)=-∫_0^∞(1/√(2π))e^{-(x+y)^2/2}e^-ydy.

It follows that

g_3(x)=-∫_x^∞g_3^'(y)dy=1-∫_0^∞N(x+y)e^-ydy,

where N(·) is the cumulative distribution function of the standard normal distribution.

For any fixed -∞<u<v<∞, we define a matrix-valued function (M_i,k(u,v))_1≤ i,k≤3 with rows

[ 1,  g_1(v),  g_1(u) ],
[ ve^{v+1/2},  g_2(v),  g_2(u) ],
[ 1-∫_0^∞N(v+y)ye^-ydy,  g_3(v),  g_3(u) ],

where the first column is computed according to

M_i,1(u,v)=(1/(2π√(-1)))∫_Γ_i(ψ_1(z)/(z+1))e^-vzdz.

Notice that M_3,1(u,v) can be calculated in the same way as g_3(x).
We also denote by (N_k,j(u,v))_1≤ k,j≤3 the inverse of (M_i,k(u,v))_1≤ i,k≤3.

By Eq. (46) and a generalized Eq. (48) of Jacobsen and Jensen <cit.> (with ζ=s=0 and ρ≥0), we obtain the linear system of equations

(c_1,c_2,c_3)(M_i,k)=(C/(ρ+1), f(v), f(u)),

where C is a constant specified later, and f(x) could stand for any of B_1^(0)(x;u,v), B_2^(0,ρ)(x;u,v), or C^(0,0)(x;u,v) and has the representation

f(x)=∑_i=1^3c_ig_i(x), x∈[u,v].

By letting (1) C=f(u)=0 and f(v)=1, (2) C=1 and f(v)=f(u)=0, (3) C=f(v)=0 and f(u)=1, for any ρ≥0 and z>0, and solving the linear system (<ref>), we respectively obtain

B_1^(0)(x;u,v) =∑_i=1^3N_2,i(u,v)g_i(x),
B_2^(0,ρ)(x;u,v) =(1/(1+ρ))∑_i=1^3N_1,i(u,v)g_i(x), B_2^(0)(x,dz;u,v)=e^-z∑_i=1^3N_1,i(u,v)g_i(x)dz,
C^(0,0)(x;u,v) =∑_i=1^3N_3,i(u,v)g_i(x).

Furthermore, this implies

b_a,1^(0)(x) =-∑_i=1^3D_2,i(x-a,x)g_i(x),
b_a,2^(0)(x,dz) =e^-z(∑_i=1^3D_1,i(x-a,x)g_i(x))dz,
c_a^(0,0)(x) =∑_i=1^3D_3,i(x-a,x)g_i(x),

where we denote D_k,j(u,v)=∂/∂ v N_k,j(u,v).

In Figure 2 below, we plot h(x)=ℙ_x{M_τ_a≤ K} by numerically solving the integral equation (<ref>).

§ ACKNOWLEDGMENTS

The authors would like to thank two anonymous referees for their helpful comments and suggestions. Support from grants from the Natural Sciences and Engineering Research Council of Canada is gratefully acknowledged by David Landriault and Bin Li (grant numbers 341316 and 05828, respectively). Support from a start-up grant from the University of Waterloo is gratefully acknowledged by Bin Li, as is support from the Canada Research Chair Program by David Landriault.

§ APPENDIX

§.§ Proof of Lemma <ref>

We define ψ̃_n(z)=inf_m≥ nϕ_m(z) for z∈ S. Further, we define ψ_n(z)=lim inf_w→ zψ̃_n(w), which is lower semi-continuous (see, e.g., Lemma 5.13.4 of Berberian <cit.>). Note that ψ_n is increasing in n, and by the definition of ψ_n, we have

lim_n→∞ψ_n(z)= lim_n→∞lim_r↓0inf_w∈(z-r,z+r)inf_m≥ nϕ_m(w)= lim_n→∞lim_r↓0inf_m≥ n, w∈(z-r,z+r)ϕ_m(w)≡lim inf_n→∞, w→ zϕ_n(w),

where the second equality holds because there is no ambiguity in switching the order of the two infima.

By the monotone convergence theorem, we have

∫_Slim inf_n→∞, w→ zϕ_n(w)dμ(z)=lim_n→∞∫_Sψ_n(z)dμ(z).

By the Portmanteau theorem of weak convergence and the fact that ψ_n(z) is nonnegative and lower semi-continuous, it follows that

∫_Sψ_n(z)dμ(z)≤lim inf_m→∞∫_Sψ_n(z)dμ_m(z)

for any n∈ℕ. Moreover, since ψ_n(z) is monotone increasing in n, we have

lim inf_m→∞∫_Sψ_n(z)dμ_m(z)≤lim inf_m→∞∫_Sψ_m(z)dμ_m(z).

By (<ref>)-(<ref>),

∫_Slim inf_n→∞, w→ zϕ_n(w)dμ(z)≤lim inf_m→∞∫_Sψ_m(z)dμ_m(z)≤lim inf_m→∞∫_Sϕ_m(z)dμ_m(z),

where the last inequality is due to ψ_m(z)≤ψ̃_m(z)≤ϕ_m(z).

Suppose that {ϕ_n}_n∈ℕ is uniformly bounded by K>0. By applying (<ref>) to {K-ϕ_n}_n∈ℕ, we obtain

Kμ(S)-∫_Slim sup_n→∞, w→ zϕ_n(w)dμ(z)=∫_Slim inf_n→∞, w→ z(K-ϕ_n(w))dμ(z) ≤lim inf_n→∞∫_S(K-ϕ_n(z))dμ_n(z)=Klim inf_n→∞μ_n(S)-lim sup_n→∞∫_Sϕ_n(z)dμ_n(z).

Therefore, inequality (<ref>) follows immediately by the weak convergence of μ_n and μ(S)<∞.

§.§ Assumption verification for Example <ref>

Consider the PEMP (<ref>) with a continuous jump size distribution F(·). For q,s≥0 and 0<u_0<x_0<v_0, we have

lim_(u,v)↓(u_0,v_0)g(x_0;u,v)=lim_(x,u)↑(x_0,u_0)g(x;u,v_0)=g(x_0;u_0,v_0),

where the function g(x;u,v) is any of the following three functions: B_1^(q)(x;u,v), B_2^(q,s)(x;u,v) and C^(q,s)(x;u,v).

Note that the condition 0<u_0<x_0<v_0 is to ensure the process X remains positive before exiting these finite intervals, which further implies X is upward regular and creeps upward.
We limit our proof to

lim_(u,v)↓(u_0,v_0)B_1^(q)(x_0;u,v)=B_1^(q)(x_0;u_0,v_0).

The other results can be proved in a similar manner. By the relationship v>v_0>u>u_0, we have

|B_1^(q)(x_0;u_0,v_0)-B_1^(q)(x_0;u,v)| ≤|𝔼_x_0[e^-qT_v_0^+1_{T_v_0^+<T_u_0^-, X_T_v_0^+=v_0}]-𝔼_x_0[e^-qT_v^+1_{T_v^+<T_u^-, X_T_v^+=v, X_T_v_0^+=v_0}]| +ℙ_x_0{v_0<X_T_v_0^+≤ v}.

It is clear that the last term of (<ref>) vanishes as v↓ v_0 by the right-continuity of the distribution function of X_T_v_0^+. Also,

|𝔼_x_0[e^-qT_v_0^+1_{T_v_0^+<T_u_0^-, X_T_v_0^+=v_0}]-𝔼_x_0[e^-qT_v^+1_{T_v^+<T_u^-, X_T_v^+=v, X_T_v_0^+=v_0}]| =𝔼_x_0[e^-qT_v_0^+1_{T_v_0^+<T_u^-, X_T_v_0^+=v_0}]-𝔼_x_0[e^-qT_v^+1_{T_v^+<T_u^-, X_T_v^+=v, X_T_v_0^+=v_0}] +𝔼_x_0[e^-qT_v_0^+1_{T_u^-<T_v_0^+<T_u_0^-, X_T_v_0^+=v_0}] ≤1-𝔼_v_0[e^-qT_v^+1_{T_v^+<T_u^-, X_T_v^+=v}]+ℙ_x_0{T_u^-<T_v_0^+<T_u_0^-}.

Let ζ be the time of the first jump of the compound Poisson process Z with jump rate λ>0. Note that X will increase continuously up to time ζ as long as the initial value is positive. Since v>v_0>0, we have

1-𝔼_v_0[e^-qT_v^+1_{T_v^+<T_u^-, X_T_v^+=v}]≤1-𝔼_v_0[e^-qT_v^+1_{ζ>T_v^+}]=1-(v/v_0)^-(q+λ)/μ.

By conditioning on X_T_u^--, one obtains

ℙ_x_0{T_u^-<T_v_0^+<T_u_0^-} ≤∫_u^v_0ℙ_x_0{X_T_u^--∈dy}ℙ{y-u<J≤ y-u_0}≤max_u_0≤ y≤ v_0(F(y-u_0)-F(y-u)).

Since F(·) is continuous, and hence uniformly continuous for y∈[0,v_0-u_0], it follows that the right-hand side of (<ref>) vanishes as u↓ u_0. From (<ref>)–(<ref>), we conclude that (<ref>) holds.

Note that although (<ref>) only uses the continuity of F on [0,∞), the proof for C^(q,s)(x;u,v) will use the continuity of F on (-∞,0].

Assumptions (A1)-(A3) hold for the piecewise exponential Markov process (<ref>) with a continuous jump size distribution F(·) and initial value X_0≥ a.

For 0<u<x<v, by the strong Markov property, we have

B_1^(q)(x;u,v) =𝔼_x[e^-qT_v^+1_{T_v^+<T_u^-, X_T_v^+=v, ζ>T_v^+}]+𝔼_x[e^-qT_v^+1_{T_v^+<T_u^-, X_T_v^+=v, ζ<T_v^+}] =(v/x)^-(q+λ)/μ+λ∫_0^(1/μ)ln(v/x)e^-(q+λ)tdt∫_u-xe^μ t^v-xe^μ tB_1^(q)(xe^μ t+w;u,v)F(dw).

By Lemma <ref>, Eq. (<ref>), and the dominated convergence theorem, it is straightforward to verify that Assumption (A1) holds and, for x>a,

b_a,1^(q)(x)=(q+λ)/(μ x)-(λ/(μ x))∫_-a^0B_1^(q)(x+w;x-a,x)F(dw).

Note that we require x>a as otherwise x+w in the above equation could be negative for w∈(-a,0), and then Lemma <ref> does not apply. Obviously, ∫_x^yb_a,1^(q)(w)dw<∞ for all 0<x<y<∞. Similarly, by conditioning on the first jump of Z, for 0<u<x<v,

B_2^(q)(x,dz;u,v) =λ∫_0^(1/μ)ln(v/x)e^-(q+λ)tF(v-xe^μ t+dz)dt +λ∫_0^(1/μ)ln(v/x)e^-(q+λ)tdt∫_u-xe^μ t^v-xe^μ tB_2^(q)(xe^μ t+w,dz;u,v)F(dw),

and

C^(q,s)(x;u,v)=λ∫_0^(1/μ)ln(v/x)e^-(q+λ)tdt∫_-∞^v-xe^μ tC^(q,s)(xe^μ t+w;u,v)F(dw),

where it is understood that C^(q,s)(xe^μ t+w;u,v)=e^s(xe^μ t+w-u) for w<u-xe^μ t. One can verify from Lemma <ref> and the dominated convergence theorem that Assumptions (A2) and (A3) hold, and for x>a,

b_a,2^(q)(x,dz)=(λ/(μ x))F(dz)+(λ/(μ x))∫_-a^0B_2^(q)(x+w,dz;x-a,x)F(dw),

and

c_a^(q,s)(x)=(λ/(μ x))∫_-∞^0C^(q,s)(x+w;x-a,x)F(dw).

This ends the proof.

§.§ Assumption verification for Example <ref>

Let U be the continuous component of X, which is a linear diffusion process with the infinitesimal generator ℒ_U=(1/2)σ^2(y)d^2/dy^2+μ(y)d/dy. It is well-known that, for any q>0, there exist two independent and positive solutions, denoted as ϕ_q^±(y), to the Sturm-Liouville equation

ℒ_Uϕ_q^±(y)=qϕ_q^±(y),

where ϕ_q^+(·) is strictly increasing and ϕ_q^-(·) is strictly decreasing.
By the Lipschitz assumption on μ(·) and σ(·), it follows from the Schauder estimates (e.g., Theorem 6.14 of Gilbarg and Trudinger <cit.>) of Eq. (<ref>) that ϕ_q^±(·)∈ C^2,α(Ω̅) for any α∈(0,1] and any compact set Ω̅⊂ℝ. Interested readers can refer to Section 4.1 of Gilbarg and Trudinger <cit.> for more detail on the Hölder space C^2,α(Ω̅).

We denote the first hitting time of U to level z∈ℝ by H_z=inf{t>0: U_t=z}. It is well-known that, for u≤ x≤ v,

𝔼_x[e^-qH_u1_{H_u<H_v}]=f_q(x,v)/f_q(u,v) and 𝔼_x[e^-qH_v1_{H_v<H_u}]=f_q(u,x)/f_q(u,v),

where f_q(x,y):=ϕ_q^+(x)ϕ_q^-(y)-ϕ_q^+(y)ϕ_q^-(x). Note that f_q(x,y) is strictly increasing in x and strictly decreasing in y, with f_q(x,x)=0. In particular, for u≤ x≤ v, we have

𝔼_x[e^-qH_u]=ϕ_q^-(x)/ϕ_q^-(u) and 𝔼_x[e^-qH_v]=ϕ_q^+(x)/ϕ_q^+(v).

For 𝐞_q an independent exponential random variable with mean 1/q<∞, the q-potential measure of U is given by

r_q(x,y):=(1/q)ℙ_x{U_𝐞_q∈dy}/dy = (2/(qσ^2(y)))ϕ_q^+(x)ϕ_q^-(y)/f_q,1(y,y) for x≤ y, and (2/(qσ^2(y)))ϕ_q^+(y)ϕ_q^-(x)/f_q,1(y,y) for x>y,

where f_q,1(x,y):=∂/∂ x f_q(x,y). Furthermore, the q-potential measure of U killed on exiting the interval [u,v], for u≤ x,y≤ v, is given by

θ^(q)(x,y;u,v) :=(1/q)ℙ_x(U_𝐞_q∈dy, 𝐞_q<H_u∧ H_v)/dy =r_q(x,y)-(f_q(x,v)/f_q(u,v))r_q(u,y)-(f_q(u,x)/f_q(u,v))r_q(v,y).

The next lemma is an analogy of Lemma <ref>. Thanks to the diffusion term in the jump diffusion model (<ref>), we now allow for the presence of atoms in the jump intensity measure ν(·). Consider the jump diffusion model (<ref>). For q,s≥0 and u_0<x_0<v_0, we have

lim_(u,v)↓(u_0,v_0)g(x_0;u,v)=lim_(x,u)↑(x_0,u_0)g(x;u,v_0)=g(x_0;u_0,v_0),

where g(x;u,v) is any of the following functions: B_1^(q)(x;u,v), B_2^(q,s)(x;u,v) and C^(q,s)(x;u,v).

We can follow the same proof as Lemma <ref> except for the term ℙ_x_0{T_u^-<T_v_0^+<T_u_0^-} in (<ref>), which will be handled differently here. We have X_t=U_t a.s. for t<ζ, where ζ is the first time a jump occurs, which follows an exponential distribution with mean 1/λ=1/ν(ℝ)>0. For any u_0<u<x_0<v_0, by (<ref>) and (<ref>), we have

ℙ_x_0{T_u^-<T_v_0^+<T_u_0^-} ≤ℙ_u{T_v_0^+<T_u_0^-} =ℙ_u{T_v_0^+<T_u_0^-, ζ>T_v_0^+}+ℙ_u{ζ≤ T_v_0^+<T_u_0^-} ≤𝔼_u[e^-λ H_v_01_{H_v_0<H_u_0}]+1-𝔼_u[e^-λ H_u_0] =f_λ(u_0,u)/f_λ(u_0,v_0)+1-ϕ_λ^-(u)/ϕ_λ^-(u_0).

Therefore, it follows that lim_u↓ u_0ℙ_x_0{T_u^-<T_v_0^+<T_u_0^-}=0 by f_λ(u_0,u_0)=0.

Assumptions (A1)-(A3) hold for the jump diffusion model (<ref>).

By the strong Markov property, (<ref>) and (<ref>), for u<x<v, it follows that

B_1^(q)(x;u,v) =𝔼_x[e^-qT_v^+1_{T_v^+<T_u^-, T_v^+=v, ζ>T_v^+}]+𝔼_x[e^-qT_v^+1_{T_v^+<T_u^-, T_v^+=v, ζ<T_v^+}] =𝔼_x[e^-(q+λ)H_v1_{H_v<H_u}] +∫_u^v𝔼_x[e^-qζ1_{ζ<H_u∧ H_v, U_ζ∈dy}]∫_ℝB_1^(q)(y+γ(y,w);u,v)ν(dw)/λ =f_q+λ(u,x)/f_q+λ(u,v)+∫_u^vθ^(q+λ)(x,y;u,v)dy∫_ℝB_1^(q)(y+γ(y,w);u,v)ν(dw),

where it is understood that B_1^(q)(y+γ(y,w);u,v)=0 if γ(y,w)>v-y or γ(y,w)<u-y. By Lemma <ref>, the dominated convergence theorem, and the identity f_q+λ(u,v)=-f_q+λ(v,u), we can verify that Assumption (A1) holds with

b_a,1^(q)(x)=-f_q+λ,1(x-a,x)/f_q+λ(x-a,x) -∫_x-a^xθ̃_a^(q+λ)(x,y)dy∫_ℝB_1^(q)(y+γ(y,w);x-a,x)ν(dw),

where we write θ̃_a^(q+λ)(x,y):=-(f_q+λ,1(x-a,x)/f_q+λ(x-a,x))r_q+λ(x,y)-r_q+λ,1(x,y)+(f_q+λ,1(x,x)/f_q+λ(x-a,x))r_q+λ(x-a,y) and r_q+λ,1(x,y):=∂/∂ x r_q+λ(x,y).
The integrability of b_a,1^(q)(·) follows from the continuity of ϕ_q^+(·) and ϕ_q^-(·).

Similarly, by the strong Markov property of X, (<ref>) and (<ref>), we have

B_2^(q)(x,dz;u,v)=∫_u^vθ^(q+λ)(x,y;u,v)dy∫_ℝB_2^(q)(y+γ(y,w),dz;u,v)ν(dw),

and

C^(q,s)(x;u,v)=f_q+λ(x,v)/f_q+λ(u,v)+∫_u^vθ^(q+λ)(x,y;u,v)dy∫_ℝC^(q,s)(y+γ(y,w);u,v)ν(dw).

One can verify from Lemma <ref> that Assumptions (A2) and (A3) hold with

b_a,2^(q)(x,dz)=∫_x-a^xθ̃_a^(q+λ)(x,y)dy∫_ℝB_2^(q)(y+γ(y,w),dz;x-a,x)ν(dw),

and

c_a^(q,s)(x)=-f_q+λ,1(x,x)/f_q+λ(x-a,x)+∫_x-a^xθ̃_a^(q+λ)(x,y)dy∫_ℝC^(q,s)(y+γ(y,w);x-a,x)ν(dw).

This completes the proof.

[AIZ16] Albrecher, H.; Ivanovs, J.; Zhou, X. Exit identities for Lévy processes observed at Poisson arrival times. Bernoulli 22 (2016), no. 3, 1364–1382.
[AAP04] Asmussen, S.; Avram, F.; Pistorius, M. R. Russian and American put options under exponential phase-type Lévy models. Stochastic Process. Appl. 109 (2004), no. 1, 79–111.
[AKP04] Avram, F.; Kyprianou, A. E.; Pistorius, M. R. Exit problems for spectrally negative Lévy processes and applications to (Canadized) Russian options. Ann. Appl. Probab. 14 (2004), no. 1, 215–238.
[APP07] Avram, F.; Palmowski, Z.; Pistorius, M. R. On the optimal dividend problem for a spectrally negative Lévy process. Ann. Appl. Probab. 17 (2007), no. 1, 156–180.
[B09] Baurdoux, E. J. Some excursion calculations for reflected Lévy processes. ALEA Lat. Am. J. Probab. Math. Stat. 6 (2009), 149–162.
[B99] Berberian, S. K. Fundamentals of Real Analysis. Springer-Verlag, New York, 1999.
[CZH11] Carr, P.; Zhang, H.; Hadjiliadis, O. Maximum drawdown insurance. Int. J. Theor. Appl. Finance 14 (2011), no. 8, 1195–1230.
[CO13] Cherny, V.; Obloj, J. Portfolio optimisation under non-linear drawdown constraints in a semimartingale financial model. Finance Stoch. 17 (2013), no. 4, 771–800.
[DSY00] Douady, R.; Shiryaev, A. N.; Yor, M. On probability characteristics of "downfalls" in a standard Brownian motion. Theory Probab. Appl. 44 (2000), no. 1, 29–38.
[FKZ14] Feinberg, E. A.; Kasyanov, P. O.; Zadoianchuk, N. V. Fatou's lemma for weakly converging probabilities. Theory Probab. Appl. 58 (2014), no. 4, 683–689.
[GT01] Gilbarg, D.; Trudinger, N. S. Elliptic Partial Differential Equations of Second Order. Reprint of the 1998 edition. Springer-Verlag, Berlin, 2001.
[GP97] Gjessing, H. K.; Paulsen, J. Present value distributions with applications to ruin theory and stochastic equations. Stochastic Process. Appl. 71 (1997), no. 1, 123–144.
[GZ93] Grossman, S. J.; Zhou, Z. Optimal investment strategies for controlling drawdowns. Math. Finance 3 (1993), no. 3, 241–276.
[IP12] Ivanovs, J.; Palmowski, Z. Occupation densities in solving exit problems for Markov additive processes and their reflections. Stochastic Process. Appl. 122 (2012), no. 9, 3342–3360.
[JJ07] Jacobsen, M.; Jensen, A. T. Exit times for a class of piecewise exponential Markov processes with two-sided jumps. Stochastic Process. Appl. 117 (2007), no. 9, 1330–1356.
[K02] Kallenberg, O. Foundations of Modern Probability. Second edition. Probability and its Applications. Springer-Verlag, New York, 2002.
[KS01] Kella, O.; Stadje, W. On hitting times for compound Poisson dams with exponential jumps and linear release rate. J. Appl. Probab. 38 (2001), no. 3, 781–786.
[KKR12] Kuznetsov, A.; Kyprianou, A. E.; Rivero, V. The theory of scale functions for spectrally negative Lévy processes. Lévy Matters II, 97–186, Lecture Notes in Math., 2061, Springer, Heidelberg, 2012.
[K14] Kyprianou, A. E. Fluctuations of Lévy Processes with Applications. Second edition. Springer, Heidelberg, 2014.
[KL10] Kyprianou, A. E.; Loeffen, R. L. Refracted Lévy processes. Ann. Inst. Henri Poincaré Probab. Stat. 46 (2010), no. 1, 24–44.
[KP07] Kyprianou, A. E.; Palmowski, Z. Distributional study of de Finetti's dividend problem for a general Lévy insurance risk process. J. Appl. Probab. 44 (2007), no. 2, 428–443.
[KZ09] Kyprianou, A. E.; Zhou, X. General tax structures and the Lévy insurance risk model. J. Appl. Probab. 46 (2009), no. 4, 1146–1156.
[LLZ16] Landriault, D.; Li, B.; Zhang, H. On magnitude, asymptotics and duration of drawdowns for Lévy models. Bernoulli (2016), forthcoming.
[L77] Lehoczky, J. P. Formulas for stopped diffusion processes with stopping times based on the maximum. Ann. Probability 5 (1977), no. 4, 601–607.
[LTZ13] Li, B.; Tang, Q.; Zhou, X. A time-homogeneous diffusion model with tax. J. Appl. Probab. 50 (2013), no. 1, 195–207.
[L08] Loeffen, R. L. On optimality of the barrier strategy in de Finetti's dividend problem for spectrally negative Lévy processes. Ann. Appl. Probab. 18 (2008), no. 5, 1669–1680.
[MAPA04] Magdon-Ismail, M.; Atiya, A. F.; Pratap, A.; Abu-Mostafa, Y. On the maximum drawdown of a Brownian motion. J. Appl. Probab. 41 (2004), no. 1, 147–161.
[MP12] Mijatovic, A.; Pistorius, M. R. On the drawdown of completely asymmetric Lévy processes. Stochastic Process. Appl. 122 (2012), no. 11, 3812–3836.
[OS07] Øksendal, B.; Sulem, A. Applied Stochastic Control of Jump Diffusions. Second edition. Universitext. Springer, Berlin, 2007.
[P04] Pistorius, M. R. On exit and ergodicity of the spectrally one-sided Lévy process reflected at its infimum. J. Theoret. Probab. 17 (2004), no. 1, 183–220.
[PH09] Poor, H. V.; Hadjiliadis, O. Quickest Detection. Cambridge University Press, Cambridge, 2009.
[PVH09] Pospisil, L.; Vecer, J.; Hadjiliadis, O. Formulas for stopped diffusion processes with stopping times based on drawdowns and drawups. Stochastic Process. Appl. 119 (2009), no. 8, 2563–2578.
[RW00] Rogers, L. C. G.; Williams, D. Diffusions, Markov Processes, and Martingales: Volume 1, Foundations. Second edition. Cambridge University Press, Cambridge, 2000.
[SE11] Schuhmacher, F.; Eling, M. Sufficient conditions for expected utility to imply drawdown-based performance rankings. Journal of Banking & Finance 35 (2011), 2311–2318.
[SS93] Shepp, L.; Shiryaev, A. N. The Russian option: reduced regret. Ann. Appl. Probab. 3 (1993), no. 3, 631–640.
[T75] Taylor, H. M. A stopped Brownian motion formula. Ann. Probab. 3 (1975), 234–246.
[TO76] Tsurui, A.; Osaki, S. On a first-passage problem for a cumulative process with exponential decay. Stochastic Processes Appl. 4 (1976), no. 1, 79–88.
[W46] Widder, D. V. The Laplace Transform. Princeton University Press, 1946.
[YWN04] Yuen, K. C.; Wang, G.; Ng, K. W. Ruin probabilities for a risk process with stochastic return on investments. Stochastic Process. Appl. 110 (2004), no. 2, 259–274.
[Z15] Zhang, H. Occupation time, drawdowns, and drawups for one-dimensional regular diffusion. Adv. in Appl. Probab. 47 (2015), no. 1, 210–230.
[ZH10] Zhang, H.; Hadjiliadis, O. Drawdowns and rallies in a finite time-horizon. Methodol. Comput. Appl. Probab. 12 (2010), no. 2, 293–308.
[ZLH13] Zhang, H.; Leung, T.; Hadjiliadis, O. Stochastic modeling and fair valuation of drawdown insurance. Insurance Math. Econom. 53 (2013), no. 3, 840–850.
[Z07] Zhou, X. Exit problems for spectrally negative Lévy processes reflected at either the supremum or the infimum. J. Appl. Probab. 44 (2007), no. 4, 1012–1030.
{ "authors": [ "David Landriault", "Bin Li", "Hongzhong Zhang" ], "categories": [ "q-fin.MF", "60G07, 60G40" ], "primary_category": "q-fin.MF", "published": "20170224222619", "title": "A Unified Approach for Drawdown (Drawup) of Time-Homogeneous Markov Processes" }
dote@post.kek.jp
J-PARC Branch, KEK Theory Center, IPNS, KEK, 203-1, Shirakata, Tokai, Ibaraki, 319-1106, Japan
KEK Theory Center, Institute of Particle and Nuclear Studies (IPNS), High Energy Accelerator Research Organization (KEK), 1-1 Oho, Tsukuba, Ibaraki, 305-0801, Japan
Nihon University, College of Bioresource Sciences, Fujisawa 252-0880, Japan
General Education, Faculty of Engineering, Osaka Institute of Technology, Osaka 535-8585, Japan

We have developed a fully coupled-channel complex scaling method (ccCSM) for the study of the most essential kaonic nucleus, “K^-pp," which, from a theoretical viewpoint, is a resonant state of a K̅NN-πΣ N-πΛ N coupled-channel system. By employing the ccCSM and imposing the correct boundary condition of a resonance, the coupled-channel problem is completely solved using a phenomenological energy-independent potential. As a result of the ccCSM calculation of “K^-pp," in which all three channels are treated explicitly, we have obtained a three-body resonance as a Gamow state. The resonance pole indicates that the binding energy of “K^-pp" and the half value of its mesonic decay width are 51 MeV and 16 MeV, respectively. In the analysis of the resonant wave function obtained using the ccCSM, we clarify the spatial configuration and channel compositions of “K^-pp." Compared with past studies of single-channel calculations based on effective K̅N potentials, the current study provides a guideline for the determination of the K̅N energy to be used in effective potentials.

24.10.Eq, 14.20.Gk, 31.15.ac, 13.75.Jz, 21.85.+d

Fully coupled-channel complex scaling method for the K^-pp system
Takayuki Myo
December 30, 2023
=================================================================

Introduction: Kaonic nuclei (nuclear systems with antikaons) are one of the important topics in hadron and strange nuclear physics because they exhibit several interesting properties that have never been observed in ordinary nuclei. The K̅N potential is considerably attractive, particularly in the isospin I=0 channel. It forms a quasi-bound state that corresponds to an excited hyperon, Λ(1405) <cit.>. Early studies based on phenomenological K̅N potentials showed that light kaonic nuclei shrink significantly to form dense states, and a few of them exhibit interesting structures because of the strong K̅N attraction <cit.>. Therefore, kaonic nuclei are expected to be a doorway to dense matter, in which the partial restoration of chiral symmetry may occur <cit.>. To describe the nature of kaonic nuclei in detail, considerable effort has been made toward the study of the most essential kaonic nucleus, K^-pp, in both theoretical and experimental aspects.

In the theoretical aspect, as K^-pp is a three-body system, it has been investigated using various approaches. Typically, the variational approach or the Faddeev-AGS approach has been applied using phenomenological K̅N potentials or chiral SU(3)-based K̅N potentials. As stated in Ref. <cit.>, all theoretical studies show that K^-pp can be bound and its binding energy should be less than 100 MeV. However, the binding energy of K^-pp depends strongly on the type of K̅N potential used in the calculation: When energy-independent phenomenological K̅N potentials are employed, K^-pp is a relatively deeply bound state <cit.>. On the contrary, it is a shallowly bound state when energy-dependent chiral SU(3)-based potentials are employed <cit.>.
In the experimental aspect, interesting results have been reported through the observation of a few signals, even though the signals have not yet been established as the K^-pp bound state <cit.>. It should be noted that the J-PARC E15 group is now finalizing the analysis of the data acquired in their second run: an exclusive measurement of the ^3He(K^-, Λ p)n_missing reaction with high statistics <cit.>. It is expected that the new result will provide a conclusive answer for the existence of the K^-pp bound state through experimental observation.

We have investigated K^-pp using a coupled-channel complex scaling method (ccCSM). According to the abovementioned theoretical studies, K^-pp should exist as a resonant state, and not purely as a bound state, between the πΣ N and K̅NN thresholds because its binding energy is less than 100 MeV. In addition, K^-pp is expected to be a coupled-channel system of the K̅NN, πΣ N, and πΛ N channels. This is similar to Λ(1405), which is reasonably understood as a K̅N-πΣ coupled-channel system. Therefore, we consider that the treatments of the resonance and coupled-channel problems are the key factors in the theoretical study of K^-pp. The ccCSM can handle both factors simultaneously because it is based on the complex scaling method (CSM), which has been applied successfully in multiple studies of the resonances of ordinary nuclei, particularly those of unstable nuclei <cit.>. In a previous study, we studied K^-pp using a method based on the ccCSM, referred to as the ccCSM+Feshbach method <cit.>. In this method, the coupled-channel problem of K^-pp is reduced to a single-channel problem of K̅NN through the Feshbach projection, which is well realized in the CSM. Therefore, the advantage of the ccCSM+Feshbach method is that it reduces the computational cost. However, the dynamics of the eliminated channels are lost in the calculation, and it is impossible to obtain information about these channels from the solutions. In other words, the nature of K^-pp cannot be determined completely. To develop a complete understanding of K^-pp, we have attempted the fully ccCSM calculation, in which all channels are treated explicitly. In this method, we directly obtain a resonant wave function, so that the information about every channel can be investigated explicitly.

Formalism: In this article, similarly to most earlier works, we consider the K^-pp system as a K̅NN-πΣ N-πΛ N coupled-channel system with the following quantum numbers: total spin parity J^π=0^-, total isospin T=1/2, and T_z=1/2. Hereafter, such a K^-pp state is symbolically denoted as “K^-pp" (K^-pp with double quotation marks). For the fully coupled-channel calculation, the “K^-pp" wave function is expanded in terms of a basis as follows:

|“K^-pp"⟩=∑_ch=1^8 ∑_n=1^N C_n^(ch)F_n^(ch)(x_1, x_2)|S_B_1 B_2 (ch)=0 ⟩ | T=1/2, T_z=1/2; Isospin-Flavor_(ch)⟩.

Basically, the structure of the wave function is the same as that used in the ccCSM+Feshbach calculation, in which only the K̅NN channel was directly considered. (See Eqs. (11) and (12) in Ref. <cit.>.) In the current study, the πΣ N, πΛ N and the K̅NN channels are treated explicitly. The 8 channels that can couple to “K^-pp" are listed in Table <ref>. Channels ch=1 and 2 are the K̅NN channels, which are the same as those used in our previous study. The newly added channels, i.e., ch=3, 4, 5, 6 and ch=7, 8, are the πΣ N and πΛ N channels, respectively.
The coefficients, {C_n^(ch)}, are parameters to be determined by the diagonalization of the complex-scaled Hamiltonian matrix. Suffix n is for the basis functions that expand the spatial part of the “K^-pp" wave function, and it increases up to N in each channel. The spatial basis function, F_n^(ch)(x_1, x_2), is composed of a correlated Gaussian function, G_n^(ch)(x_1, x_2). It is projected onto a parity eigenstate for the baryon-baryon system in each channel as G_n^(ch)(x_1, x_2)±G_n^(ch)(-x_1, x_2). Here, Jacobi coordinates x_1 and x_2 are defined as x_1=r_B2-r_B1 and x_2=r_M-R_B1,B2. In other words, x_1 and x_2 correspond to the relative coordinate between two baryons (B_1 and B_2) and that between the meson (M) and center of mass of the baryons, respectively. For more details on the spatial part of the wave function, readers can refer to our previous paper <cit.>. The total spin of the baryons, S_B_1 B_2 (ch), is fixed at zero in all channels. The total isospin and its projection are assumed to be T=1/2 and T_z=1/2, respectively, and its structure is given in the last column of Table <ref>. Here, it should be noted that in the πΣ N and πΛ N channels, the isospin part of the wave function must be symmetrized or antisymmetrized for baryon labels (referred to as flavors) to satisfy the generalized Pauli principle. Hamiltonian Ĥ is composed of a mass term, M̂, a kinetic energy term, T̂,nucleon-nucleon (NN) potential, V̂_NN, and a meson-baryon potential, V̂^MB_αβ, for channels α and β, as follows: Ĥ = M̂+T̂ + V̂_NN + ∑_i=1,2∑_α,β=K̅N, πΣ, πΛV̂^MB_αβ (M,B_i).The kinetic energy term, T̂, is constructed for the Jacobi coordinates (x_1, x_2) in each channel, which is similar to our previous study <cit.>. The Argonne v18 potential <cit.> is employed as the NN potential. A phenomenological K̅N potential (referred to as the AY potential <cit.>) is employed as the K̅N potential in the meson-baryon potential. The AY potential is constructed for a K̅N-π Y coupled-channel space, and it is given in r-space local form with a Gaussian shape. Its parameters are constrained with low-energy K̅N scattering data and the pole position of Λ(1405). (Details are explained in Ref. <cit.>.)Note that the channel coupling of the AY potential is explicitly included in the Hamiltonian in Eq. (<ref>).In the present study, the YN and π N potentials are neglected because their contribution to the “K^-pp” energy is considered to be minor compared to that of the NN and K̅N potentials. We apply the CSM to obtain resonances directly using the wave function defined in Eq. (<ref>) <cit.>. In the CSM, all coordinates included in Hamiltonian Ĥ are complex-scaled as x_i →x_i e^i θ with a common scaling angle, θ. By diagonalizing complex-scaled Hamiltonian Ĥ^θ using the basis functions given in Eq. (<ref>), all eigenstates are obtained in discretized form. Among these states, the resonance states of “K^-pp" are associated with complex-energy eigenvalues, which are independent of scaling angle θ. For such complex-energy eigenvalues, each vector of complex coefficients {C_n^(ch)} represents the corresponding resonance state.We comment on the symmetry for the exchange of two baryons in “K^-pp." As shown in the last term of Eq. (<ref>), the meson-baryon potential is common for baryons B_1 and B_2 in all channels. In other words, the Hamiltonian is symmetric under the exchange of two baryons. On the other hand, the “K^-pp" wave function given as Eq. (<ref>) is antisymmetric for the baryon exchange in every channel, as explained above. 
Therefore, as mentioned in an early study on Faddeev calculations involving different kinds of baryons <cit.>, the wave function is antisymmetric under baryon exchange, whereas the Hamiltonian is symmetric.

Result and discussion: Fig. <ref> shows the distribution of the eigenvalues on the complex energy plane, obtained in the present ccCSM calculation by diagonalizing the complex-scaled Hamiltonian matrix. Here, the scaling angle θ is set to be 30 degrees. We consider 20 Gaussian basis functions for the individual Jacobi coordinates in each channel, and the total number of basis functions is 6400, in which all sets of Jacobi coordinates are considered. In the figure, the origin corresponds to the πΛ N threshold. Hereafter, to represent the complex energy of poles, we use “-B_K^-pp,” which denotes the real energy measured from the K̅NN threshold, and “-Γ_π YN/2,” which denotes the imaginary energy and corresponds to the half value of the mesonic decay width with a minus sign. In the CSM, the eigenvalues of continuum states appear along a 2θ line, whose slope angle (-2θ) depends on the scaling angle θ <cit.>. In Fig. <ref>, we clearly obtain the πΛ N, πΣ N, and K̅NN three-body continuum states along three 2θ lines starting from the individual energy thresholds. In addition, there is a group of eigenvalues along a line starting from a complex eigenvalue of (-B_K^-pp, -Γ_π YN/2)=(-28, -20) MeV (blue dashed line in Fig. <ref>). The complex energy at the starting point of this line (marked with a blue circle) coincides with the pole position of Λ(1405) calculated using the AY potential <cit.>. Therefore, this group of eigenvalues represents the Λ(1405)-N quasi two-body continuum state. Finally, we find an eigenvalue that lies apart from all the lines mentioned above. This eigenvalue corresponds to the three-body resonance pole of the “K^-pp" system. Based on this result, the binding energy and half decay width of the “K^-pp" resonant state are determined to be

B_K^-pp=51 MeV and Γ_π YN/2=16 MeV,

respectively, when the Argonne v18 NN potential and the AY K̅N potential are employed. We have confirmed that the pole positions of the “K^-pp" system and Λ(1405) are considerably stable when the scaling angle θ and the parameters of the spatial part of the basis functions { G_n^(ch)(x_1, x_2) } are changed. It is noted that the eigenstates shown in Fig. <ref> form a completeness relation consisting of the three-body resonance and the two- and three-body continuum states of each channel, including Λ(1405) <cit.>. This completeness relation is a unique characteristic of the ccCSM, and it is useful for applying the present wave function to spectrum calculations <cit.>.

Table <ref> shows the properties of the K^-pp resonant state, which are determined using the ccCSM wave function. The norm of each component is given in the left column. Although the quantities for resonant states are obtained as complex values in the CSM, the magnitude of each norm indicates that the K̅NN component is apparently dominant and that the πΣ N component is significantly involved in the K^-pp resonance. This is attributed to the nature of the AY potential, in which the K̅N-πΣ coupling potential is considerably strong, particularly in the I=0 channel <cit.>. A similar value of the πΣ component is observed in Λ(1405), which is an I=0 K̅N-πΣ coupled-channel system, for which N(K̅N)=1.118-i0.107 and N(πΣ)=-0.118+i0.107. The mean NN and K̅N distances in the K^-pp system are shown in the right column. The NN distance is calculated to be 1.86 fm.
In nuclear matter with normal density (ρ_0=0.16 fm^-3), the mean distance between two nucleons is estimated to be 2.2 fm. This implies that the NN distance in “K^-pp” decreases by 15% compared with that in normal nuclear matter. Therefore, the K^-pp system calculated using the AY potential could be a piece of dense matter <cit.>. (The density is estimated to be 1.7ρ_0 using the NN distance.) The K̅N mean distance for the I=0 component is apparently smaller than that for the I=1 component because the K̅N potential is considerably more attractive in the I=0 channel than it is in the I=1 channel. In addition, the I=0 K̅N distance in the K^-pp system is close to that in Λ(1405). (The K̅N distance in Λ(1405) is calculated to be 1.25 - i0.27 fm in the case of the AY potential.) Therefore, the Λ(1405) component is considered to survive in the K^-pp system, as suggested in earlier studies <cit.>. The binding energy and decay width shown in Eq. (<ref>) indicate the genuine pole position of the “K^-pp" resonance for the Hamiltonian (<ref>) with the AY potential because all channels are treated explicitly in the present ccCSM calculation. Therefore, it is interesting to compare the result of the present ccCSM calculation with those of earlier studies. In most previous studies <cit.> and in our study on the ccCSM+Feshbach method, three-body calculation is carried out in the scheme of the K̅NN channel using only effective K̅N potential, in which the dynamics of the π Y channels are incorporated indirectly. In these truncated calculations, the derived effective potential is energy dependent because of channel elimination, even though the original coupled-channel potential is energy independent. This energy dependence requires self-consistency for the K̅N energy while solving the Schrödinger equation. However, in principle, it is impossible to uniquely determine the K̅N energy in the K̅NN three-body system because subsystem energy is not an eigenvalue of the total Hamiltonian. To estimate the K̅N energy in the K̅NN system, two extreme ansatzes were proposed in a previous study, which were referred to as Field picture and Particle picture <cit.>. Using the binding energy, B_K, of an antikaon, the K̅N energy, E_KN, is estimated as E_KN=-B_K in Field picture and E_KN=-B_K/2 in Particle picture.(Details are explained in Ref. <cit.>.) According to the systematic study of light kaonic nuclei using the stochastic variational method with effective K̅N potentials <cit.>, the discrepancy among the results obtained by employing the two ansatzes increases with the mass of systems, particularly in the case of decay widths. Therefore, ambiguity exists in single-channel calculations performed using effective K̅N potential.Table <ref> shows the binding energy and half decay width of the “K^-pp" system, which are calculated using three methods based on the CSM with the AY potential (single/coupled channel version). The first row shows the result obtained using the CSM with the single-channel version of the AY potential, which is tuned to reproduce the Λ(1405) energy calculated using the coupled-channel version of the AY potential. The results shown in the second, third, and fourth rows are obtained using the ccCSM+Feshbach method, in which three ansatzes are employed for the estimation of the K̅N energy. In addition to the two previously mentioned ansatzes, another ansatz (“Λ^* fixed”) is examined, in which the K̅N energy is equal to that of Λ(1405). The last row shows the result of the present ccCSM calculation. 
Among these calculations, the CSM with the single-channel potential and the ccCSM+Feshbach method with the Λ^*-fixed ansatz are conceptually equivalent, and indeed their results are in good agreement with each other. Furthermore, the result obtained using the ccCSM+Feshbach method with the Particle-picture ansatz is similar to these two results; the Particle-picture ansatz is thus almost equivalent to the Λ^*-fixed ansatz. By contrast, the result obtained using the Field-picture ansatz differs from those mentioned above, particularly in the decay width. However, it is quite close to the result of the present ccCSM calculation, which is a fully coupled-channel calculation. Based on these comparisons, we conclude that the Field picture is the better choice when an effective K̅N potential is used. Needless to say, it is best to carry out a fully coupled-channel calculation, such as that performed in this study.

Here, we comment further on the treatment of the K̅N energy. A more sophisticated ansatz was proposed in an earlier study of K̅NN conducted using the hyperspherical-harmonics approach <cit.>. This ansatz can be considered an improved version of the Particle picture: the K̅N energy is estimated as E_KN=-B_K/2-Δ, where Δ represents a correction. When this improved ansatz is employed in the ccCSM+Feshbach method, the result is almost equal to that obtained using the Field picture <cit.>. Therefore, even if one starts from the Particle picture, its improved version leads back to the Field-picture result. This fact supports the Field picture rather than the Particle picture, consistent with the conclusion of the previous paragraph.

Summary and future plan: We have developed a fully coupled-channel complex scaling method (ccCSM) for studying the most essential kaonic nucleus, “K^-pp". Theoretically, “K^-pp" is regarded as a resonant state of the K̅NN-πΣ N-πΛ N coupled-channel system. Extending our previous study based on the ccCSM+Feshbach method, the K̅NN, πΣ N, and πΛ N channels are all treated explicitly in the present ccCSM calculation. As a first trial, we have employed a phenomenological K̅N potential (the AY potential). We have clearly obtained the “K^-pp" three-body resonance pole. The pole indicates a binding energy of 51 MeV and a half width of the mesonic decay mode of 16 MeV. The analysis of the ccCSM wave function shows that the “K^-pp" resonant state is clearly dominated by the K̅NN component and involves the πΣ N component significantly. The mean distance between two nucleons is approximately 15% smaller than that in nuclear matter at normal density. Hence, we have confirmed that the “K^-pp" system is a piece of dense matter in the case of the AY potential, as suggested by earlier studies. Compared with previous studies, the current fully coupled-channel calculation provides a guideline for using effective K̅N potentials. For the determination of the K̅N energy in the total system, which is necessary in single-channel calculations using effective K̅N potentials, two ansatzes had been proposed in a previous study <cit.>; the present fully coupled-channel ccCSM calculation supports the Field-picture ansatz rather than the Particle-picture ansatz. We emphasize that the present ccCSM calculation is the first in which the “K^-pp" three-body resonant wave function is obtained completely. This enables us to analyze the nature of the “K^-pp" resonant state in detail.
The “K^-pp" resonance pole determined in the present calculation should be the genuine pole for the Hamiltonian (<ref>) involvingthe AY potential. As the AY potential is energy independent, self-consistency for the K̅N energy is not necessary when all channels are treated explicitly. In the present case, by diagonalization of the complex-scaled Hamiltonian matrix, the resonance pole should definitely be obtained. In future, we will perform calculations using the fully ccCSM with chiral SU(3)-based K̅N potential. This is an energy-dependent potential, which is in contrast to the AY potential. Many studies based on chiral dynamics have suggested that Λ(1405) exhibits a double-pole structure owing to the energy dependence of the potential <cit.>. In addition, in our previous study on the ccCSM+Feshbach method <cit.> and in an earlier study on the Faddeev-AGS approach <cit.>, it has been suggested that the K^-pp system might exhibit a double-pole structure similar to Λ(1405). As discussed in Ref. <cit.>, based on the ccCSM+Feshbach calculation, such double poles of the K^-pp system are related to the experimental results mentioned at the beginning of this article: DISTO <cit.> and J-PARC E27 <cit.> collaborations observed a signal close to the πΣ N threshold, while J-PARC E15 <cit.> collaboration observed a signal close to the K̅NN threshold. If these signals are regardedas the K^-pp bound state, the first two results indicate a deeply bound K^-pp state, while the third indicates a shallowly bound K^-pp state. One should recall the second run of the J-PARC E15 experiment <cit.>. New data, which are currently being analyzed, would provide a definite conclusion about the K^-pp bound state, as an exclusive measurement has been carried out with high statistics.When all channels are treated explicitly in the fully ccCSM calculation, which is in contrast to our previous study <cit.>, the difference between their nature can be determined, particularly the difference between their compositions. Such detailed information about K^-pp will provide insights toward the understanding of the experimental results. Furthermore, two-nucleon absorption (K̅NN→ YN), which is not accounted for in the current study, should be considered because it results in a large width of the non-mesonic decay mode <cit.>. Spectrum calculation is important for direct comparison with experimental results; this calculation is performed in Ref. <cit.>. In addition, such a study can be conducted using the fully ccCSM calculation with the completeness relation <cit.>. We expect that the fully ccCSM calculation with chiral SU(3)-based and AY K̅N potentials and experimental data will provide a conclusive answer to this longstanding issue of the most essential kaonic nucleus, K^-pp.Acknowledgments: One of the authors (A. D.) thanks Prof. T. Harada and Prof. H. Horiuchi for productive discussion on the treatment of the coupled-channel problem and Prof. Y. Akaishi for his helpful advice. This work is supported by JSPS KAKENHI Grant Number 25400286 and partially by Grant Numbers 24105008, 15K05091, and 26400281. The calculation in this study was performed using High Performance Computing system (miho) at Research Center for Nuclear Physics (RCNP) in Osaka University.99 ChU:Review T. Hyodo and D. Jido, Prog. Part. Nucl. Phys. 67, 55 (2012).AY_2002 Y. Akaishi and T. Yamazaki,Phys. Rev. C 65, 044005 (2002); T. Yamazaki and Y. Akaishi,Phys. Lett. B 535, 70 (2002). AMDK A. Doté, H. Horiuchi, Y. Akaishiand T. Yamazaki, Phys. Lett. 
B 590, 51 (2004); Phys. Rev. C 70, 044313 (2004).
[ChSymRes:Hatsuda] T. Hatsuda and T. Kunihiro, Phys. Rev. Lett. 55, 158 (1985); Phys. Rep. 247, 221 (1994).
[ChSymRes:Weise] W. Weise, Nucl. Phys. A 553, 59 (1993).
[StrangenessSummary_2016] A. Gal, E. V. Hungerford and D. J. Millener, Rev. Mod. Phys. 88, 035004 (2016).
[Kpp:AY] T. Yamazaki and Y. Akaishi, Phys. Rev. C 76, 045201 (2007).
[Faddeev:Shevchenko] N. V. Shevchenko, A. Gal, J. Mares and J. Révai, Phys. Rev. C 76, 044004 (2007).
[Kpp:DHW] A. Doté, T. Hyodo and W. Weise, Nucl. Phys. A 804, 197 (2008); Phys. Rev. C 79, 014003 (2009).
[Kpp:IKS] Y. Ikeda, H. Kamano and T. Sato, Prog. Theor. Phys. 124, 533 (2010).
[Kpp:BGL] N. Barnea, A. Gal and E. Z. Liverts, Phys. Lett. B 712, 132 (2012).
[Kpp:exp_DISTO] T. Yamazaki et al. (DISTO collaboration), Phys. Rev. Lett. 104, 132502 (2010).
[Kpp-ex:JPARC-E27] Y. Ichikawa et al. (J-PARC E27 collaboration), Prog. Theor. Exp. Phys. 2015, 021D01 (2015).
[Kpp-ex:JPARC-E15] Y. Sada et al. (J-PARC E15 collaboration), Prog. Theor. Exp. Phys. 2016, 051D01 (2016).
[Kpp-ex:JPARC-E15-2nd] F. Sakuma (J-PARC E15 collaboration), JPS Conf. Proc. 13, 010002 (2017). (Proceedings of the 14th International Conference on Meson-Nucleon Physics and the Structure of the Nucleon (MENU2016))
[CSM:Myo2] T. Myo, Y. Kikuchi, H. Masui and K. Kato, Prog. Part. Nucl. Phys. 79, 1 (2014).
[ccCSM+F:Dote] A. Doté, T. Inoue and T. Myo, Prog. Theor. Exp. Phys. 2015, 043D02 (2015).
[Av18] R. B. Wiringa, V. G. J. Stoks and R. Schiavilla, Phys. Rev. C 51, 38 (1995).
[Faddeev:Miyagawa] W. Glöckle and K. Miyagawa, Few-Body Syst. 30, 241 (2001).
[SVM:S-Ohnishi] S. Ohnishi, W. Horiuchi, T. Hoshino, K. Miyahara and T. Hyodo, arXiv:1701.07589 [nucl-th].
[ccCSM+F:Dote(MENU2016)] A. Doté, JPS Conf. Proc. 13, 020001 (2017). (Proceedings of the 14th International Conference on Meson-Nucleon Physics and the Structure of the Nucleon (MENU2016))
[ccCSM+F:Dote(HYP2015)] A. Doté, T. Inoue and T. Myo, Proceedings of the 12th International Conference on Hypernuclear and Strange Particle Physics (HYP2015), to appear in JPS Conf. Proc.
[2Nabs:Bayar-Oset] M. Bayar and E. Oset, Phys. Rev. C 88, 044003 (2013); Nucl. Phys. A 914, 349 (2013).
[JPARC-E15:Sekihara-Oset-Ramos] T. Sekihara, E. Oset and A. Ramos, Prog. Theor. Exp. Phys. 2016, 123D03 (2016).
http://arxiv.org/abs/1702.08002v3
{ "authors": [ "Akinobu Doté", "Takashi Inoue", "Takayuki Myo" ], "categories": [ "nucl-th", "nucl-ex" ], "primary_category": "nucl-th", "published": "20170226084842", "title": "Fully coupled-channel complex scaling method for the $K^-pp$ system" }
[Current address: ]Center for Nanophotonics, AMOLF, Science Park 104, 1098 XG Amsterdam, The Netherlands muhonen@amolf.nl Centre for Quantum Computation and Communication Technology, School of Electrical Engineering and Telecommunications, UNSW Australia, Sydney NSW 2052, Australia [Current address: ]QuTech & Kavli Institute of Nanoscience, TU Delft, 2628 CJ Delft, The Netherlands Centre for Quantum Computation and Communication Technology, School of Electrical Engineering and Telecommunications, UNSW Australia, Sydney NSW 2052, Australia Centre for Quantum Computation and Communication Technology, School of Electrical Engineering and Telecommunications, UNSW Australia, Sydney NSW 2052, Australia [Current address: ]Department of Physics, Simon Fraser University, Burnaby BC V5A 1S6, Canada Centre for Quantum Computation and Communication Technology, School of Electrical Engineering and Telecommunications, UNSW Australia, Sydney NSW 2052, Australia [Current address: ]School of Mathematics & Physics, University of Queensland, Brisbane QLD 4072, Australia. Centre for Quantum Computation and Communication Technology, School of Electrical Engineering and Telecommunications, UNSW Australia, Sydney NSW 2052, Australia Centre for Quantum Computation and Communication Technology, School of Electrical Engineering and Telecommunications, UNSW Australia, Sydney NSW 2052, Australia Centre for Quantum Computation and Communication Technology, School of Physics, University of Melbourne, Melbourne VIC 3010, Australia Centre for Quantum Computation and Communication Technology, School of Physics, University of Melbourne, Melbourne VIC 3010, Australia School of Fundamental Science and Technology, Keio University, 3-14-1 Hiyoshi, 223-8522, Japan Centre for Quantum Computation and Communication Technology, School of Electrical Engineering and Telecommunications, UNSW Australia, Sydney NSW 2052, Australia a.morello@unsw.edu.au Centre for Quantum Computation and Communication Technology, School of Electrical Engineering and Telecommunications, UNSW Australia, Sydney NSW 2052, Australia The understanding of weak measurements and interaction-free measurements has greatly expanded the conceptual and experimental toolbox to explore the quantum world. Here we demonstrate single-shot variable-strength weak measurements of the electron and the nuclear spin states of a single ^31P donor in silicon. We first show how the partial collapse of the nuclear spin due to measurement can be used to coherently rotate the spin to a desired pure state. We explicitly demonstrate that phase coherence is preserved throughout multiple sequential single-shot weak measurements, and that the partial state collapse can be reversed. Second, we use the relation between measurement strength and perturbation of the nuclear state as a physical meter to extract the tunneling rates between the ^31P donor and a nearby electron reservoir from data, conditioned on observing no tunneling events. Our experiments open avenues to measurement-based state preparation, steering and feedback protocols for spin systems in the solid state, and highlight the fundamental connection between information gain and state modification in quantum mechanics.Coherent control via weak measurements in ^31P single-atom electron and nuclear spin qubits A. 
Morello, December 30, 2023.

The quantum measurement postulate, as found in quantum mechanics textbooks, implicitly describes projective (von Neumann) measurements, where a measurement apparatus is coupled to a quantum system and, upon performing the measurement, returns a unique value a_k for some observable Â of the quantum system. If the system was initially in the state |ψ⟩, the act of measurement leaves it in the state |ϕ_k⟩, the eigenstate of the observable Â with eigenvalue a_k. The non-deterministic and non-unitary process through which the act of measurement transforms the initial state |ψ⟩ into the final state |ϕ_k⟩ is known as “wavefunction collapse”, and has been the subject of a century of debate and controversy.

However, as was already appreciated by von Neumann <cit.>, the projective measurement is only a limiting case. One can also have a detector that is only partially correlated with some observable of the quantum system and therefore returns only partial information on the state of the system. Accordingly, the wavefunction need not be fully projected onto an eigenstate, but is only weakly disturbed by the measurement process. The implications and applications of such “weak measurements” and of the corresponding partial collapse of the quantum state have gained considerable attention, especially in the context of quantum information processing. Recent experiments on superconducting qubits have demonstrated partial wavefunction collapse <cit.>, measurement reversal <cit.>, stabilized Rabi oscillations using quantum feedback <cit.>, direct observation of quantum trajectories <cit.>, reduction of decoherence via “uncollapsing” <cit.>, and observation of the back-action steering from a variable-strength measurement <cit.>.

In this Letter, we describe how to apply the principles of weak quantum measurements to the electron and nuclear spin states of an individual ^31P donor atom in silicon. In the context of quantum measurement, the ^31P atom provides access to many key features, in particular those related to negative-result measurements <cit.> and quantum steering <cit.>. In particular, we show that weak measurements can be used to phase-coherently control the state of the ^31P nuclear spin, and that it is possible to preserve phase coherence through the sequential measurement and control steps. This aspect of weak measurements has not been explicitly clarified in the recent literature, and it opens avenues to measurement-based state preparation, EPR steering and possible feedback protocols in these systems. As a further demonstration of the applicability of the weak-measurement toolbox to the ^31P system, we show how the tunneling rate of the electron to a nearby electron reservoir can be extracted from a dataset conditioned on having no tunneling events, in a spirit similar to the Elitzur-Vaidman bomb test <cit.>.

Figure <ref>(a) shows a scanning electron microscope image of our device, which is fabricated on an isotopically enriched ^28Si substrate <cit.>, and where the ^31P atom is introduced via single-ion implantation <cit.>.
This system has gained considerable attention in the field of solid-state quantum information processing, since it contains two natural qubits (the electron spin, with S=1/2 and basis states |↑⟩, |↓⟩, and the ^31P nucleus, with spin I=1/2 and basis states |⇑⟩, |⇓⟩) that exhibit extremely long coherence times <cit.>, high quantum gate fidelities <cit.> and can be efficiently entangled with each other <cit.>. At its core, the quantum state of the ^31P system is accessible through the measurement of the z-projection of the electron spin, where z is the axis along which a strong external magnetic field B_0 (≈ 1.5 T in the present experiment) is applied. The donor is placed in close proximity (≈ 25 nm <cit.>) to a cold (T ≈ 100 mK) electron reservoir. Under suitable biasing conditions, the donor-bound electron can tunnel into the cold reservoir if and only if it is in the excited |↑⟩ state. The positively charged donor left behind after this tunneling event shifts the bias point of a nearby single-electron transistor (SET) and switches it to a high conductance state. Conversely, a |↓⟩ electron cannot escape the donor, leaving the SET in a near-zero conductance state. This spin-dependent tunneling process <cit.> thus gives rise to a single-shot measurement, with fidelity in excess of 90% <cit.>. This mechanism provides a near-ideal negative-result measurement for the |↓⟩ state, which is identified by the absence of a signal in the SET current. The ^31P nuclear spin couples to the electron through the Fermi contact hyperfine interaction A 𝐈·𝐒, with A ≈ 97 MHz in this specific device. As a consequence, the electron spin can have two possible resonance frequencies, ν_ e1,2 = γ_ eB_0 ∓ A/2 [Fig. <ref>(c)], where γ_ e≈ 28 GHz/T is the electron gyromagnetic ratio. Single-shot nuclear readout <cit.> is obtained by initializing the |↓⟩ state and applying a microwave π-pulse at e.g. ν_ e1, where subsequently measuring the electron |↑⟩ state indicates that the nuclear spin state was |⇓⟩. Since we work in the limit γ_ eB_0 ≫ A, the hyperfine interaction can be approximated with A I_z S_z, and therefore commutes with the S_z electron spin observable. This means that the readout of the z-projection of the nuclear spin is of quantum nondemolition type <cit.>, and can be repeated to achieve a readout fidelity approaching 99.9% <cit.>, well beyond that of a single-shot electron readout. The use of an electron π-pulse is just a limiting case, where one gains maximum information about the nuclear spin state. Here instead we explore the more general case where the electron rotation angle is θ≠π <cit.>, which causes the subsequent electron readout to provide only partial information on the nuclear state. This realizes a tunable weak measurement, with strength controlled by the electron rotation angle θ. We show below that, as a result of a weak nuclear measurement conditioned on measuring electron |↓⟩, the nuclear state can be coherently rotated to an arbitrary pure state. This could be extended to provide an interesting implementation of EPR steering <cit.> with spins in the solid state, by applying ESR pulses simultaneously on both ν_ e1 and ν_ e2 <cit.>. Our experiments were conducted by exciting only one ESR frequency at a time, and therefore we will refrain from using the term “steering” to describe the process.Let us assume that the nuclear spin is initially in the state |ψ_ n0⟩ = (|⇓⟩ + |⇑⟩)/√(2), while the electron spin is initialized in its ground state |↓⟩. 
We then apply a microwave pulse at frequency ν_ e2 to produce a rotation by an angle θ of the electron spin, conditioned on the nuclear spin being in the |⇑⟩ state. The full electron-nuclear state then becomes |Ψ_ en⟩ = [ |⇓↓⟩ + cos(θ/2) |⇑↓⟩ + sin(θ/2) |⇑↑⟩]/√(2). A readout of the electron spin state will then produce |↑⟩ with probability P_↑ = sin^2(θ/2)/2 and leave the nuclear spin in the state |⇑⟩. More interestingly, with probability P_↓ = [1+cos^2(θ/2)]/2 the electron readout will produce |↓⟩ and leave the nuclear spin in the coherent superposition state |ψ_ n⟩ = [|⇓⟩+cos(θ/2)|⇑⟩]/√(1+cos^2(θ/2)), which has therefore been rotated from the original state |ψ_ n0⟩ using only electron spin resonance (ESR) pulses and electron spin measurements. Importantly, as we show below, this rotation is fully coherent and can be used to prepare any nuclear spin superposition state. The rotation is probabilistic in the sense that it can fail (if the outcome of the electron readout is |↑⟩), but in case of success (heralded by the |↓⟩ electron readout) the resulting state is fully deterministic.

A more complete description of the process is obtained through the density matrix formalism <cit.>. The initial nuclear spin state is

ρ_0 = |ψ_ n0⟩⟨ψ_ n0| = 1/2[ 1 1; 1 1 ].

After the θ rotation of the electron spin (initially |↓⟩) conditioned on the |⇑⟩ nuclear state, and a |↓⟩ electron readout, the nuclear spin is left in the state

ρ(θ) = 1/1+cos^2(θ/2)[ cos^2(θ/2) cos(θ/2); cos(θ/2) 1 ],

which notably is a pure state for all values of θ. This readily generalizes to multiple electron rotation and measurement steps. For example, after two sequential applications of the sequence, the nuclear spin state is (conditional on reading |↓⟩ at both steps)

ρ(θ_1,θ_2) = 1/1+cos^2(θ_1/2)cos^2(θ_2/2) × [ cos^2(θ_1/2)cos^2(θ_2/2) cos(θ_1/2)cos(θ_2/2); cos(θ_1/2)cos(θ_2/2) 1 ],

assuming phase coherence is preserved at the intermediate electron readout step (see below).

An interesting scenario appears if the second electron rotation is applied at ν_ e1 instead of ν_ e2, so that the rotation is conditioned on the nuclear |⇓⟩ state. Calling ϕ the rotation angle of the microwave pulse at ν_ e1, the final state becomes

ρ(θ,ϕ) = 1/cos^2(ϕ/2)+cos^2(θ/2) × [ cos^2(θ/2) cos(θ/2)cos(ϕ/2); cos(θ/2)cos(ϕ/2) cos^2(ϕ/2) ].

If we set ϕ = θ, the final state is ρ(θ,θ) = ρ_0. This is known as “measurement reversal” <cit.>: the second weak measurement of the nuclear spin erases the effect of the first one.

Figure <ref> shows experimental data obtained with full quantum state tomography, i.e., measurement of all three nuclear spin components σ_z = (ρ_1,1-ρ_2,2), σ_x = (ρ_1,2+ρ_2,1), and σ_y = i(ρ_1,2-ρ_2,1). The left column of Fig. <ref> is the result of a single nuclear rotation step, consisting of an ESR pulse at ν_ e2 inducing a rotation of angle θ around the x-axis of the electron spin, followed by single-shot electron readout and postselection on the |↓⟩ outcome. The dashed lines show the expected nuclear state based on the density matrix description presented above, without any free fitting parameters, and are in excellent agreement with the data.

The middle column in Fig. <ref> illustrates the application of two sequential rotation steps, conducted for simplicity with the same ESR rotation angle θ on ν_ e2 at both steps. The fact that the data (especially the σ_x component) follow the theoretical predictions indicates that the nuclear state remains coherent throughout the sequence, which contains two weak nuclear measurements. (A short numerical sketch verifying these conditional-state predictions is given below.)
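The conditional states above are straightforward to verify numerically. The following short sketch is our own illustrative code (not the analysis code used for the figures); the basis ordering, the half-angle rotation convention and the projector definitions are assumptions chosen to match the formulas above.

```python
import numpy as np

# Basis ordering assumed: nucleus ⊗ electron, i.e. |⇑↑>, |⇑↓>, |⇓↑>, |⇓↓>.
def R(theta):
    """ESR rotation of the electron by angle theta (half-angle convention)."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

I2 = np.eye(2)
P_nuc_up = np.diag([1.0, 0.0])   # nuclear |⇑><⇑|
P_nuc_dn = np.diag([0.0, 1.0])   # nuclear |⇓><⇓|
P_el_dn = np.diag([0.0, 1.0])    # electron |↓><↓|

def U_nu_e2(theta):  # pulse at nu_e2: rotate the electron if nucleus is |⇑>
    return np.kron(P_nuc_up, R(theta)) + np.kron(P_nuc_dn, I2)

def U_nu_e1(phi):    # pulse at nu_e1: rotate the electron if nucleus is |⇓>
    return np.kron(P_nuc_dn, R(phi)) + np.kron(P_nuc_up, I2)

def condition_on_down(rho):
    """Project on electron |↓>, renormalize, then trace out the electron."""
    proj = np.kron(I2, P_el_dn)
    rho = proj @ rho @ proj
    rho = rho / np.trace(rho)
    return rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

theta = 0.7
psi_n0 = np.array([1.0, 1.0]) / np.sqrt(2)   # (|⇑> + |⇓>)/sqrt(2)
rho_n0 = np.outer(psi_n0, psi_n0)
el_dn = np.diag([0.0, 1.0])                  # electron initialized in |↓>

# One weak measurement: the conditional nuclear state is pure for all theta.
U1 = U_nu_e2(theta)
rho_n = condition_on_down(U1 @ np.kron(rho_n0, el_dn) @ U1.T)
print(np.isclose(np.trace(rho_n @ rho_n), 1.0))   # True: purity preserved

# Measurement reversal: a second pulse with phi = theta on the other ESR line.
U2 = U_nu_e1(theta)
rho_rev = condition_on_down(U2 @ np.kron(rho_n, el_dn) @ U2.T)
print(np.allclose(rho_rev, rho_n0))               # True: rho_0 is recovered
```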
In other words, the partial collapse of the nuclear state after the first weak measurement is a phase-coherent, predictable process, although the evolution is non-unitary. A minimum requirement for observing this effect is that the dephasing time of the nuclear spin qubit must be longer than the electron readout time. The ^31P nuclear spin qubit in ^28Si already has an intrinsically long dephasing time (T_2^∗≈ 0.5 ms <cit.>), but here we further extend it by applying two NMR refocusing pulses during the 3 ms electron readout step (see Fig. 1(d)). We also frequency-modulate the NMR source to track the resonance frequency of the nuclear spin qubit during the electron readout phase, since the change in the donor electrostatic potential under readout conditions causes a Stark shift of the resonance frequency <cit.>.

In the right column of Fig. <ref> we present the so-called measurement reversal <cit.>, which requires a rotation by θ on ν_ e2 and a rotation by ϕ = θ on ν_ e1. As predicted, we recover the original state each time (again, conditional on obtaining |↓⟩ at each electron readout step). Note that when θ = π, the nuclear measurement becomes fully projective and the probability of a successful reversal becomes zero (all success probabilities are presented in supplementary Fig. 1). The data points around θ = π are thus only statistical fluctuations.

We now explore the possibility of performing a weak electron spin measurement, and the effects that such a measurement has on the nuclear spin. The spin-dependent tunneling mechanism that discriminates between the |↑⟩ and |↓⟩ states yields a fully projective measurement only in the limit Γ_↑,out t_ m→∞, where t_ m is the measurement time and Γ_↑,out is the tunnel-out rate for a |↑⟩ electron, defined such that the probability for a |↑⟩ electron to have tunnelled out of the donor after time t_ m is P_↑,out(t_ m) = 1 - exp(-Γ_↑,out t_ m). For a finite value of Γ_↑,out t_ m, the absence of a tunnel-out event constitutes only a weak |↓⟩ measurement. The effect on the nuclear spin of a weak electron measurement can be captured quantitatively in the density matrix formalism, by modifying Eq. <ref> to include the probability 1-P_↑,out(t_ m) that a |↑⟩ electron does not tunnel out within the measurement time <cit.>:

ρ(θ,t_ m) = 1/[1+cos^2(θ/2)+(1-P_↑,out(t_ m))sin^2(θ/2)] × [ cos^2(θ/2)+(1-P_↑,out(t_ m))sin^2(θ/2) cos(θ/2); cos(θ/2) 1 ].

Hence, the expectation value of σ_z as a function of measurement time, conditioned on measuring |↓⟩ (no tunneling), is

⟨σ_z(t_ m)⟩ = [cos^2(θ/2)+exp(-Γ_↑,out t_ m)sin^2(θ/2)-1] / [cos^2(θ/2)+exp(-Γ_↑,out t_ m)sin^2(θ/2)+1],

which for θ=π reduces to the particularly simple form

⟨σ_z(t_ m)⟩ = [exp(-Γ_↑,out t_ m)-1] / [exp(-Γ_↑,out t_ m)+1].

Solving for Γ_↑,out as a function of ⟨σ_z(t_ m)⟩ we find

1/Γ_↑,out = -t_ m / ln[(1+⟨σ_z(t_ m)⟩)/(1-⟨σ_z(t_ m)⟩)].

In Fig. <ref> we show the results of an experiment where we perform the above-mentioned protocol, i.e., we prepare the nucleus in |ψ_ n0⟩ = (|⇓⟩ + |⇑⟩)/√(2), the electron in |↓⟩, and then apply an electron π-pulse at ν_ e1, thus leaving the electron-nuclear system in the Bell state <cit.> |Φ^+⟩ = (|↓⇓⟩ + |↑⇑⟩)/√(2). We then bring the electron towards the readout position for a time t_ m = 1.5 ms and, conditional on having no tunneling events, we subsequently measure the nuclear polarization ⟨σ_z⟩.
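The closed-form inversion above is how the tunnel times in the experiment are obtained from the measured polarization. A minimal sketch of the inversion (our own illustrative code, with a hypothetical example value for the polarization):

```python
import numpy as np

def tunnel_time_from_sigma_z(sigma_z, t_m):
    """Invert <sigma_z(t_m)> = (exp(-G*t_m) - 1)/(exp(-G*t_m) + 1)
    to obtain 1/Gamma_up,out. Valid for theta = pi and -1 < sigma_z < 0,
    i.e. a finite measurement strength."""
    return -t_m / np.log((1.0 + sigma_z) / (1.0 - sigma_z))

# Example: t_m = 1.5 ms and a hypothetical measured polarization of -0.4
print(tunnel_time_from_sigma_z(-0.4, 1.5e-3))  # ~1.8e-3 s, i.e. 1/Gamma ≈ 1.8 ms
```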
The experiment is repeated at different values of the gate voltage V_ DG, which controls the donor electrochemical potential μ_ D relative to the Fermi level of the electron reservoir <cit.>, and thereby tunes the donor-reservoir tunnel rate Γ_↑,out. For V_ DG ≳ 0.2 V the |↑⟩ state is well below the Fermi level and neither the |↓⟩ nor the |↑⟩ state has a significant probability of tunneling out, i.e., the measurement strength vanishes: the absence of a tunneling event does not imply a |↓⟩ state. Accordingly, we find ⟨σ_z⟩ ≈ 0 in that limit, i.e., the nuclear polarization has not been perturbed from its initial value. For V_ DG < 0.2 V, 1/Γ_↑,out becomes shorter and ⟨σ_z⟩ veers towards negative values, which indicates that the electron |↓⟩ measurement is becoming stronger, thus turning the initial |Φ^+⟩ Bell state towards |↓⇓⟩. Using Eq. <ref> we can extract the numerical value of 1/Γ_↑,out and compare it [Fig. <ref>(b)] to the tunnel time extracted directly from tunneling probabilities. The two methods agree almost perfectly, confirming the validity of our approach. The non-monotonic behavior of Γ_↑,out(V_ DG) is related to modulations in the density of states of the electron reservoir <cit.>.

Unlike the weak nuclear measurement described earlier, this process using a weak electron measurement does not preserve the purity of the nuclear spin state. Also, the use of a maximally entangled |Φ^+⟩ Bell state as the starting point of the sequence is inconsequential for this particular experiment - the same result would be obtained starting from an incoherent mixture of |↓⇓⟩ and |↑⇑⟩, though the perfect correlation between the two spins is obviously required. Nonetheless, the process provides a curious example of interaction-free measurement <cit.> in the solid state.

In conclusion, we have shown the application of several concepts and tools of weak single-shot measurements to a model solid-state spin system. In particular, we have demonstrated the ability to coherently control a nuclear spin using only ESR pulses and electron spin readout, and we have shown how to measure tunnel rates without any tunneling events. In the future, these techniques can be applied to a variety of interesting problems, such as the study of qubit dynamics under driving and weak measurement <cit.>, past quantum states of a monitored system <cit.>, and the use of steering to improve qubit initialization.

We thank K. Mølmer and R. Ruskov for insightful comments. This research was funded by the Australian Research Council through a Discovery Project (DP150101863) and the Centre of Excellence Quantum Computation and Communication Technology (CE11E0001027), the US Army Research Office (W911NF-13-1-0024) and the Commonwealth Bank of Australia. We acknowledge support from the Australian National Fabrication Facility, and from the laboratory of Prof Robert Elliman at the Australian National University for the ion implantation facilities. The work at Keio has been supported in part by KAKENHI (S) No. 26220602, Core-to-Core Program by JSPS, and Spintronics Research Network of Japan.

References:
[von Neumann(1932)] J.
von Neumann, Mathematische Grundlagen der Quantenmechanik (Springer, Berlin, 1932).
[Katz et al.(2006)] N. Katz, M. Ansmann, R. C. Bialczak, E. Lucero, R. McDermott, M. Neeley, M. Steffen, E. M. Weig, A. N. Cleland, J. M. Martinis, and A. N. Korotkov, Science 312, 1498 (2006).
[Katz et al.(2008)] N. Katz, M. Neeley, M. Ansmann, R. C. Bialczak, M. Hofheinz, E. Lucero, A. O'Connell, H. Wang, A. N. Cleland, J. M. Martinis, and A. N. Korotkov, Phys. Rev. Lett. 101, 200401 (2008).
[Vijay et al.(2012)] R. Vijay, C. Macklin, D. H. Slichter, S. J. Weber, K. W. Murch, R. Naik, A. N. Korotkov, and I. Siddiqi, Nature 490, 77 (2012).
[Murch et al.(2013)] K. W. Murch, S. J. Weber, C. Macklin, and I. Siddiqi, Nature 502, 211 (2013).
[Weber et al.(2014)] S. J. Weber, A. Chantasri, J. Dressel, A. N. Jordan, K. W. Murch, and I. Siddiqi, Nature 511, 570 (2014).
[Zhong et al.(2014)] Y. P. Zhong, Z. L. Wang, J. M. Martinis, A. N. Cleland, A. N. Korotkov, and H. Wang, Nat. Commun. 5 (2014).
[Hatridge et al.(2013)] M. Hatridge, S. Shankar, M. Mirrahimi, F. Schackert, K. Geerlings, T. Brecht, K. M. Sliwa, B. Abdo, L. Frunzio, S. M. Girvin, R. J. Schoelkopf, and M. H. Devoret, Science 339, 178 (2013).
[Groen et al.(2013)] J. P. Groen, D. Ristè, L. Tornberg, J. Cramer, P. C. de Groot, T. Picot, G. Johansson, and L. DiCarlo, Phys. Rev. Lett. 111, 090506 (2013).
[Dicke(1981)] R. H. Dicke, Am. J. Phys. 49, 925 (1981).
[Wiseman et al.(2007)] H. M. Wiseman, S. J. Jones, and A. C.
Doherty, Phys. Rev. Lett. 98, 140402 (2007).
[Cavalcanti and Skrzypczyk(2017)] D. Cavalcanti and P. Skrzypczyk, Rep. Prog. Phys. 80, 024001 (2017).
[Elitzur and Vaidman(1993)] A. C. Elitzur and L. Vaidman, Found. Phys. 23, 987 (1993).
[Itoh and Watanabe(2014)] K. M. Itoh and H. Watanabe, MRS Commun. 4, 143 (2014).
[van Donkelaar et al.(2015)] J. van Donkelaar, C. Yang, A. D. C. Alves, J. C. McCallum, C. Hougaard, B. C. Johnson, F. E. Hudson, A. S. Dzurak, A. Morello, D. Spemann, and D. N. Jamieson, J. Phys.: Condens. Matter 27, 154204 (2015).
[Tyryshkin et al.(2012)] A. M. Tyryshkin, S. Tojo, J. J. L. Morton, H. Riemann, N. V. Abrosimov, P. Becker, H.-J. Pohl, T. Schenkel, M. L. W. Thewalt, K. M. Itoh, and S. A. Lyon, Nat. Mater. 11, 143 (2012).
[Saeedi et al.(2013)] K. Saeedi, S. Simmons, J. Z. Salvail, P. Dluhy, H. Riemann, N. V. Abrosimov, P. Becker, H.-J. Pohl, J. J. L. Morton, and M. L. W. Thewalt, Science 342, 830 (2013).
[Muhonen et al.(2014)] J. T. Muhonen, J. P. Dehollain, A. Laucht, F. E. Hudson, R. Kalra, T. Sekiguchi, K. M. Itoh, D. N. Jamieson, J. C. McCallum, A. S. Dzurak, and A. Morello, Nat. Nanotech. 9, 986 (2014).
[Muhonen et al.(2015)] J. T. Muhonen, A. Laucht, S. Simmons, J. P. Dehollain, R. Kalra, F. E. Hudson, S. Freer, K. M. Itoh, D. N. Jamieson, J. C. McCallum, A. S. Dzurak, and A. Morello, J. Phys.: Condens. Matter 27, 154205 (2015).
[Dehollain et al.(2016a)] J. P. Dehollain, J. T. Muhonen, R. Blume-Kohout, K. M. Rudinger, J. K. Gamble, E. Nielsen, A. Laucht, S. Simmons, R.
Kalra, A. S. Dzurak, and A. Morello, New J. Phys. 18, 103018 (2016).
[Simmons et al.(2011)] S. Simmons, R. M. Brown, H. Riemann, N. V. Abrosimov, P. Becker, H.-J. Pohl, M. L. Thewalt, K. M. Itoh, and J. J. Morton, Nature 470, 69 (2011).
[Dehollain et al.(2016b)] J. P. Dehollain, S. Simmons, J. T. Muhonen, R. Kalra, A. Laucht, F. Hudson, K. M. Itoh, D. N. Jamieson, J. C. McCallum, A. S. Dzurak, and A. Morello, Nat. Nanotech. 11, 242 (2016).
[Mohiyaddin et al.(2013)] F. A. Mohiyaddin, R. Rahman, R. Kalra, G. Klimeck, L. C. Hollenberg, J. J. Pla, A. S. Dzurak, and A. Morello, Nano Lett. 13, 1903 (2013).
[Elzerman et al.(2004)] J. M. Elzerman, R. Hanson, L. H. Willems van Beveren, B. Witkamp, L. M. K. Vandersypen, and L. P. Kouwenhoven, Nature 430, 431 (2004).
[Morello et al.(2009)] A. Morello, C. C. Escott, H. Huebl, L. H. Willems van Beveren, L. C. L. Hollenberg, D. N. Jamieson, A. S. Dzurak, and R. G. Clark, Phys. Rev. B 80, 081307 (2009).
[Morello et al.(2010)] A. Morello, J. J. Pla, F. A. Zwanenburg, K. W. Chan, K. Y. Tan, H. Huebl, M. Möttönen, C. D. Nugroho, C. Yang, J. A. van Donkelaar, A. D. C. Alves, D. N. Jamieson, C. C. Escott, L. C. L. Hollenberg, R. G. Clark, and A. S. Dzurak, Nature 467, 687 (2010).
[Pla et al.(2013)] J. J. Pla, K. Y. Tan, J. P. Dehollain, W. H. Lim, J. J. L. Morton, F. A. Zwanenburg, D. N. Jamieson, A. S. Dzurak, and A. Morello, Nature 496, 334 (2013).
[Braginsky et al.(1980)] V. B. Braginsky, Y. I. Vorontsov, and K. S.
Thorne, Science 209, 547 (1980).
[Blok et al.(2014)] M. S. Blok, C. Bonato, M. L. Markham, D. J. Twitchen, V. V. Dobrovitski, and R. Hanson, Nat. Phys. 10, 189 (2014).
[sup()] See supplemental material at [url will be inserted by publisher] for additional figures and text on steering probabilities, pulse sequences and density matrix calculations.
[Korotkov and Jordan(2006)] A. N. Korotkov and A. N. Jordan, Phys. Rev. Lett. 97, 166805 (2006).
[Laucht et al.(2015)] A. Laucht, J. T. Muhonen, F. A. Mohiyaddin, R. Kalra, J. P. Dehollain, S. Freer, F. E. Hudson, M. Veldhorst, R. Rahman, G. Klimeck, K. M. Itoh, D. N. Jamieson, J. C. McCallum, A. S. Dzurak, and A. Morello, Science Advances 1 (2015).
[Möttönen et al.(2010)] M. Möttönen, K. Y. Tan, K. W. Chan, F. A. Zwanenburg, W. H. Lim, C. C. Escott, J.-M. Pirkkalainen, A. Morello, C. Yang, J. A. van Donkelaar, A. D. C. Alves, D. N. Jamieson, L. C. L. Hollenberg, and A. S. Dzurak, Phys. Rev. B 81, 161304 (2010).
[Ruskov et al.(2007)] R. Ruskov, A. Mizel, and A. N. Korotkov, Phys. Rev. B 75, 220501 (2007).
[Gammelmark et al.(2013)] S. Gammelmark, B. Julsgaard, and K. Mølmer, Phys. Rev. Lett. 111, 160401 (2013).

SUPPLEMENTARY MATERIAL: Coherent control via weak measurements in ^31P single-atom electron and nuclear spin qubits

§ SUCCESS PROBABILITIES

Performing a conditional weak measurement is necessarily a probabilistic process. As mentioned in the main text, the success probability for a single measurement [starting from the nuclear spin state in equation (1) of the main text] is P_1 = [1+cos^2(θ/2)]/2. It is however notable that, as this probability depends on the nuclear spin populations at the start of the measurement, the success probability of two sequential weak measurements is not simply this value squared.
Rather, the success probability for n sequential weak measurements in our case is P_n = [1+cos^2n(θ/2)]/2 if all measurements are performed with electron spin rotation θ on the same electron spin resonance frequency. For the measurement reversal (two weak nuclear measurements, each using a different ESR frequency) the success probability reads P_rev = cos^2(θ/2), which is notably zero for θ=π, as should be expected (one cannot reverse a projective measurement). These predictions are plotted together with the data in Fig. 4.

§ DENSITY MATRIX CALCULATIONS

Below we refer to the nuclear spin state with the thick arrow (⇑ or ⇓) and the electron spin state with the narrow arrow (↑ or ↓). The Pauli operators are σ^i, where i=e,n refers to either the electron or the nuclear spin, respectively. We start from the state Φ = 1/√(2)(|⇑⟩+|⇓⟩)⊗|↓⟩, i.e., in density matrix form (in the basis |⇑↑⟩, |⇑↓⟩, |⇓↑⟩, |⇓↓⟩)

ρ_0 = 1/2[ 0 0 0 0; 0 1 0 1; 0 0 0 0; 0 1 0 1 ].

The conditional rotation matrix reads U(θ) = |⇓⟩⟨⇓|⊗ I + |⇑⟩⟨⇑|⊗ R(θ), where R(θ) is the rotation matrix (with the half-angle convention of the main text)

R(θ) = [ cos(θ/2) sin(θ/2); -sin(θ/2) cos(θ/2) ].

Hence, after the initialization step and the conditional electron spin rotation of an angle θ, the system state is

ρ_θ = U(θ)ρ_0 U^†(θ) = 1/2[ sin^2(θ/2) cos(θ/2)sin(θ/2) 0 sin(θ/2); cos(θ/2)sin(θ/2) cos^2(θ/2) 0 cos(θ/2); 0 0 0 0; sin(θ/2) cos(θ/2) 0 1 ],

which is an entangled electron-nuclear state for all θ≠ 0, 2π (according to the PPT criterion).

If we then simply trace out the electron (no conditioning), we obtain the nuclear spin state

ρ_n^u = Tr_2(ρ_θ) = 1/2[ sin^2(θ/2)+cos^2(θ/2) cos(θ/2); cos(θ/2) 1 ] = 1/2[ 1 cos(θ/2); cos(θ/2) 1 ],

showing that the expectation value of σ_z^n remains constant independently of θ, but the off-diagonal elements decay as a function of the measurement strength. In the limiting case of θ=π, we are left with a classical mixture of up and down nuclear spin states.

More interestingly, tracing out the electron conditionally on measuring |↓⟩ we obtain (after renormalization)

ρ_n^c = Tr_2[ρ_θ(I⊗|0⟩⟨0|)] = 1/1+cos^2(θ/2)[ cos^2(θ/2) cos(θ/2); cos(θ/2) 1 ],

which is the state mentioned in the main text. The second measurement is then simply done by repeating the process starting from this state,

ρ_θ^(2) = U(θ)(ρ_n^c⊗|↓⟩⟨↓|) U^†(θ),

and tracing out similarly. For the measurement reversal, we need the rotation matrix for the other electron spin resonance frequency, which reads

U(θ) = |⇓⟩⟨⇓|⊗ R(θ) + |⇑⟩⟨⇑|⊗ I.

Otherwise the procedure is the same. Expectation values of the nuclear spin components for one or two measurements with rotation θ are plotted in Fig. 5. The measurement reversal should simply preserve all three components. These are also plotted with the data in main figure 2.

Finally, if we also add a finite electron tunnel-out probability to the process described above, we obtain

ρ_n^c = Tr_2{ρ_θ[ I⊗( |0⟩⟨0|+exp(-Γ t)|1⟩⟨1|) ] } = 1/[1+cos^2(θ/2)+exp(-Γ t)sin^2(θ/2)][ cos^2(θ/2)+exp(-Γ t)sin^2(θ/2) cos(θ/2); cos(θ/2) 1 ].

Note that, unlike all the previous states, this one is not pure unless exp(-Γ t)sin^2(θ/2) = 0.

§ NOTES ON EPR STEERING

The use of the word “steering” in the context of quantum systems is somewhat ambiguous in the existing literature. The experiments in this paper demonstrate coherent control of a qubit state by measuring another, correlated, qubit state.
This is in many contexts called steering, and this usage of the word indeed makes intuitive sense: one is steering the nuclear spin (qubit) by weakly measuring it via the electron (ancilla). However, the word steering - in the quantum context - is also commonly reserved for what is more precisely known as EPR steering. In the operational definition of Wiseman et al., EPR steering consists of a “game” where Alice must convince Bob that she has shared with him an entangled state. To do so, she wants to show Bob that she has the ability to control his quantum state by choosing which measurement to perform at her end. This, in turn, can be formalized in experimentally testable EPR steering inequalities.

A demonstration of EPR steering could be conducted on the ^31P electron-nuclear system, where “Alice” is the electron spin and “Bob” is the nuclear spin, by following three steps: (i) Initialize the electron-nuclear system in a maximally entangled Bell state, for example |Φ^+⟩ = (|↓⇓⟩ + |↑⇑⟩)/√(2), as described in the main text. (ii) Define different measurement axes for the electron spin. This requires an unconditional electron spin rotation, which could be obtained by simultaneously applying ESR pulses of rotation angle θ on both ν_ e1 and ν_ e2, before a projective electron spin measurement. This is the key difference between EPR steering and the experiments shown in the main text, where all electron spin rotations were conditional on the nuclear spin state; simultaneous excitation of ν_ e1 and ν_ e2 was not feasible in our setup. (iii) Conditioned on measuring electron spin |↓⟩, perform nuclear state tomography.

Supplementary figure 6 shows the expected nuclear spin components as a function of θ. At θ = 0 the electron spin measurement is along the z-axis, and therefore the subsequent measurement of σ_z^n can be predicted with unit accuracy, whereas the measurement of σ_x^n is completely undetermined. At θ = π/2 the electron spin measurement is along the x-axis, and now the reverse is true. This simple simulation captures the essence of EPR steering: the state of Bob's particle tracks exactly the choice of measurement basis made by Alice. (A numerical sketch of this protocol is given below.)

We note that the violation of Bell's inequality has already been demonstrated with the electron-nuclear system studied here, and it is known that the requirements for EPR steering are less strict than those for Bell inequalities. Therefore, using e.g. two separate microwave sources to excite ν_ e1 and ν_ e2 simultaneously, it should be possible to demonstrate EPR steering in the ^31P system.
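To make step (iii) concrete, the following sketch (illustrative code only, using the same assumed basis conventions as in the density-matrix section: Alice's measurement is implemented as an unconditional rotation by θ of the electron followed by a projective z readout, postselected on |↓⟩) reproduces the qualitative behavior of supplementary figure 6.

```python
import numpy as np

sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.diag([1.0, -1.0])
I2 = np.eye(2)

def R(theta):  # unconditional electron rotation (half-angle convention)
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

# Bell state (|↓⇓> + |↑⇑>)/sqrt(2) in the basis |⇑↑>, |⇑↓>, |⇓↑>, |⇓↓>.
psi = np.zeros(4)
psi[0] = psi[3] = 1.0 / np.sqrt(2)
rho = np.outer(psi, psi)

for theta in np.linspace(0.0, np.pi / 2, 5):
    U = np.kron(I2, R(theta))                # Alice rotates only the electron
    proj = np.kron(I2, np.diag([0.0, 1.0]))  # ... then projects on |↓>
    cond = proj @ U @ rho @ U.T @ proj
    cond = cond / np.trace(cond)
    rho_n = cond.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)  # Bob's state
    print(theta, np.trace(rho_n @ sz), np.trace(rho_n @ sx))
```

At θ=0 the output is (⟨σ_z^n⟩, ⟨σ_x^n⟩) = (-1, 0), while at θ=π/2 it is (0, 1): Bob's conditional state follows Alice's choice of measurement axis, as described above.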
http://arxiv.org/abs/1702.07991v1
{ "authors": [ "J. T. Muhonen", "J. P. Dehollain", "A. Laucht", "S. Simmons", "R. Kalra", "F. E. Hudson", "D. N. Jamieson", "J. C. McCallum", "K. M. Itoh", "A. S. Dzurak", "A. Morello" ], "categories": [ "quant-ph", "cond-mat.mes-hall" ], "primary_category": "quant-ph", "published": "20170226060901", "title": "Coherent control via weak measurements in $^{31}$P single-atom electron and nuclear spin qubits" }
Improved Variational Autoencoders for Text Modeling using Dilated Convolutions

Zichao Yang, Zhiting Hu, Ruslan Salakhutdinov, Taylor Berg-Kirkpatrick (Carnegie Mellon University)
Correspondence: Zichao Yang, zichaoy@cs.cmu.edu

Recent work on generative text modeling has found that variational autoencoders (VAE) with LSTM decoders perform worse than simpler LSTM language models <cit.>. This negative result is so far poorly understood, but has been attributed to the propensity of LSTM decoders to ignore conditioning information from the encoder. In this paper, we experiment with a new type of decoder for VAE: a dilated CNN. By changing the decoder's dilation architecture, we control the size of context from previously generated words. In experiments, we find that there is a trade-off between contextual capacity of the decoder and effective use of encoding information. We show that when carefully managed, VAEs can outperform LSTM language models. We demonstrate perplexity gains on two datasets, representing the first positive language modeling result with VAE. Further, we conduct an in-depth investigation of the use of VAE (with our new decoding architecture) for semi-supervised and unsupervised labeling tasks, demonstrating gains over several strong baselines.

§ INTRODUCTION

Generative models play an important role in NLP, both in their use as language models and because of their ability to effectively learn from unlabeled data. By parameterizing generative models using neural nets, recent work has proposed model classes that are particularly expressive and can potentially model a wide range of phenomena in language and other modalities. We focus on a specific instance of this class: the variational autoencoder[The name VAE is often used to refer to both a model class and an associated inference procedure.] (VAE) <cit.>. The generative story behind the VAE (to be described in detail in the next section) is simple: First, a continuous latent representation is sampled from a multivariate Gaussian. Then, an output is sampled from a distribution parameterized by a neural decoder, conditioned on the latent representation. The latent representation (treated as a latent variable during training) is intended to give the model more expressive capacity when compared with simpler neural generative models, for example conditional language models. The choice of decoding architecture and final output distribution, which connect the latent representation to the output, depends on the kind of data being modeled. The VAE owes its name to an accompanying variational technique <cit.> that has been successfully used to train such models on image data <cit.>. The application of VAEs to text data has been far less successful <cit.>. The obvious choice of decoding architecture for a textual VAE is an LSTM, a typical workhorse in NLP. However, <cit.> found that using an LSTM-VAE for text modeling yields higher perplexity on held-out data than using an LSTM language model. In particular, they observe that the LSTM decoder in the VAE does not make effective use of the latent representation during training and, as a result, the VAE collapses into a simple language model. Related work <cit.> has used simpler decoders that model text as a bag of words.
Their results indicate better use of latent representations, but their decoders cannot effectively model longer-range dependencies in text and thus underperform in terms of final perplexity.

Motivated by these observations, we hypothesize that the contextual capacity of the decoder plays an important role in whether VAEs effectively condition on the latent representation when trained on text data. We propose the use of a dilated CNN as a decoder in VAE, inspired by the recent success of using CNNs for audio, image and language modeling <cit.>. In contrast with prior work where extremely large CNNs are used, we exploit the dilated CNN for its flexibility in varying the amount of conditioning context. In the two extremes, depending on the choice of dilation, the CNN decoder can reproduce a simple MLP using a bag-of-words representation of text, or can reproduce the long-range dependence of recurrent architectures (like an LSTM) by conditioning on the entire history. Thus, by choosing a dilated CNN as the decoder, we are able to conduct experiments where we vary contextual capacity, finding a sweet spot where the decoder can accurately model text but does not yet overpower the latent representation.

We demonstrate that when this trade-off is correctly managed, textual VAEs can perform substantially better than simple LSTM language models, a finding consistent with recent image modeling experiments using variational lossy autoencoders <cit.>. We go on to show that VAEs with carefully selected CNN decoders can be quite effective for semi-supervised classification and unsupervised clustering, outperforming several strong baselines (from <cit.>) on both text categorization and sentiment analysis.

Our contributions are as follows: First, we propose the use of a dilated CNN as a new decoder for VAE. We then empirically evaluate several dilation architectures with different capacities, finding that reduced contextual capacity leads to stronger reliance on latent representations. By picking a decoder with suitable contextual capacity, we find that our VAE performs better than LSTM language models on two data sets. We also explore the use of dilated CNN VAEs for semi-supervised classification and find that they perform better than strong baselines from <cit.>. Finally, we verify that the same framework can be used effectively for unsupervised clustering.

§ MODEL

In this section, we begin by providing background on the use of variational autoencoders for language modeling. Then we introduce the dilated CNN architecture that we will use as a new decoder for VAE in experiments. Finally, we describe the generalization of VAE that we will use to conduct experiments on semi-supervised classification.

§.§ Background on Variational Autoencoders

Neural language models <cit.> typically generate each token x_t conditioned on the entire history of previously generated tokens:

p(𝐱) = ∏_t p(x_t | x_1, x_2, ..., x_t-1).

State-of-the-art language models often parametrize these conditional probabilities using RNNs, which compute an evolving hidden state over the text that is used to predict each x_t. This approach, though effective in modeling text, does not explicitly model variance in higher-level properties of entire utterances (e.g. topic or style) and thus can have difficulty with heterogeneous datasets.

<cit.> propose a different approach to generative text modeling inspired by related work on vision <cit.>.
Instead of directly modeling the joint probability p(𝐱) as in Equation <ref>, we specify a generative process for which p(𝐱) is a marginal distribution. Specifically, we first generate a continuous latent vector representation 𝐳 from a multivariate Gaussian prior p_θ(𝐳), and then generate the text sequence 𝐱 from a conditional distribution p_θ(𝐱 | 𝐳) parameterized using a neural net (often called the generation model or decoder). Because this model incorporates a latent variable that modulates the entire generation of each whole utterance, it may be better able to capture high-level sources of variation in the data. Specifically, in contrast with Equation <ref>, this generating distribution conditions on the latent vector representation 𝐳:

p_θ(𝐱 |𝐳) = ∏_t p_θ(x_t | x_1, x_2, ..., x_t-1, 𝐳).

To estimate the model parameters θ we would ideally like to maximize the marginal probability p_θ(𝐱) = ∫ p_θ(𝐳) p_θ(𝐱| 𝐳) d𝐳. However, computing this marginal is intractable for many decoder choices. Thus, the following variational lower bound is often used as an objective <cit.>:

log p_θ(𝐱) = log∫ p_θ(𝐳) p_θ(𝐱| 𝐳) d𝐳 ≥ 𝔼_q_ϕ(𝐳|𝐱) [log p_θ(𝐱|𝐳)] - KL(q_ϕ(𝐳|𝐱) || p_θ(𝐳)).

Here, q_ϕ(𝐳 | 𝐱) is an approximation to the true posterior (often called the recognition model or encoder) and is parameterized by ϕ. As with the decoder, we have a choice of neural architecture to parameterize the encoder. However, unlike the decoder, the choice of encoder does not change the model class: it only changes the variational approximation used in training, which is a function of both the model parameters θ and the approximation parameters ϕ. Training seeks to optimize these parameters jointly using stochastic gradient ascent. A final wrinkle of the training procedure involves a stochastic approximation to the gradients of the variational objective (which is itself intractable). We omit details here, noting only that the final distribution of the posterior approximation q_ϕ(𝐳|𝐱) is typically assumed to be Gaussian so that a re-parametrization trick can be used, and refer readers to <cit.>.

§.§ Training Collapse with Textual VAEs

Together, this combination of generative model and variational inference procedure is often referred to as a variational autoencoder (VAE). We can also view the VAE as a regularized version of the autoencoder. Note, however, that while VAEs are valid probabilistic models whose likelihood can be evaluated on held-out data, autoencoders are not valid probabilistic models. If only the first term of the VAE variational bound 𝔼_q_ϕ(𝐳|𝐱)[log p_θ(𝐱|𝐳)] is used as an objective, the variance of the posterior probability q_ϕ(𝐳|𝐱) will become small and the training procedure reduces to an autoencoder. It is the KL-divergence term, KL(q_ϕ(𝐳|𝐱) || p_θ(𝐳)), that discourages the VAE from memorizing each 𝐱 as a single latent point.

While the KL term is critical for training VAEs, historically, instability when training on text has been evidenced by the KL term becoming vanishingly small, as observed by <cit.>. When the training procedure collapses in this way, the result is an encoder that has duplicated the Gaussian prior (instead of a more interesting posterior), a decoder that completely ignores the latent variable 𝐳, and a learned model that reduces to a simpler language model. We hypothesize that this collapse condition is related to the contextual capacity of the decoder architecture. The choice of encoder and decoder depends on the type of data. For images, these are typically MLPs or CNNs.
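Before continuing, the training objective just described can be summarized in a short sketch. This is our own illustrative code, not the authors' implementation: encode and decode_logprob are hypothetical stand-ins for the encoder and decoder networks, and the prior is the standard Gaussian N(0, I).

```python
import numpy as np

def negative_elbo(x, encode, decode_logprob, rng=np.random.default_rng(0)):
    """One-sample Monte Carlo estimate of the negative variational bound.

    encode(x)            -> (mu, log_var) of the Gaussian posterior q(z|x)
    decode_logprob(x, z) -> log p(x|z), summed over the tokens of x
    """
    mu, log_var = encode(x)
    eps = rng.standard_normal(mu.shape)
    z = mu + np.exp(0.5 * log_var) * eps        # reparametrization trick
    reconstruction = decode_logprob(x, z)       # E_q[log p(x|z)], 1 sample
    # KL(q(z|x) || N(0, I)) in closed form for a diagonal Gaussian
    kl = 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=-1)
    return np.mean(kl - reconstruction)         # minimizing maximizes the ELBO
```

The collapse discussed next corresponds to the kl term being driven to zero while the reconstruction is handled by the decoder alone.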
LSTMs have been used for text, but have resulted in training collapse as discussed above <cit.>. Here, we propose to use a dilated CNN as the decoder instead. At one extreme, when the effective contextual width of a CNN is very large, it resembles the behavior of an LSTM. When the width is very small, it behaves like a bag-of-words model. The architectural flexibility of dilated CNNs allows us to change the contextual capacity and conduct experiments to validate our hypothesis: decoder contextual capacity and effective use of encoding information are directly related. We next describe the details of our decoder.

§.§ Dilated Convolutional Decoders

The typical approach to using CNNs for text generation <cit.> is similar to that used for images <cit.>, but with the convolution applied in one dimension. We take this approach here in defining our decoder.

One dimensional convolution: For a CNN to serve as a decoder for text, generation of x_t must only condition on past tokens x_<t. Applying the traditional convolution would break this assumption and use tokens x_≥t as inputs to predict x_t. In our decoder, we avoid this by simply shifting the input by several slots <cit.>. With a convolution of filter size k and n layers, our effective filter size (the number of past tokens conditioned on in predicting x_t) would be (k-1)× n + 1. Hence, the filter size grows only linearly with the depth of the network.

Dilation: Dilated convolution <cit.> was introduced to greatly increase the effective receptive field size without increasing the computational cost. With dilation d, the convolution is applied so that d-1 inputs are skipped at each step. Causal convolution can be seen as a special case with d=1. With dilation, the effective receptive field size grows exponentially with network depth. In Figure <ref>, we show dilations of size 1 and 2 in the first and second layers, respectively. Suppose the dilation size in the i-th layer is d_i and we use the same filter size k in all layers; then the effective filter size is (k-1)∑_i d_i + 1. The dilations are typically set to double every layer, d_i+1 = 2d_i, so the effective receptive field size can grow exponentially. Hence, the contextual capacity of a CNN can be controlled across a greater range by manipulating the filter size, dilation size and network depth. We use this approach in experiments.

Residual connection: We use residual connections <cit.> in the decoder to speed up convergence and enable training of deeper models. We use a residual block (figure omitted) similar to that of <cit.>. We use three convolutional layers with filter sizes 1×1, 1×k and 1×1, respectively, and ReLU activations between the convolutional layers.

Overall architecture: Our VAE architecture is shown in Figure <ref>. We use an LSTM as the encoder to get the posterior probability q(𝐳|𝐱), which we assume to be a diagonal Gaussian. We parametrize the mean μ and variance σ with the LSTM output. We sample 𝐳 from q(𝐳|𝐱); the decoder is conditioned on the sample by concatenating 𝐳 with every word embedding of the decoder input.
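A sketch of one such residual block with a causal dilated convolution, assuming PyTorch: left-padding by (k-1)·d time steps keeps generation of x_t conditioned only on past tokens, and the 1×1, 1×k, 1×1 structure with ReLUs mirrors the block described above. The helper reproduces the effective-filter-size formula (k-1)∑_i d_i + 1; module and argument names are ours.

import torch
import torch.nn as nn
import torch.nn.functional as F

def effective_filter_size(k, dilations):
    return (k - 1) * sum(dilations) + 1      # e.g. k=3, [1, 2, 4] gives 15

class CausalResBlock(nn.Module):
    def __init__(self, channels=1024, internal=512, k=3, dilation=1):
        super().__init__()
        self.pad = (k - 1) * dilation        # left-pad so x_t never sees x_>=t
        self.conv_in = nn.Conv1d(channels, internal, 1)
        self.conv_mid = nn.Conv1d(internal, internal, k, dilation=dilation)
        self.conv_out = nn.Conv1d(internal, channels, 1)

    def forward(self, x):                    # x: (batch, channels, time)
        h = F.relu(self.conv_in(x))
        h = F.pad(h, (self.pad, 0))          # pad only on the left (the past)
        h = F.relu(self.conv_mid(h))
        return x + self.conv_out(h)          # residual connection

For instance, with k=3 and a doubling dilation schedule, stacking such blocks grows the effective filter size exponentially with depth, which is what lets us sweep contextual capacity over a wide range.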
Given the labeled set (𝐱, 𝐲) ∼ D_L and the unlabeled set 𝐱 ∼ D_U, <cit.> proposed a model whose latent representation contains a continuous vector 𝐳 and a discrete label 𝐲:

p(𝐱, 𝐲, 𝐳) = p(𝐲) p(𝐳) p(𝐱|𝐲, 𝐳).

The semi-supervised VAE fits a discriminative network q(𝐲 | 𝐱), an inference network q(𝐳 | 𝐱, 𝐲) and a generative network p(𝐱|𝐲,𝐳) jointly as part of optimizing a variational lower bound similar to that of the basic VAE. For labeled data (𝐱, 𝐲), this bound is:

log p(𝐱, 𝐲) ≥ 𝔼_q(𝐳|𝐱, 𝐲)[log p(𝐱 | 𝐲, 𝐳)] - KL(q(𝐳|𝐱, 𝐲) || p(𝐳)) + log p(𝐲) = L(𝐱, 𝐲) + log p(𝐲).

For unlabeled data 𝐱, the label is treated as a latent variable, yielding:

log p(𝐱) ≥ U(𝐱) = 𝔼_q(𝐲|𝐱)[𝔼_q(𝐳|𝐱, 𝐲)[log p(𝐱 | 𝐲, 𝐳)] - KL(q(𝐳|𝐱, 𝐲) || p(𝐳)) + log p(𝐲) - log q(𝐲|𝐱)] = ∑_𝐲 q(𝐲|𝐱) L(𝐱, 𝐲) - KL(q(𝐲|𝐱) || p(𝐲)).

Combining the labeled and unlabeled data terms, we have the overall objective as:

J = 𝔼_(𝐱, 𝐲) ∼ D_L[L(𝐱, 𝐲)] + 𝔼_𝐱 ∼ D_U[U(𝐱)] + α 𝔼_(𝐱, 𝐲) ∼ D_L[log q(𝐲|𝐱)],

where α controls the trade-off between the generative and discriminative terms.

Gumbel-softmax: <cit.> propose a continuous approximation to sampling from a categorical distribution. Let u be a categorical distribution with probabilities π_1, π_2, ..., π_c. Samples from u can be approximated using:

y_i = exp((log(π_i) + g_i) / τ) / ∑_j=1^c exp((log(π_j) + g_j)/τ),

where g_i follows Gumbel(0, 1). The approximation is accurate when τ → 0 and smooth when τ > 0. In experiments, we use Gumbel-softmax to approximate the samples from q(𝐲|𝐱) to reduce the computational cost. As a result, we can directly backpropagate the gradients of U(𝐱) to the discriminator network. We anneal τ so that the sample variance is small when training starts, and then gradually decrease τ.

Unsupervised clustering: In this section we adapt the same framework for unsupervised clustering. We directly minimize the objective U(𝐱), which consists of two parts: a reconstruction loss and a KL regularization on q(𝐲|𝐱). The first part encourages the model to assign 𝐱 to a label 𝐲 such that the reconstruction loss is low. We find that the model can easily get stuck in two local optima: either the KL term is very small and q(𝐲 | 𝐱) is close to the uniform distribution, or the KL term is very large and all samples collapse to one class. In order to make the model more robust, we modify the KL term as:

KL_𝐲 = max(γ, KL(q(𝐲|𝐱) || p(𝐲))).

That is, we only minimize the KL term when it is large enough.

§ EXPERIMENTS

§.§ Data sets

Since we would like to investigate VAEs for language modeling and semi-supervised classification, the data sets should be suitable for both purposes. We use two large-scale document classification data sets: Yahoo Answer and Yelp15 review, representing topic classification and sentiment classification data sets, respectively <cit.>. The original data sets contain millions of samples, of which we sample 100k as training and 10k as validation and test from the respective partitions. The detailed statistics of both data sets are in Table <ref>. Yahoo Answer contains 10 topics, including Society & Culture, Science & Mathematics, etc. Yelp15 contains 5 levels of rating, with a higher rating being better.

§.§ Model configurations and Training details

We use an LSTM as an encoder for VAE and explore LSTMs and CNNs as decoders. For CNNs, we explore several different configurations. We set the convolution filter size to be 3 and gradually increase the depth and dilation from [1, 2, 4] and [1, 2, 4, 8, 16] to [1, 2, 4, 8, 16, 1, 2, 4, 8, 16]. These represent small, medium and large models, which we name SCNN, MCNN and LCNN.
We also explore a very large model with dilations [1, 2, 4, 8, 16, 1, 2, 4, 8, 16, 1, 2, 4, 8, 16] and name it VLCNN. The effective filter sizes are 15, 63, 125 and 187, respectively. We use the last hidden state of the encoder LSTM and feed it through an MLP to get the mean and variance of q(𝐳|𝐱), from which we sample 𝐳 and then feed it through an MLP to get the starting state of the decoder. For the LSTM decoder, we follow <cit.>, using it as the initial state of the LSTM and feeding it to every step of the LSTM. For the CNN decoder, we concatenate it with the word embedding of every decoder input.

The architecture of the semi-supervised VAE basically follows that of the VAE. We feed the last hidden state of the encoder LSTM through a two-layer MLP and then a softmax to get q(𝐲|𝐱). We use Gumbel-softmax to sample 𝐲 from q(𝐲|𝐱). We then concatenate 𝐲 with the last hidden state of the encoder LSTM and feed them through an MLP to get the mean and variance of q(𝐳|𝐲,𝐱). 𝐲 and 𝐳 together are used as the starting state of the decoder.

We use a vocabulary size of 20k for both data sets and set the word embedding dimension to be 512. The LSTM dimension is 1024. The number of channels for convolutions in CNN decoders is 512 internally and 1024 externally, as shown in Section <ref>. We select the dimension of 𝐳 from [32, 64]. We find our model is not sensitive to this parameter.

We use Adam <cit.> to optimize all models; the learning rate is selected from [2e-3, 1e-3, 7.5e-4] and β_1 from [0.5, 0.9]. Empirically, we find learning rate 1e-3 and β_1=0.5 to perform best. We select the dropout ratio of the LSTMs (both encoder and decoder) from [0.3, 0.5]. Following <cit.>, we also use word dropout for the LSTM decoder; the drop-word ratio is selected from [0, 0.3, 0.5, 0.7]. For the CNN decoder, we use a dropout ratio of 0.1 at each layer. We do not use word dropout for CNN decoders. We use a batch size of 32 and all models are trained for 40 epochs. We begin halving the learning rate every 2 epochs after epoch 30. Following <cit.>, we use a KL-cost annealing strategy. We set the initial weight of the KL cost term to 0.01 and increase it linearly until a given iteration T. We treat T as a hyperparameter and select it from [10k, 40k, 80k].

§.§ Language modeling results

The results for language modeling are shown in Table <ref>. We report the negative log likelihood (NLL) and perplexity (PPL) of the test set. For the NLL of VAEs, we decompose it into reconstruction loss and KL divergence and report the KL divergence in parentheses. To better visualize these results, we plot the results of the Yahoo data set (Table <ref>) in Figure <ref>.

We first look at the LM results for the Yahoo data set. As we gradually increase the effective filter size of the CNN from SCNN and MCNN to LCNN, the NLL decreases from 345.3 and 338.3 to 335.4. The NLL of LCNN-LM is very close to that of LSTM-LM (334.9). However, VLCNN-LM is slightly worse than LCNN-LM, which indicates some over-fitting.

We can see that LSTM-VAE is worse than LSTM-LM in terms of NLL and the KL term is nearly zero, which verifies the finding of <cit.>. When we use CNNs as the decoders for VAEs, we see improvement over pure CNN LMs. For SCNN, MCNN and LCNN, the VAE results improve over the LM results from 345.3 to 337.8, 338.3 to 336.2, and 335.4 to 333.9, respectively. The improvement is large for small models and gradually decreases as we increase the decoder's contextual capacity.
When the model is as large as VLCNN, the improvement diminishes and the VAE result is almost the same as the LM result. This is also reflected in the KL term: SCNN-VAE has the largest KL of 13.3 and VLCNN-VAE the smallest KL of 0.7. When LCNN is used as the decoder, we obtain an optimal trade-off between using contextual information and the latent representation. LCNN-VAE achieves an NLL of 333.9, which improves over LSTM-LM with an NLL of 334.9.

We find that if we initialize the parameters of the LSTM encoder with the parameters of an LSTM language model, we can improve the VAE results further. This indicates that a better encoder model is also a key factor for VAEs to work well. Combined with encoder initialization, LCNN-VAE improves over LSTM-LM from 334.9 to 332.1 in NLL and from 66.2 to 63.9 in PPL. Similar results for the sentiment data set are shown in Table <ref>. LCNN-VAE improves over LSTM-LM from 362.7 to 359.1 in NLL and from 42.6 to 41.1 in PPL.

Latent representation visualization: In order to visualize the latent representation, we set the dimension of 𝐳 to be 2 and plot the mean of the posterior probability q(𝐳|𝐱), as shown in Figure <ref>. We can see distinctly different characteristics for the topic and sentiment representations. In Figure <ref>, we can see that documents of different topics fall into different clusters, while in Figure <ref>, documents of different ratings form a continuum: they lie continuously along the x-axis as the review rating increases.

§.§ Semi-supervised VAE results

Motivated by the success of VAEs for language modeling, we continue to explore VAEs for semi-supervised learning. Following <cit.>, we set the number of labeled samples to 100, 500, 1000 and 2000, respectively.

Ablation Study: First, we explore the effect of different decoders for semi-supervised classification. We fix the number of labeled samples to 500 and report both classification accuracy and NLL of the test set of the Yahoo data set in Table <ref>. We can see that SCNN-VAE-Semi has the best classification accuracy of 65.5. The accuracy decreases as we gradually increase the decoder's contextual capacity. On the other hand, LCNN-VAE-Semi has the best NLL result. This trade-off between classification accuracy and NLL once again supports our conjecture: with a small contextual window size, the decoder is forced to use the encoder information, hence the latent representation is better learned.

Comparing the NLL results of Table <ref> with those of Table <ref>, we can see that the NLL improves. The NLL of the semi-supervised VAE improves over the simple VAE from 337.8 to 335.7 for SCNN, from 336.2 to 332.8 for MCNN, and from 333.9 to 332.8 for LCNN. The improvement mainly comes from the KL-divergence part, indicating that better latent representations decrease the KL divergence and further improve the VAE results.

Comparison with related methods: We compare the semi-supervised VAE with the methods from <cit.>, which represent the previous state of the art for semi-supervised sequence learning. <cit.> pre-trains a classifier by initializing its parameters with those of a language model or a sequence autoencoder, and finds that this improves classification accuracy significantly. Since SCNN-VAE-Semi performs the best according to Table <ref>, we fix the decoder to be SCNN in this part. The detailed comparison is in Table <ref>. We can see that the semi-supervised VAE performs better than LM-LSTM and LA-LSTM from <cit.>.
We also initialize the encoder of the VAE with parameters from the LM and find that classification accuracy improves further. We also see that the advantage of SCNN-VAE-Semi over LM-LSTM is greater when the number of labeled samples is smaller. The advantage decreases as we increase the number of labeled samples. When we set the number of labeled samples to 25k, SCNN-VAE-Semi achieves an accuracy of 70.4, similar to LM-LSTM's accuracy of 70.5. Also, SCNN-VAE-Semi performs better on the Yahoo data set than on the Yelp data set. For Yelp, SCNN-VAE-Semi is slightly worse than LM-LSTM if the number of labeled samples is greater than 100, but becomes better when we initialize the encoder. Figure <ref> explains this observation: it shows that the documents are coupled together and are harder to classify. Also, the latent representation contains information other than sentiment, which may not be useful for classification.

§.§ Unsupervised clustering results

We also explored using the same framework for unsupervised clustering. We compare with baselines that extract features with existing models and then run a Gaussian Mixture Model (GMM) on these features. We find empirically that simply using the features does not perform well since the features are high dimensional, so we run PCA on these features; the PCA dimension is selected from [8, 16, 32]. Since GMM can easily get stuck in poor local optima, we run each model ten times and report the best result. We find that directly optimizing U(𝐱) does not perform well for unsupervised clustering and that we need to initialize the encoder with an LSTM language model. The model only works well for the Yahoo data set, potentially because, as Figure <ref> shows, the sentiment latent representations do not fall into clusters. γ in Equation <ref> is a sensitive parameter; we select it from the range 0.5 to 1.5 with an interval of 0.1. We use the following evaluation protocol <cit.>: after we finish training, for cluster i we find the validation sample 𝐱_n from cluster i that has the best q(y_i|𝐱) and assign the label of 𝐱_n to all samples in cluster i. We then compute the test accuracy based on this assignment. The detailed results are in Table <ref>. We can see that SCNN-VAE-Unsup + init performs better than the other baselines. LSTM+GMM performs very badly, probably because the feature dimension of 1024 is too high for GMM, even though we use PCA to reduce the dimension.

Conditional text generation: With the semi-supervised VAE, we are able to generate text conditioned on the label. Due to space limitations, we show only one example of reviews generated conditioned on the review rating in Table <ref>. For each group of generated text, we fix 𝐳 and vary the label 𝐲, while picking 𝐱 via beam search with a beam size of 10.

§ RELATED WORK

Variational inference via the re-parameterization trick was initially proposed by <cit.> and, since then, the VAE has been widely adopted as a generative model for images <cit.>.

Our work is in line with previous works combining variational inference with text modeling <cit.>. <cit.> is the first work to combine a VAE with a language model; they use an LSTM as the decoder and report some negative results. On the other hand, <cit.> models text as a bag of words; though improvements were found, the model cannot be used to generate text. Our work fills the gap between them.
<cit.> applies variational inference to dialogue modeling and machine translation and finds some improvement in terms of generated text quality, but no language modeling results are reported. <cit.> embeds variational units in every step of an RNN, which differs from our model's use of global latent variables to learn high-level features.

Our use of a CNN as decoder is inspired by the recent success of the PixelCNN model for images <cit.>, WaveNet for audio <cit.>, the Video Pixel Network for video modeling <cit.> and ByteNet for machine translation <cit.>. But in contrast to those works, which show that a very deep architecture leads to better performance, the CNN decoder is used in our model to control the contextual capacity, leading to better performance.

Our work is closely related to the recently proposed variational lossy autoencoder <cit.>, which is used to predict image pixels. They find that conditioning on a smaller window of pixels leads to better results with a VAE, which is similar to our finding. Much work <cit.> has been done to come up with more powerful prior/posterior distribution representations, with techniques such as normalizing flows. We treat this as future work. This line of work is largely orthogonal to ours and could potentially be combined with a more effective choice of decoder to yield additional gains.

There is much previous work exploring unsupervised sentence encodings, for example skip-thought vectors <cit.>, paragraph vectors <cit.>, and sequence autoencoders <cit.>. <cit.> applies a pretrained model to semi-supervised classification and finds significant gains; we use this as the baseline for our semi-supervised VAE.

§ CONCLUSION

We showed that by controlling the decoder's contextual capacity in a VAE, we can improve performance on both language modeling and semi-supervised classification tasks by preventing a degenerate collapse of the training procedure. These results indicate that more carefully characterizing decoder capacity and understanding how it relates to common variational training procedures may represent important avenues for unlocking future unsupervised problems.
In this article, we use Harrison cohomology to provide a framework for commutative deformations. In particular, Kontsevich's result that formality of (the Hochschild complex of) an associative algebra implies its deformability is adapted for commutative algebras, with the Harrison complex.

Keywords: formality, Harrison cohomology, commutative deformations, eulerian idempotents
2010 AMS Subject Classification: 13D03, 13D10, 16T10

§ INTRODUCTION

Kontsevich showed in <cit.> the existence of an associative deformation quantization for the general case of smooth Poisson manifolds. He deduced this result from his general "formality statement". Endowed with the Gerstenhaber bracket, the continuous Hochschild complex of the algebra A = 𝒞^∞(M) of smooth functions over a Poisson manifold admits a graded Lie algebraic structure, which controls the deformations of the associative commutative algebra A. Kontsevich shows that this complex is linked with its cohomology, which therefore controls the same deformations, by an L_∞-quasi-isomorphism, called a formality map.

Considering formality, the case of smooth manifolds is thus rather well understood using continuous Hochschild cohomology, and it is this tool which gives a lot of information about deformability (obstructions, rigidity, …). Moreover, if the Hochschild complex of an associative algebra is formal in Kontsevich's sense, this algebra admits a quantization by deformation, but the converse does not hold, for example in the case of free algebras, see <cit.>.

Since formality methods work well to give complete answers to the deformation quantization problem in the regular case (both 𝒞^∞ and algebraic), it seems interesting, as proposed by Frønsdal and Kontsevich in <cit.>, to look at the deformation quantization problem for more general singular Poisson manifolds. The main problem is the fact that the HKR result of a "simple" Hochschild cohomology of the algebra of functions, generated by derivations, no longer holds; for example, there may be non-trivial 2-cocycles which are symmetric. These symmetric cocycles are infinitesimal commutative deformations of the algebra of functions. In order to systematically investigate commutative associative algebras, Harrison (<cit.>) described combinatorially the "commutative component" of the Hochschild complex, and proved that its cohomology is reduced to derivations if and only if the algebra is "regular". The main goal of this work is to adapt the result that formality implies deformation to the case of a commutative algebra, replacing the Hochschild complex by the Harrison complex.

I am grateful to Prof. Bordemann for his help and useful remarks.

In <ref> we recall Hochschild and Harrison (co)homology. <ref> and <ref> introduce tools coming from Hopf algebra theory: (co)freeness, convolution products, eulerian idempotents. This gives two descriptions of the Harrison complex, providing a short proof of a result of Barr. Finally, <ref> presents commutative deformations, and the aforementioned result <ref>.

Let 𝕜 be a field containing the rationals.

§ HOCHSCHILD AND HARRISON (CO)HOMOLOGY

Let A be a commutative 𝕜-algebra, and consider its Hochschild complex C_∙(A) with C_n(A) = A ⊗ A^⊗n.
Loday recalls in <cit.> the action of the symmetric group 𝔖_n on C_n(A),

𝔖_n ↷ C_n(A) → C_n(A), σ.(a_0,a_1,…,a_n) = (a_0,a_σ^-1(1),…,a_σ^-1(n)),

as well as the shuffle products

sh_p,q : C_p(A) × C_q(A) → C_p+q(A), (a_0,a_1,…,a_p) ∙ (a'_0,a_p+1,…,a_p+q) = ∑_σ∈Sh_p,q sgn(σ) σ.(a_0 a'_0,a_1,…,a_p+q),

where Sh_p,q are the (p,q)-shuffles, elements σ of 𝔖_p+q such that σ(1)<…<σ(p) and σ(p+1)<…<σ(p+q); and he also defines the shuffle map

sh = ∑_{p+q = n, p⩾1, q⩾1} sh_p,q : C_n(A) → C_n(A)

as the action of the element

sh = ∑_{p+q = n, p⩾1, q⩾1} ∑_σ∈Sh_p,q sgn(σ) σ ∈ 𝕜[𝔖_n].

Endowed with the shuffle product (often noted ∙), the Hochschild complex is a commutative differential graded algebra augmented over A. Let I = ⊕_n>0 C_n(A) be the augmentation ideal. The quotient CHarr_∙(A) = C_∙(A)/I^∙2 is a well-defined complex, since the Hochschild boundary map is a graded derivation for the shuffle product.

For any A-module M, Hochschild homology and cohomology are given by HH_∙(A,M) = H(C_∙(A) ⊗_A M) and HH^∙(A,M) = H(Hom_A(C_∙(A),M)). Harrison homology and cohomology are defined as Harr_∙(A,M) = H(CHarr_∙(A) ⊗_A M) and Harr^∙(A,M) = H(Hom_A(CHarr_∙(A),M)).

Barr already proved in <cit.> that there are maps HH_∙(A,M) ↠ Harr_∙(A,M) and Harr^∙(A,M) ↪ HH^∙(A,M).

§ TENSORIAL BIALGEBRAS

Let V be a 𝕜-vector space. The tensorial module over V is given by T V = ⊕_n∈ℕ V^⊗n.

§.§ Freeness and cofreeness

Endowed with the multiplication μ of concatenation, (T V, μ, 1) is the free associative algebra over V, characterized (up to isomorphism) by the universal property that each morphism ϕ from V to an associative algebra (A,μ_A) factors through T V as ϕ = ϕ̂ ∘ ι_V, with ι_V : V → T V the inclusion and ϕ̂ an algebra morphism. Endowed with the comultiplication Δ of deconcatenation, (T V, Δ, ε) is the cofree coassociative conilpotent coalgebra over V, characterized (up to isomorphism) by the universal property that each morphism ϕ from a coaugmented conilpotent coalgebra (C,Δ_C) to V, satisfying ϕ(1)=0, factors through T V as ϕ = pr_V ∘ ϕ̂, with pr_V : T V → V the projection and ϕ̂ a coalgebra morphism.

Likewise, for any linear map d : V → A, there exists a unique graded derivation along ϕ̂, noted d̂ : T V → A, such that d̂|_V = d; and for any linear map d : C^+ → V (with C^+ = ker ε_C), there exists a unique graded coderivation along ϕ̂, noted d̂ : C → T V, such that pr_V ∘ d̂ = d. (The corresponding commutative diagrams are omitted.) These maps satisfy

d̂ ∘ μ = μ_A ∘ (d̂ ⊗ ϕ̂ + ϕ̂ ⊗ d̂), Δ ∘ d̂ = (d̂ ⊗ ϕ̂ + ϕ̂ ⊗ d̂) ∘ Δ_C.

More details on these structures can be found in <cit.>. The emphasis is put on the following formulas using convolution products. For both the algebra and coalgebra setting the formulas are the same; only the convolution product changes. For more detailed proofs, see <cit.>.

The algebra morphism ϕ̂ induced by ϕ : V → A is computed as ϕ̂ = ∑_n∈ℕ ϕ^⋆n, the geometric series using the convolution product ⋆ with respect to the multiplication μ_A and the comultiplication of deconcatenation Δ.
The derivation d̂ along ϕ̂ induced by d and ϕ can be computed as d̂ = ϕ̂ ⋆ d ⋆ ϕ̂. The coalgebra morphism ϕ̂ coinduced by ϕ : C^+ → V is computed as ϕ̂ = ∑_n∈ℕ ϕ^⋆n, the geometric series using the convolution product ⋆ with respect to the multiplication of concatenation μ and the comultiplication Δ_C. The coderivation d̂ along ϕ̂ coinduced by d and ϕ can be computed as d̂ = ϕ̂ ⋆ d ⋆ ϕ̂.

Moreover, (T V, μ, Δ_sh, 1, ε) is a bialgebra, Δ_sh being the morphism of associative algebras Δ_sh : (T V, μ) → (T V ⊗ T V, μ^[2]) induced by ι_V ⊗ 1 + 1 ⊗ ι_V. Also, (T V, μ_sh, Δ, 1, ε) is a bialgebra, μ_sh being the morphism of coassociative coalgebras μ_sh : (T V ⊗ T V, Δ^[2]) → (T V, Δ) coinduced by pr_V ⊗ ε + ε ⊗ pr_V. The shuffle product can also be seen as the commutative product resulting on the quotient S V = T V/(x ⊗ y - (-1)^|x||y| y ⊗ x, x,y ∈ T V). Since Δ_sh is cocommutative, it factors through the quotient, and thus (S V, μ_sh, Δ_sh, 1, ε) is also a bialgebra.

§.§ Toolbox on operations

In this section, we present some relations between product, composition, convolution and counit which will be used later. Let α : T V → T V (or α_i) be a linear map. Since T V ⊗ 𝕜 ≅ T V, the maps μ_sh ∘ (α ⊗ ε) and α ⊗ ε coincide as maps T V ⊗ T V → T V, since both send a ⊗ b ↦ α(a)ε(b). Let ϕ : T V ⊗ T V → T V be a coalgebra morphism, meaning that Δ ∘ ϕ = (ϕ ⊗ ϕ) ∘ Δ^[2], with Δ^[2] = (id ⊗ τ ⊗ id) ∘ (Δ ⊗ Δ), that is Δ^[2] = perm ∘ (Δ ⊗ Δ), the coproduct on each factor followed by a permutation so that the morphism is applied to the right elements. We have

(α_1 ⋆ α_2) ∘ ϕ = μ_sh ∘ (α_1 ⊗ α_2) ∘ Δ ∘ ϕ = μ_sh ∘ (α_1 ⊗ α_2) ∘ (ϕ ⊗ ϕ) ∘ Δ^[2] = μ_sh ∘ (α_1 ∘ ϕ ⊗ α_2 ∘ ϕ) ∘ Δ^[2] = (α_1 ∘ ϕ) ⋆_2 (α_2 ∘ ϕ),

with the convolution product a ⋆_2 b = μ_sh ∘ (a ⊗ b) ∘ Δ^[2]. The property of being a coalgebra morphism also reads Δ^(n-1) ∘ ϕ = ϕ^⊗n ∘ Δ^[n], where Δ^(n-1) : T V → (T V)^⊗n is the (n-1)-fold iterate of the associative coproduct, and Δ^[n] = perm ∘ (Δ^(n-1) ⊗ Δ^(n-1)). We have

(α_1 ⋆ … ⋆ α_n) ∘ ϕ = (α_1 ∘ ϕ) ⋆_n … ⋆_n (α_n ∘ ϕ), with a_1 ⋆ … ⋆ a_n = μ_sh ∘ (a_1 ⊗ … ⊗ a_n) ∘ Δ^(n-1) and a_1 ⋆_n … ⋆_n a_n = μ_sh ∘ (a_1 ⊗ … ⊗ a_n) ∘ Δ^[n],

and we will refer to these second-kind convolutions collectively as ⋆_n. Taking α_i = α and summing the previous equalities gives e^⋆α ∘ ϕ = e^(α∘ϕ). Using this with ϕ = id ⊗ ε, which indeed is a coalgebra morphism, we obtain

(α_1 ⋆ α_2) ⊗ ε = (α_1 ⋆ α_2) ∘ ϕ = (α_1 ∘ ϕ) ⋆_2 (α_2 ∘ ϕ) = (α_1 ⊗ ε) ⋆_2 (α_2 ⊗ ε),

and thus e^⋆α ⊗ ε = e^(α⊗ε).

§ EULERIAN IDEMPOTENTS

Following Loday and Vallette <cit.> and <cit.>, we define the eulerian idempotents on the commutative Hopf algebra (T V, μ_sh, Δ, 1, ε). We consider its convolution algebra (Hom(T V, T V), ⋆, ε), where the convolution product is a ⋆ b = μ_sh ∘ (a ⊗ b) ∘ Δ. We write id = ε + J, so that J is the identity on V^⊗n except for n = 0, on which it is 0. We define

e^(1) ≔ log^⋆(id) = log^⋆(ε + J) = ∑_n⩾1 (-1)^n+1 J^⋆n/n.

In weight n we get that e^(1) : V^⊗n → V^⊗n is given by e^(1)(x_1 … x_n) = e_n^(1) · (x_1 … x_n) for some uniquely defined element e_n^(1) ∈ 𝕜[𝔖_n]. These elements are called the first eulerian idempotents. For i ⩾ 1, we define

e^(i) ≔ (e^(1))^⋆i/i!.

Loday shows <cit.> that the elements e_n^(i) ∈ 𝕜[𝔖_n] are orthogonal idempotents. In low dimensions, the eulerian idempotents are:

n=1: e_1^(1) = id;
n=2: e_2^(1) = 1/2(id + (1 2)), e_2^(2) = 1/2(id - (1 2));
n=3: e_3^(1) = 1/3 id - 1/6((1 2 3)+(1 3 2)-(1 2)-(2 3)) - 1/3(1 3), e_3^(2) = 1/2(id+(1 3)), e_3^(3) = 1/6(id+(1 2 3)+(1 3 2)-(1 2)-(2 3)-(1 3)).

The first eulerian idempotent e^(1) is a derivation for μ_sh along ε:

e^(1) ∘ μ_sh = μ_sh ∘ (e^(1) ⊗ ε + ε ⊗ e^(1)).
Writing μ_sh = id ∘ μ_sh = e^⋆e^(1) ∘ μ_sh = e^(e^(1)∘μ_sh), we will show that e^(e^(1)⊗ε + ε⊗e^(1)) = e^(e^(1)∘μ_sh), which gives the result since e^(1) ⊗ ε + ε ⊗ e^(1) = μ_sh ∘ (e^(1) ⊗ ε + ε ⊗ e^(1)).

We have e^(A+B) = e^A ⋆_2 e^B, provided A ⋆_2 B = B ⋆_2 A (all exponentials here taken with respect to ⋆_2). Set A = α ⊗ ε and B = ε ⊗ α, with α a linear map. Let a, b ∈ T V; we use the Sweedler notation Δ(a) = ∑_(a) a^(1) ⊗ a^(2). Note that |a| = |a^(1)| + |a^(2)| for any elements a^(1), a^(2) of the sum. We have

((α⊗ε) ⋆_2 (ε⊗α))(a ⊗ b) = μ_sh ∘ (α⊗ε⊗ε⊗α) ∘ Δ^[2](a ⊗ b) = μ_sh(∑_(a),(b) (-1)^|a^(2)||b^(1)| α(a^(1)) ε(b^(1)) ⊗ ε(a^(2)) α(b^(2))) = μ_sh(∑_(a),(b) α(a^(1)ε(a^(2))) ⊗ α(b^(2)ε(b^(1)))) = α(a) ∙ α(b),

since terms with elements a^(2) or b^(1) of degree different from zero are killed by the counit; and since |a^(2)| = |a| - |a^(1)| and |b^(1)| = |b| - |b^(2)|,

((ε⊗α) ⋆_2 (α⊗ε))(a ⊗ b) = μ_sh ∘ (ε⊗α⊗α⊗ε) ∘ Δ^[2](a ⊗ b) = μ_sh(∑_(a),(b) (-1)^|a^(2)||b^(1)| ε(a^(1)) α(b^(1)) ⊗ α(a^(2)) ε(b^(2))) = (-1)^|a||b| α(b) ∙ α(a) = α(a) ∙ α(b).

Taking α = e^(1), we thus have

e^(e^(1)⊗ε + ε⊗e^(1)) = e^(e^(1)⊗ε) ⋆_2 e^(ε⊗e^(1)) = (e^⋆e^(1) ⊗ ε) ⋆_2 (ε ⊗ e^⋆e^(1)) = (id ⊗ ε) ⋆_2 (ε ⊗ id) = μ_sh ∘ (id ⊗ ε ⊗ ε ⊗ id) ∘ Δ^[2] = μ_sh ∘ (id ⊗ id) = μ_sh = e^(e^(1)∘μ_sh).

This proposition implies the equivalence of the two original definitions of Harrison (co)homology of a commutative algebra A as given by Harrison <cit.> and Barr <cit.>. The complexes CHarr_∙(A) = C_∙(A)/I^∙2 and e^(1)C_∙(A) are isomorphic.

Let a, b ∈ C_∙(A). Since e^(1) is a derivation of μ_sh along ε, we have

e^(1)(a ∙ b) = e^(1)(a) ε(b) + ε(a) e^(1)(b).

If a, b ∈ I are elements of the augmentation ideal, then ε(a) = 0 = ε(b), thus e^(1)(a ∙ b) = 0, hence I ∙ I ⊂ ker(e^(1)). Also (Sweedler summations implied),

e^(1)(a) = ∑_n⩾1 (-1)^n+1 id^⋆n/n (a) = a - 1/2 a^(1) ∙ a^(2) + 1/3 a^(1) ∙ a^(2) ∙ a^(3) - …,

so a - e^(1)(a) ∈ I ∙ I, hence im(id - e^(1)) = ker(e^(1)) ⊂ I ∙ I. So we have the decomposition

C_∙(A) = (id - e^(1))C_∙(A) ⊕ e^(1)C_∙(A) = I^∙2 ⊕ e^(1)C_∙(A),

which gives the result.

Barr's proof of <cit.> consists in a construction by induction of a sequence e_n ∈ 𝕜[𝔖_n] of idempotent maps commuting with the Hochschild boundary map and leaving the shuffle products sh_p,q invariant. Here the proof uses the property of the whole map e^(1) of being a graded derivation. Note that e^(1)_n = id - e_n in Barr's notation. In particular, this gives two descriptions of Harrison cochains:

CHarr^∙(A,M) = Hom(C_∙(A)/I^∙2, M) = Hom(e^(1)C_∙(A), M);

they can be viewed as maps A^⊗∙ → M that cancel on shuffles, or that are invariant by the first eulerian idempotent. The second Harrison module CHarr^2(A,M) = {f : A^⊗2 → M, f(a,b)=f(b,a)} consists of symmetric maps.

§ COMMUTATIVE DEFORMATIONS

Let (A,μ_0) be a commutative 𝕜-algebra. In <cit.>, Frønsdal defines commutative deformations. A formal, abelian *-product on A is a commutative, associative product on the space A[[λ]] of formal power series in the formal parameter λ with coefficients in A, given by formal series

f * g = ∑_n∈ℕ λ^n μ_n(f,g).

Associativity for * is the condition (f * g) * h = f * (g * h), or equivalently A_n(f,g,h) = 0 for all n ∈ ℕ, where A_n is the associator of order n for *:

A_n(f,g,h) ≔ ∑_k=0^n (μ_k(μ_n-k(f,g),h) - μ_k(f,μ_n-k(g,h))).

For any product μ, its associator A(a,b,c) ≔ μ(μ(a,b),c) - μ(a,μ(b,c)) satisfies

0 = [μ,A]_G(a,b,c,d) = μ(A(a,b,c),d) + μ(a,A(b,c,d)) - A(μ(a,b),c,d) + A(a,μ(b,c),d) - A(a,b,μ(c,d)),

with [ , ]_G the Gerstenhaber bracket. Let A = ∑_n∈ℕ λ^n A_n be the associator of a *-product *. Suppose that * is associative to order r ⩾ 1, i.e. A_0 = … = A_r = 0. Equation (<ref>) at order λ^r+1 reads 0 = (δ A_r+1)(a,b,c,d), with δ the Hochschild coboundary, hence A_r+1 is a Hochschild 3-cocycle.
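To make explicit the splitting used in the next sentence, one can separate the k=0 and k=r+1 terms of the order-(r+1) associator (a small verification, using μ_0(a,b) = ab):

A_r+1(f,g,h) = [μ_r+1(f,g)h - fμ_r+1(g,h) + μ_r+1(fg,h) - μ_r+1(f,gh)] + ∑_k=1^r (μ_k(μ_r+1-k(f,g),h) - μ_k(f,μ_r+1-k(g,h))),

and the bracketed part is exactly -(δμ_r+1)(f,g,h), while the remaining sum is the term A_r+1' below.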
Moreover, A_r+1 = -δμ_r+1 + A_r+1', where A_r+1' is A_r+1 without the first and last terms in the sum. This shows that A_r+1' is also a 3-cocycle, and A_r+1 = 0 ⇔ A_r+1' = δμ_r+1, so * being associative to order r+1 is equivalent to A_r+1' being a 3-coboundary. This proves that the obstructions to promoting associativity from order r to order r+1 lie in HH^3(A,A). Moreover, if the μ_i, 1 ⩽ i ⩽ r, are symmetric, then a direct computation shows that A_r+1' is invariant under e^(1)_3, so the obstructions to extending a formal abelian *-product to higher orders lie more precisely in Harr^3(A,A).

Barr showed that Harrison cohomology is included in Hochschild cohomology, but this already holds at the level of complexes, as differential graded Lie algebras. The Harrison complex of cochains (CHarr^∙(A,A)[1], δ, [ , ]_G) is a differential graded sub-Lie algebra of the Hochschild complex (C^∙(A,A)[1], δ, [ , ]_G).

We first prove the following lemma; see also <cit.>. Cochains of CHarr^∙(A,A) induce derivations of (T A, μ_sh). Let d : A^⊗k → A be a cochain in CHarr^k(A,A) ⊂ C^k(A,A). It induces d̂, a coderivation of (T A, Δ). We want to show that it is also a derivation for μ_sh:

d̂ ∘ μ_sh = μ_sh ∘ (d̂ ⊗ id + id ⊗ d̂).

Since μ_sh : (T A ⊗ T A, Δ^[2]) → (T A, Δ) is a coalgebra morphism, both sides of the equation are coderivations from (T A ⊗ T A, Δ^[2]) to (T A, Δ) along μ_sh. Projecting onto A, we have on the left-hand side d̂ ∘ μ_sh(a ⊗ b) = d(a ∙ b), and on the right-hand side (pr_A ⊗ ε + ε ⊗ pr_A) ∘ (d̂ ⊗ id + id ⊗ d̂)(a ⊗ b) = (d ⊗ ε + ε ⊗ d)(a ⊗ b) = d(a)ε(b) + ε(a)d(b), because ε ∘ d̂ = 0. But since d vanishes on I^∙2 and ε on I, the two expressions are equal for all a, b ∈ T A. Since the left- and right-hand sides are coderivations along μ_sh having the same projection, they must be equal by uniqueness.

Let f, g ∈ CHarr^∙(A,A)[1]. Using the previous lemma, we have

[f,g]_G ∘ μ_sh = f ∘ ĝ ∘ μ_sh + (-1)^|f||g| g ∘ f̂ ∘ μ_sh = f ∘ μ_sh ∘ (ĝ ⊗ id + id ⊗ ĝ) + (-1)^|f||g| g ∘ μ_sh ∘ (f̂ ⊗ id + id ⊗ f̂),

thus

[f,g]_G(a ∙ b) = f(ĝ(a) ∙ b ± a ∙ ĝ(b)) + (-1)^|f||g| g(f̂(a) ∙ b ± a ∙ f̂(b)) = 0;

hence the vanishing of f and g on I^∙2 implies that of [f,g]_G on I^∙2, so CHarr^∙(A,A)[1] is closed under the Gerstenhaber bracket.

We recall Kontsevich's notion of formality. For better readability, we write C = C^∙(A,A) and H = HH^∙(A,A) for the Hochschild complex and cohomology of A. The complex C is called formal if there is an L_∞-quasi-isomorphism Φ : S(H[2]) → S(C[2]) (a morphism of differential graded coalgebras of degree 0),

(Φ ⊗ Φ) ∘ Δ_S(H[2]) = Δ_S(C[2]) ∘ Φ and (b + D) ∘ Φ = Φ ∘ d,

such that the restriction Φ_1 of Φ to H[2] is a section. The map Φ is called a formality map. Here b ≔ [μ_0, ·]_G is the same as the Hochschild coboundary δ up to a global sign. We recall that the projection of the Gerstenhaber bracket gives a graded Lie bracket [ , ]_s on the shifted cohomology space H[1]. The maps D ≔ [ , ]_G[1] and d ≔ [ , ]_s[1] denote the shifted brackets, which are symmetric; D and d also denote the induced coderivations on S(C[2]) and S(H[2]), respectively.

By extension, we will say that an associative algebra A is formal if this is the case for its Hochschild complex C. For a commutative algebra A, we keep the same definition of formality, but now taking C = CHarr^∙(A,A) and H = Harr^∙(A,A), the Harrison complex and cohomology of A.

(Commutative) formality implies (commutative) deformation.

For associative algebras, the result goes back to Kontsevich <cit.>; with the given framework, it adapts well to commutative algebras. We follow here the presentation of <cit.>. Let π ∈ H^2[[λ]] = (H[2])^0[[λ]]. We want to construct a formal associative (commutative) deformation μ = μ_0 + μ_*, where μ_* ≔ ∑_r=1^∞ λ^r μ_r, such that the cohomology class [μ_1] of μ_1 is equal to π.
A necessary condition for this is

[π,π]_s = 0,

so we suppose the chosen element π satisfies it. Consider S(H[2])[[λ]] and S(C[2])[[λ]] as topological bialgebras (with respect to the λ-adic topology) with the canonical extension of all the structure maps. Note that the tensor product is no longer algebraic, but is completed: (S(H[2]) ⊗ S(H[2]))[[λ]], and similarly for C. For a general graded vector space V it can easily be seen that the group-like elements of S(V)[[λ]] are no longer exclusively given by 1, but by exponential functions of any primitive elements of degree zero; they take the form e^∙λv with v ∈ V^0[[λ]]. The image Φ(e^∙λπ) of the grouplike element e^∙λπ in S(C[2])[[λ]] under the formality map Φ is a grouplike element and thus takes the form e^∙μ_* with μ_* ∈ λC^2[[λ]]. Since [π,π]_s = 0 it follows that d(e^∙λπ) = 0, and therefore (b+D)(e^∙μ_*) = 0. Projecting this last equation onto (C[2])^0[[λ]] = C^2[[λ]], we get the Maurer–Cartan equation

0 = bμ_* + 1/2 [μ_*,μ_*]_G = 1/2 [μ_0 + μ_*, μ_0 + μ_*]_G,

showing the associativity of μ = μ_0 + μ_*. Hence μ ≔ μ_0 + μ_* is a formal associative deformation of the algebra (A,μ_0). In the commutative case with C = CHarr^∙(A,A), μ_* ∈ λCHarr^2[[λ]] is equivalent to the commutativity of the μ_i for i ⩾ 1, so the resulting product μ is commutative.
For many internet businesses, presenting a given list of items in an order that maximizes a certain metric of interest (e.g., click-through rate, average engagement time, etc.) is crucial. We approach the aforementioned task from a learning-to-rank perspective, which reveals a new problem setup. In the traditional learning-to-rank literature, it is implicitly assumed that during the training data generation one has access to the best or desired order for the given list of items. In this work, we consider a problem setup where we do not observe the desired ranking. We present two novel solutions: the first solution is an extension of an already existing listwise learning-to-rank technique, listwise maximum likelihood estimation (ListMLE), while the second one is a generic machine learning based framework that tackles the problem in its entire generality. We discuss several challenges associated with this generic framework, and propose a simple item-payoff and positional-gain model that addresses these challenges. We provide training algorithms and inference procedures, and demonstrate the effectiveness of the two approaches over traditional ListMLE on synthetic data as well as in a real-life setting of ranking news articles for increased dwell time.

§ INTRODUCTION

Recommending items that match users' interests lies at the core of many online businesses and has been an active area of research; over the years, many techniques have been developed for these tasks, including matrix-completion based collaborative filtering <cit.>, factorization machines <cit.>, etc. The central theme of these techniques is that they utilize the historical data of user engagement to predict the user's interest or rating for new items. These items are then presented to the users in decreasing order of the predicted rating/score. It is implicitly assumed that the decreasing order of predicted score is the best order in which to show the items to the users. However, our real-life experience suggests that in many scenarios user satisfaction is driven not just by the quality of items but also by the order in which they are presented to the users. In scenarios where the intention is better long-term user engagement or revenue per user, once the most relevant set of items to be shown to the user is identified, the important task is to show these items in an order that maximizes a particular metric of interest, such as the average time spent by users per session (each session is an ordered list of items), or the total click-through rate, etc.

The problem setup in such scenarios can be abstractly represented as in Figure <ref>, where the list of n items, denoted by a feature matrix 𝐗 = [𝐱_1,⋯, 𝐱_n] ∈ ℝ^d × n (𝐱_i ∈ ℝ^d is the i^th column, the feature vector of the i^th item in the list), is shown to a user in the input order Π ∈ 𝒫_n (where 𝒫_n is the set of all permutations of integers {1,⋯,n}). The user assesses the quality of the items and the input-order pair, (𝐗,Π), and assigns a score s ∈ ℝ^+ that is a measure of the desired metric for (𝐗,Π). This results in a training example {(𝐗,Π), s}. Depending on the specific setting, the user may assign the score explicitly or it may be calculated based on user-interaction statistics, for example, in terms of clicks (or no clicks) or the average user engagement times.
Using the training examples collected in this manner, the ultimate goal is to predict an order for a new unseen list of items so that the score is maximized. A similar-looking problem is the focus of various techniques developed in the learning-to-rank literature, but our problem setup is different, as it violates an implicit assumption prevalent in that literature: the assumption that the best or desired order is provided with the training data. In our problem setup we do not have such data.

Our main contributions are two machine learning based solutions for the proposed problem setup. The first solution, weighted ListMLE, builds upon a popular listwise learning-to-rank technique, ListMLE <cit.>, by incorporating weights proportional to the scores. The second solution is a general machine learning framework in which we address the problem at hand in its entire generality. In it, we first learn a mapping to predict the score for a given list of items and an order. The final order is obtained by maximizing the predicted score. We reveal several challenges associated with this approach and propose a simple item-payoff and positional-gain model that addresses these specific challenges. We also present an alternating-minimization based training algorithm and demonstrate the effectiveness of the proposed techniques on simulated as well as real datasets.

§.§ Related works

The problem setup we consider lies at the nexus of recommendation and ranking systems. It arises in the context of recommendation systems and is motivated by learning-to-rank approaches. However, our problem setup is quite different from these traditional settings. As discussed earlier, typically in recommendation systems the items are presented in decreasing order of the predicted ratings. Recommendation techniques that give preference to diversity <cit.> or the multi-criteria recommender systems <cit.> often deviate from the "decreasing order of predicted rating" ranking of items. But even in these systems, the input order is not explicitly modeled as considered in this paper.

Our problem is also related to the learning-to-rank literature. Traditionally, learning-to-rank problems are motivated from a search-engine perspective, where the task is to show the results in decreasing order of relevance to the user's query. The main goal is to minimize the user's search time. The abstract problem setup arising in the learning-to-rank literature is shown in Figure <ref>, where the user is shown the list of items 𝐗 and assigns relevance scores {s_i}_i=1^n to each item for the given query, providing the best order based on the decreasing order of relevance scores. In this manner, training data comprising lists of items and the final order can be collected. In some cases, however, access to the individual relevance scores is not necessary and the desired order can be inferred using other techniques <cit.>. A variety of machine learning based learning-to-rank algorithms have been developed that use this training data to predict an order for a new list of items. The main challenge in applying machine learning to rank lists of items is the combinatorial nature of the output domain of the mapping. Existing techniques use different ways to deal with this challenge and can be broadly classified into three main categories: pointwise, pairwise, and listwise ranking <cit.>. The pointwise approaches reduce the problem of ranking to regression tasks.
They ignore the combinatorial output domain and focus on predicting the relevance score of each item separately. Some of the important pointwise techniques are proposed in <cit.>, among many others. The pairwise approaches, on the other hand, reduce the learning-to-rank problem to a classification problem by using pairwise comparisons to transform the order into binary labels. A few notable pairwise approaches, among many others, include the support vector machine (SVM) based approach <cit.>, the perceptron based approach <cit.> and the neural-network based approach <cit.>. Listwise approaches take an entire list of items as input and directly tackle the combinatorial nature of the output domain. Due to this, the listwise approaches are known to perform better than the pointwise and pairwise approaches. They are generally based on probabilistic modeling of the various orders for the given list of items. Some notable works in listwise learning-to-rank are <cit.>. The pursuit of minimizing loss functions defined on permutation spaces has led to several listwise learning-to-rank techniques, including LambdaRank <cit.> and several other follow-up works <cit.>.

A distinguishing characteristic of the traditional learning-to-rank problem setup is that the relevance of items to a query is a property of the items and does not change with the order in which the items are shown to the user. This is the main difference between our problem setup and the existing learning-to-rank setup. We have the notion of input order, whereas no such notion exists in the problems discussed in the learning-to-rank literature. Also, in the learning-to-rank literature it is implicitly assumed that during training data generation one has access to the best or desired order for the given list of items. The existing learning-to-rank techniques mainly focus on predicting this order in various ways. In our problem setup we do not have access to the best or desired order for the items in a list. Due to these reasons, traditional learning-to-rank approaches are incapable of handling our problem setting.

§.§ Organization

After a brief discussion of notation in Section <ref>, we present the problem formulation in Section <ref>. In Section <ref> we present our first solution, weighted ListMLE. The second, more general approach is proposed in Section <ref>, and Section <ref> describes the item-payoff and positional-gain model. Section <ref> provides experiments to show the efficacy of the proposed approaches. Finally, Section <ref> concludes the paper with a brief discussion of future directions.

§ NOTATION

Vectors and matrices are denoted by bold-face lowercase and uppercase characters, respectively. A list of size n is represented by the matrix 𝐗 = [𝐱_1, ⋯, 𝐱_n] ∈ ℝ^d × n whose i^th column 𝐱_i ∈ ℝ^d is the feature vector of the i^th item in the list. Vectors of all ones and zeros of size n are denoted by 1_n and 0_n, respectively. The identity matrix of size n × n is denoted by 𝐈_n. The set of all permutations of integers {1,⋯,n} is denoted by 𝒫_n. A particular permutation is denoted by Π = [π_1, ⋯, π_n] ∈ 𝒫_n, where π_i denotes the position at which the i^th item in the list is placed. For example, π_2 = 1 implies that the second item is placed at the first position. For a given matrix 𝐗 ∈ ℝ^d × n and Π ∈ 𝒫_n, 𝐗_Π denotes the matrix whose columns are obtained by re-ordering the columns of 𝐗 as per Π. The function sort[x_1,⋯,x_n] returns the permutation denoting the positions of the x_i's if they were placed in descending order.
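To pin down this convention, a minimal sketch of the sort function just defined (the helper name is ours):

import numpy as np

def sort_perm(x):
    order = np.argsort(-np.asarray(x, dtype=float))  # largest value first
    Pi = np.empty(len(x), dtype=int)
    Pi[order] = np.arange(1, len(x) + 1)             # 1-indexed positions
    return Pi

# sort_perm([0.2, 0.9, 0.5]) returns [3, 1, 2]: the second item ranks first.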
§ PROBLEM FORMULATION

As discussed earlier, using the problem setup shown in Figure <ref>, training data comprising the list of n items 𝐗, the order Π ∈ 𝒫_n in which it is shown, and the user-assigned score s ∈ ℝ^+ can be collected. Note that there is one single score for the entire list of items. We assume there is a probability distribution P_𝐗,Π,s over ℝ^d × n × 𝒫_n × ℝ^+ from which we are given N i.i.d. training examples as follows:

𝒟_N = {(𝐗^(i), Π^(i)), s^(i)}_i=1^N,

where 𝐗^(i) ∈ ℝ^d × n denotes the i^th list of items, Π^(i) ∈ 𝒫_n is the order in which the items were shown to the user, and s^(i) is the corresponding score of the list. The goal is to use the training data to learn an ordering for a new list of items such that it maximizes the score. In light of the available training data, addressing this goal is particularly challenging because we do not have access to the order that maximizes the score. Next we describe two approaches designed to achieve this goal.

§ APPROACH 1: WEIGHTED LISTMLE

The main challenge in addressing the problem of ordering a list of items using the training data 𝒟_N lies in the discrete combinatorial nature of the input and output domains. As discussed earlier, listwise learning-to-rank techniques have effectively addressed this challenge in a related but different setting. Our first approach builds upon an existing popular technique, ListMLE, and extends it to our problem setting. We first briefly describe the ListMLE technique, followed by details of our proposed extension.

§.§ ListMLE

The ListMLE approach is based on modeling the conditional probabilities of various permutations given the list of items <cit.>. Specifically, the conditional probability of a permutation Π ∈ 𝒫_n given the list of items 𝐗 ∈ ℝ^d × n is modeled by the so-called Plackett-Luce model as follows:

P(Π | 𝐗; g) = ∏_j=1^n e^g(𝐱_π_j) / ∑_k=j^n e^g(𝐱_π_k),

where g(·): ℝ^d → ℝ computes the score of each item and 𝐱_π_k denotes the feature vector of the π_k^th item in the list. Using the training data 𝒟_N, ListMLE entails solving the following maximum likelihood problem:

min_g ∑_i=1^N - log( P(Π^(i) | 𝐗^(i); g) ).

Note that in ListMLE, it is assumed that the output permutation Π is the desired permutation and the goal is to learn a mapping from the feature space to this output space. The learned mapping ĝ, the solution of problem (<ref>), is used to predict the order for a new list of items. For a new list of items 𝐗 ∈ ℝ^d × n, first the predicted relevance scores {ĝ(𝐱_j)}_j=1^n are computed. These scores are then used to calculate the probabilities of the various permutations using (<ref>). The inference procedure involves finding the maximum-probability permutation, which can be efficiently implemented, owing to the Plackett-Luce model in (<ref>), by sorting the predicted scores {ĝ(𝐱_j)}_j=1^n. Next, we present our approach, which extends ListMLE to our problem setting.

§.§ Weighted ListMLE

As discussed earlier, our problem setup has a notion of input order. From equation (<ref>), it is clear that ListMLE allows only one order per list, which is assumed to be the best order in some sense. The input order in our approach need not be the best order as required by ListMLE, since we want to figure out which of all the possible permutations of the items corresponds to the best score. We propose weighted ListMLE to address this specific problem setting. Similar to ListMLE, we model the conditional probability of an order Π ∈ 𝒫_n given the input list 𝐗 ∈ ℝ^d × n by the Plackett-Luce model in equation (<ref>).
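Before introducing the weighting, a small sketch of the Plackett-Luce log-probability in (<ref>) with a linear scoring function. We adopt the paper's convention that π_i is the position of item i, so the permutation is inverted to recover the item at each slot; the helper name is ours.

import numpy as np

def log_plackett_luce(X, Pi, w):
    # Pi[i] is the position of item i; argsort recovers the item at each slot.
    order = np.argsort(Pi)
    scores = w @ X[:, order]                 # g(x) = w^T x, in the shown order
    return sum(scores[j] - np.logaddexp.reduce(scores[j:])
               for j in range(len(order)))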
As our aim is to predict the order that maximizes the given metric, we weight the likelihood term for the given list of items and the order in which they were presented by the corresponding score s. Specifically, for the training data 𝒟_N, weighted ListMLE involves solving the following weighted maximum likelihood problem:

min_g ∑_i=1^N - s^(i) log( P(Π^(i) | 𝐗^(i); g) ).

In the above problem, the weights s^(i) bias the learning process such that the orders with higher scores are given higher probabilities. The algorithm for solving the training problem (<ref>) can be shown to be a simple modification of the existing training algorithm for ListMLE proposed in <cit.>, obtained by adding weights to the gradient computation. After learning the scoring function ĝ by solving the problem in (<ref>), it is used to find the permutation Π for a new list 𝐗 as follows:

Π̂(𝐗) = max_Π∈𝒫_n P(Π | 𝐗; ĝ).

Again, owing to the special structure of the Plackett-Luce model, Π̂(𝐗) can be obtained by simply sorting {e^ĝ(𝐱_j)}_j=1^n in descending order as follows:

Π̂(𝐗) = sort[ exp(ĝ(𝐱_1)), ⋯, exp(ĝ(𝐱_n)) ].

Weighted ListMLE reduces to ListMLE if the input order Π^(i) is chosen based on the decreasing order of relevance of the items, i.e., the best order, and the corresponding score is fixed to a constant (say s^(i) = 1 for all i). We extend the existing ListMLE in the sense that the notion of input order can be accommodated. Weighted ListMLE can be construed as an attempt to extend existing learning-to-rank methodology to our setting while keeping the essential characteristics of ListMLE intact. Next, we present a more direct approach that handles our problem setup in more generality.

§ APPROACH 2: A MACHINE LEARNING BASED FRAMEWORK

Our ultimate goal is to learn a mapping that maximizes the score for the given list of items. Using machine learning techniques to learn such a mapping would require training data in terms of lists of items and the corresponding score-maximizing orders, to which existing learning-to-rank techniques could then be applied. But the training data in our problem setup does not have this form, which makes developing a machine learning approach to solve this problem challenging. However, using the training data 𝒟_N in the form it is available to us, it is possible to learn a mapping from (ℝ^d × n, 𝒫_n) to ℝ^+, because the training data can be considered as noisy observations of such a mapping. Accordingly, we follow a two-step approach: (1) learn a mapping that predicts the score for a given list of items and order, (2) use the learned mapping to obtain the final order by maximizing the predicted score.

We learn the mapping f: (ℝ^d × n, 𝒫_n) → ℝ^+ by solving the following empirical risk minimization problem:

f̂_𝒟_N = min_f∈ℱ ∑_i=1^N (s^(i) - f(𝐗^(i), Π^(i)))^2,

where ℱ is a set of functions defined from (ℝ^d × n, 𝒫_n) to ℝ^+. For a new list of items 𝐗 ∈ ℝ^d × n, we infer the score-maximizing ordering using the learned f̂_𝒟_N in (<ref>) as follows:

Π̂(𝐗) = max_Π∈𝒫_n f̂_𝒟_N(𝐗, Π).

The above approach draws a parallel with multi-class classification problems, where the training data is first used to accurately predict the probability of the various classes, and the classifier's output is obtained by maximizing the predicted probability. Here, we first use the training data to fit a function that accurately predicts the score for a given list of items and input order, and then use the learned mapping to predict the final order by maximizing the predicted score.
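For a generic f̂, this second inference step is a brute-force search over permutations. The following sketch makes that explicit and is feasible only for very small n, which motivates the structured function class introduced next; the helper name is ours.

import itertools
import numpy as np

def brute_force_order(X, f_hat):
    # Enumerate all n! candidate orders and keep the best predicted score.
    n = X.shape[1]
    perms = itertools.permutations(range(n))
    return max(perms, key=lambda Pi: f_hat(X, np.array(Pi)))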
The choice of the function class ℱ is critical to the feasibility of the approach described above, as it involves combinatorial input and output domains. Next we discuss various issues that govern the choice of the function class ℱ.

Choosing the function class ℱ: As the inference problem in (<ref>) involves optimization over the set of permutations 𝒫_n, its computational complexity is 𝒪(n!) = 𝒪(n^n), making it computationally prohibitive even for modest values of n. Therefore, the first requirement on the function class ℱ is to make the corresponding inference problem in (<ref>) feasible. In addition to the inference complexity, note that for a fixed list of items 𝐗, the function f(𝐗, Π) can take n! different values by choosing different Π ∈ 𝒫_n. Therefore, the second requirement on the function class ℱ is that it should prevent over-fitting, and the estimate f̂_𝒟_N should have reasonable variance with a practically feasible number of training data points. Both these requirements can be handled if the function class ℱ is simple. For these purposes, we propose the class of functions that can be decomposed as follows:

f(𝐗,Π) = ∑_i=1^n h(𝐱_i, π_i),

where h ∈ ℋ and ℋ is some class of functions defined from (ℝ^d, {1,⋯,n}) to ℝ^+. The specific structure considered in (<ref>) is simple because the overall score predicted by these functions depends only on the item feature vectors and the locations at which they appear in Π. Further, note that these functions still take n! values for a given 𝐗 by choosing different Π ∈ 𝒫_n. However, each of these values is a sum of some n entries chosen from a scoring matrix 𝐒_h(𝐗) defined as

𝐒_h(𝐗) = [ h(𝐱_1,1) ⋯ h(𝐱_1,n); ⋮ ⋯ ⋮; h(𝐱_n,1) ⋯ h(𝐱_n,n) ].

For a given order Π, all the terms in the summation in (<ref>) can be obtained from the entries of the scoring matrix 𝐒_h(𝐗). This essentially implies that the functions following the decomposition in (<ref>) have an inherent low-dimensional structure.

Training with ℋ: When training with the decomposition in (<ref>), the empirical risk minimization problem in (<ref>) reduces to

ĥ_𝒟_N = min_h∈ℋ ∑_i=1^N (s^(i) - ∑_j=1^n h(𝐱_j^(i), π_j^(i)))^2.

The actual complexity of the above training problem will depend on the specific choice of the function class ℋ. This issue will be discussed in greater detail later in this paper when we consider a specific example of ℋ.

Inference with ℋ: The inference problem in (<ref>) reduces to the following problem:

Π̂(𝐗) = max_Π∈𝒫_n ∑_i=1^n ĥ_𝒟_N(𝐱_i, π_i).

Further, observing that Π is a valid permutation, i.e., at one location only one item is placed, we do a change of variables from the permutation Π ∈ 𝒫_n to a permutation matrix 𝐏 ∈ ℝ^n × n. The permutation matrix 𝐏 is such that its entries are either 1 or 0 and there is exactly one non-zero entry in each column and row. The rows of 𝐏 can be obtained for a given Π in such a manner that if the i^th item goes to the j^th location, then P_i,j = 1. This implies that there is a one-to-one mapping from Π to 𝐏, and the objective in problem (<ref>) can be written in terms of 𝐏 as follows:

∑_i=1^n ĥ_𝒟_N(𝐱_i, π_i) = ∑_i=1^n ∑_j=1^n P_ij ĥ_𝒟_N(𝐱_i, j).

Next we use the notion of the scoring matrix introduced in (<ref>): we introduce the analogous scoring matrix 𝐒_ĥ_𝒟_N(𝐗) whose (i,j)^th entry is ĥ_𝒟_N(𝐱_i, j). With this, the inference problem in (<ref>) can be converted to an equivalent problem as follows:

max_𝐏∈ℝ^n × n Tr(𝐏^T 𝐒_ĥ_𝒟_N(𝐗)) subject to ∑_i=1^n P_ij = 1 ∀ j, ∑_j=1^n P_ij = 1 ∀ i, P_ij ∈ {0,1} ∀ i,j,

where Tr(·) represents the sum of the diagonal entries of a matrix.
Problem (<ref>) is an instance of the classical linear sum assignment problem, which, due to the total unimodularity of the constraints, can be solved efficiently by relaxing it to the following linear program <cit.>:

max_𝐏∈ℝ^n×n Tr(𝐏^T M_ĥ_𝒟_N(𝐗)) subject to ∑_i=1^n P_ij = 1 ∀ j, ∑_j=1^n P_ij = 1 ∀ i, P_ij ≥ 0 ∀ i,j.

The above inference problem is a linear program in n^2 variables that can be solved in polynomial time, as compared to the original inference problem in (<ref>), whose complexity without our choice of a simpler function class could be O(n!) in the worst case. Recently, a fast greedy algorithm with a provably 1/2-optimal solution and a worst-case runtime of O(n^2) was used in <cit.> for online constrained ranking problems.

§ AN INSTANCE OF ℋ: THE ITEM-PAYOFF AND POSITIONAL-GAIN MODEL

Here we propose a specific instance of the class ℋ that follows the decomposition in (<ref>). The proposed model utilizes the notions of positional-gains and item-payoffs. For the given list of items 𝐗, the item-payoff vector, whose i-th entry denotes the payoff associated with the i-th item, is modeled as

exp(𝐗^T 𝐯^*) = [ exp(𝐱_1^T 𝐯^*), ⋯, exp(𝐱_n^T 𝐯^*) ]^T,

where 𝐯^* ∈ ℝ^d is a fixed ground truth weight vector. The positional-gain is a property of the position and is defined by the gain vector 𝐠^* ∈ ℝ^n, whose i-th component g_i^* denotes the gain associated with the i-th position. With this, for the given list of items 𝐗 and order Π∈𝒫_n, the score is calculated as

f(Π,𝐗) = (𝐠^*)^T exp(𝐗_Π^T 𝐯^*),

where 𝐗_Π is the matrix whose columns are obtained by ordering the columns of 𝐗 as per Π. The function in (<ref>) is an instance of the function class defined in (<ref>) with h(𝐱_i,π_i) = g_π_i^* exp(𝐱_i^T 𝐯^*). A similar model was proposed in <cit.> in the context of explore-and-exploit in top-N recommender systems; however, it focused on modeling item relevance under the assumption that the first position is more important than the second, and so on. In contrast, we do not make such an assumption in our problem setup.

Training: As h(𝐱_i,π_i) = g_π_i^* exp(𝐱_i^T 𝐯^*), the function class is parametrized by the positional-gain vector 𝐠 and the weight vector 𝐯. With this, the empirical risk minimization problem in (<ref>) reduces to

min_𝐯∈ℝ^d, 𝐠∈ℝ^n ∑_i=1^N ( s^(i) - 𝐠^T exp((𝐗^(i)_Π^(i))^T 𝐯) )^2.

The above problem suffers from a scaling ambiguity due to the product term 𝐠^T exp((𝐗^(i)_Π^(i))^T 𝐯). In addition, this term increases exponentially with the scaling of 𝐯, which results in numerical overflow issues. For these reasons, instead of solving problem (<ref>) we solve the following modified problem for training:

min_𝐯∈ℝ^d, 𝐠∈ℝ^n ∑_i=1^N ( s^(i) - 𝐠^T exp((𝐗^(i)_Π^(i))^T 𝐯) )^2 + λ‖𝐠‖_2^2 subject to ‖𝐯‖_2 ≤ 1,

where λ>0 is a regularization parameter. Even though we have addressed the scaling ambiguity, the problem in (<ref>) is still jointly non-convex in 𝐯 and 𝐠. However, for a fixed 𝐯 the problem is convex in 𝐠 and, similarly, for a fixed 𝐠 the problem is convex in 𝐯. Based on this, we propose an alternating minimization based algorithm for approximately solving problem (<ref>). Starting with the initialization 𝐠^(0) = 1_n/√n, we alternately minimize with respect to 𝐠 and 𝐯 until convergence.
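Since the algorithm boxes are not reproduced in this extraction, the following is a rough sketch, under our own naming and step-size choices, of the alternating scheme just described (closed-form ridge update for 𝐠, projected gradient descent for 𝐯):

# A sketch (not the paper's code): alternating minimization for
#   min_{v,g} sum_i (s_i - g^T exp(X_i^T v))^2 + lam * ||g||^2  s.t.  ||v|| <= 1,
# where each X_i has its columns already arranged in the input order Pi^(i).
import numpy as np

def fit_payoff_gain(Xs, ss, lam=0.1, outer_iters=50, lr=1e-2):
    d, n = Xs[0].shape
    v = np.zeros(d)
    g = np.ones(n) / np.sqrt(n)                  # g^(0) = 1_n / sqrt(n)
    for _ in range(outer_iters):
        # g-update: ridge regression in closed form, features a_i = exp(X_i^T v).
        A = np.stack([np.exp(X.T @ v) for X in Xs])          # N x n
        g = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ np.asarray(ss))
        # v-update: projected gradient descent on the squared loss over ||v|| <= 1.
        for _ in range(10):
            grad = np.zeros(d)
            for X, s in zip(Xs, ss):
                a = np.exp(X.T @ v)
                grad += -2.0 * (s - g @ a) * (X @ (g * a))
            v -= lr * grad
            norm = np.linalg.norm(v)
            if norm > 1.0:
                v /= norm                        # project back onto the unit ball
    return v, g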
The final procedure is detailed in Algorithm <ref>. The 𝐠-update step in this algorithm is a standard ℓ_2-regularized least squares problem, which can be solved in closed form, and the 𝐯-update step is a constrained convex program, which can be solved by the projected gradient descent approach shown in Algorithm <ref>.

Inference: After obtaining 𝐯̂, 𝐠̂ from Algorithm <ref>, they can be used to obtain an estimate of the score for the given list of items 𝐗 and input order Π as follows:

f̂(Π,𝐗) = 𝐠̂^T exp(𝐗_Π^T 𝐯̂).

For inferring the order that maximizes the predicted score, we first use the fact that for the item-payoff and positional-gain model h(𝐱_i,π_i) = ĝ_π_i exp(𝐱_i^T 𝐯̂), calculate the scoring matrix, and then solve the linear program in (<ref>). However, owing to the linear structure of the positional-gain and item-payoff model, the score-maximizing order simply corresponds to first sorting the estimated payoffs {exp(𝐱_i^T 𝐯̂)}_i=1^n of the items and then putting the item with the largest estimated payoff at the position with the largest estimated gain, and so on.

§ EXPERIMENTS

We evaluate our approach on synthetic as well as real data.

§.§ Synthetic Data

For the synthetic experiments, we fixed the list size as n=5 and the dimensionality of the feature vectors as d=10. The mean vectors {μ_i}_i=1^n for the items were generated once at the start of the experiment, with components i.i.d. uniformly distributed in the interval [0,1]. A random list of items is generated such that the feature vector of the i-th item in the list follows a multivariate Gaussian distribution 𝒩(μ_i, 𝐈_d/10). Further, using a randomly generated vector 𝐯^* ∈ ℝ^d (generated once at the start of the experiment), the score for a given input order Π and list of items 𝐗 was generated as follows:

s(Π,𝐗) = (𝐠^*)^T [ exp(𝐗_Π^T 𝐯^*) / 1_n^T exp(𝐗_Π^T 𝐯^*) ],

where 𝐠^* ∈ ℝ^n is a fixed positional-gain vector. This serves as a ground truth model for calculating the score. Note that this score calculation does not exactly follow the item-payoff and positional-gain model. For training, N=1000 lists were randomly generated; the input order for each list was chosen uniformly at random from the set 𝒫_n, and the corresponding score was calculated using (<ref>) to obtain the training data {((𝐗^(i), Π^(i)), s^(i))}_i=1^N.

For training weighted ListMLE we fixed the function g(𝐱) = 𝐱^T 𝐰, where 𝐰 ∈ ℝ^d, and solved (<ref>) to obtain 𝐰̂. With linear g(·), the problem in (<ref>) can be shown to be a convex program that can be solved efficiently by a gradient descent algorithm. For a new list, the final order was obtained using (<ref>) with ĝ(𝐱) = 𝐱^T 𝐰̂. For the second approach, we used the training data along with Algorithm <ref> to obtain 𝐯̂ and 𝐠̂. For a new list, these vectors were used to obtain the final order by maximizing the predicted score in equation (<ref>). We compare our approaches to ListMLE, which requires access to the desired order, i.e., the order that maximizes the score in (<ref>). However, this is not available in the above experimental setting. Typical ListMLE would use an order obtained by sorting the per-item relevance scores. Here, we provided the relevance score vector for the items in each list as 𝐲^(i) = (𝐗^(i))^T 𝐯^*, whose components were sorted to obtain the training data {(𝐗^(i), Π^(i)_2 = sort(𝐲^(i)))}. Here too, we fixed g(𝐱) = 𝐱^T 𝐰 and solved (<ref>) to obtain 𝐰̂. For a new list of items, 𝐰̂ was used to predict the final order by maximizing the probability in (<ref>).
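For reference, here is a sketch of the synthetic protocol just described (our own encoding; the seed and helper names are arbitrary):

# A sketch (not the paper's code) of the synthetic protocol: n = 5 items,
# d = 10 features, Gaussian lists, and the ground-truth score
#   s(Pi, X) = g*^T [ exp(X_Pi^T v*) / 1_n^T exp(X_Pi^T v*) ].
import numpy as np

rng = np.random.default_rng(0)
n, d, N = 5, 10, 1000
mu = rng.uniform(0.0, 1.0, size=(n, d))          # per-item means, drawn once
v_star = rng.normal(size=d)                      # ground-truth weight vector
g_star = rng.normal(size=n)                      # a fixed positional-gain vector

def sample_list():
    # Column i ~ N(mu_i, I_d/10); one column per item.
    return np.stack([rng.normal(mu[i], np.sqrt(0.1)) for i in range(n)], axis=1)

def true_score(X, order):
    a = np.exp(X[:, list(order)].T @ v_star)     # payoffs in the presented order
    return float(g_star @ (a / a.sum()))         # normalized, as in the protocol

data = []
for _ in range(N):
    X = sample_list()
    order = tuple(rng.permutation(n))            # input order chosen uniformly
    data.append((X, order, true_score(X, order)))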
For testing, 500 new lists of items were randomly generated; for each list in the test data, the orders predicted by all the approaches were obtained, and the respective scores were calculated using the ground truth model in (<ref>). We repeated the experiments for three different positional-gain vectors, and the average scores of the various approaches are shown in Table <ref>. The 1st row represents the case when all positions have the same gain, i.e., there is no positional preference, and as expected all approaches perform the same. The 2nd row represents a case where the positional-gain vector is skewed so that only the third and fourth positions are important, whereas the 3rd row represents less skewed positional gains. We observe that our approaches perform better than ListMLE, and the second approach performs the best in both these cases. The superior performance of our approaches can be attributed to the fact that they model the input order explicitly. The main reason for the experiments on synthetic data was to understand the effectiveness of the proposed solutions in an ideal setting where the effect of the input order can be precisely controlled. Using various positional-gain vectors, we were able to show empirically that our proposed approaches are successful compared to traditional ListMLE. We acknowledge that there may be many other ways of generating the score s(𝐗, Π), but for the purposes of demonstrating the main idea we chose the specific model in (<ref>). Our main goal here is to highlight a setting where the desired order is not available during training and sorting according to relevance scores may not be the best choice. We note that our main critique is not of a particular learning-to-rank technique but of its problem setting, and ListMLE just serves as a popular representative example of that problem setting. Next, we demonstrate the effectiveness of our approaches in a real-life setting.

§.§ Real data

For the experiments with real data we used data from Yahoo! (www.yahoo.com), which is predominantly a news website; the items in this setup are the news articles. Each article can be related to a few content categories out of a total of 405 categories internally defined by Yahoo!. For instance, an article can have a score of 0.5 towards the category politics along with a score of 0.1 towards entertainment. The association of articles to these categories is part of Yahoo's content understanding platform, whose details are out of the scope of this paper. But as an outcome of this content ingestion and understanding pipeline, each article is represented by a feature vector in a d = 405 dimensional space. Each user is served a list of articles, and the order in which these articles are presented is captured by our training data. The size of each list was fixed at n=3. The data was collected from logs obtained over one day of website usage. From the resulting logs, we obtained the lists of news articles and their feature representations, the order in which they were presented, and the corresponding dwell time, i.e., the average time the user spent on the entire list. The metric of interest here is the dwell time. After preprocessing the data we obtained a total of 4950 data points, out of which we used 4000 examples for training and the rest for testing. We note that the dwell time is affected by the relative position of the news articles with respect to each other.
In this particular real-life application there is no clear notion of per-item relevance; rather, we just have the dwell time, which is a function of the list of news articles and the order in which they were shown on the website. This is an example where traditional learning-to-rank approaches are not applicable. We nevertheless apply ListMLE, where the training data for ListMLE was fixed as the lists of news articles and the orders in which they were shown by the existing ranking mechanism. In this manner, ListMLE learns to predict orders as per the current ranking system on the news website.

We note here that assessing the quality of the orderings given by the various approaches is a bit tricky, because in the test set the dwell times corresponding to all the permutations of a given list of items are not available. In other words, we only have partial ground truth orders available to us. To deal with this, we first found the various orders available for a given list of news articles in the test data and noted their dwell times. We then ranked these orders for a given list of items based on their dwell times. For the same list of news articles we calculated the predicted scores for these orders using the item-payoff and positional-gain model, and the probabilities in the case of weighted ListMLE and ListMLE. Finally, the Normalized Discounted Cumulative Gain (NDCG) score between the ranking of orders by decreasing dwell time from the test data and the ranking of orders produced by the different approaches was calculated. The goal here is to check whether our approaches give higher scores to the orders with higher dwell times. We also calculated the average dwell time for the top order (among the orders available in the test data) predicted by each approach. The final results are shown in Table <ref>. The reported results are averaged over 10 random splits of the data into training and test sets. We observe that our approaches perform better than ListMLE in terms of average NDCG and average dwell time for the top-1 predicted order. The item-payoff and positional-gain model based approach performs the best. These results show that our approach predicts orders that correlate more with the ordering by dwell time. The relative performance of these three approaches can be understood based on how they model the input order: ListMLE, which does not model the input order, performs the worst, followed by weighted ListMLE, which can be construed as a minor modification of ListMLE, whereas the item-payoff and positional-gain approach explicitly models the input order and performs the best.

§ CONCLUSION AND FUTURE DIRECTIONS

In this paper, we investigated the problem of ranking a list of items to maximize a given metric of interest when the best or desired order is not provided during training. Following the learning-to-rank route to solve this problem, we revealed a new problem setup that is usually not considered in the traditional learning-to-rank literature. We proposed two approaches: (1) weighted ListMLE and (2) a generic machine learning framework, with the item-payoff and positional-gain model as an instance of the generic framework. The effectiveness of the proposed approaches was demonstrated in simulated as well as real-life settings of ranking news articles for increased dwell time. Future directions for this work include establishing sample complexity bounds and generalization guarantees for the proposed approaches.
Exploring more complex models than the item-payoff and positional-gain model is yet another interesting direction for future research.
We prove an explicit formula for the arithmetic intersection number of diagonal cycles on GSpin Rapoport–Zink spaces in the minuscule case. This is a local problem arising from the arithmetic Gan–Gross–Prasad conjecture for orthogonal Shimura varieties. Our formula can be viewed as an orthogonal counterpart of the arithmetic-geometric side of the arithmetic fundamental lemma proved by Rapoport–Terstiege–Zhang in the minuscule case.

§ INTRODUCTION

§.§ Motivation

The arithmetic Gan–Gross–Prasad conjectures (arithmetic GGP) generalize the celebrated Gross–Zagier formula to higher dimensional Shimura varieties (<cit.>, <cit.>). They are conjectural identities relating the heights of certain algebraic cycles on Shimura varieties to the central derivatives of certain Rankin–Selberg L-functions. Let us briefly recall the rough statement of the conjecture. The diagonal embedding of unitary groups

H=U(1,n-1)↪ G=U(1,n-1)×U(1,n),

or of orthogonal groups

H=SO(2,n-1)↪ G=SO(2,n-1)×SO(2,n),

induces an embedding of Shimura varieties Sh_H↪Sh_G. We denote its image by Δ and call it the diagonal cycle or the GGP cycle on Sh_G. Let π be a tempered cuspidal automorphic representation on G appearing in the middle cohomology of Sh_G. Let Δ_π be the (cohomologically trivialized) π-component of Δ. The arithmetic GGP conjecture asserts that the (conditional) Beilinson–Bloch–Gillet–Soulé height of Δ_π should be given by the central derivative of a certain Rankin–Selberg L-function L(s,π), up to simpler factors:

⟨Δ_π,Δ_π⟩ ∼ L'(1/2, π).

The Gross–Zagier formula <cit.> and the work of Gross, Kudla, Schoen (<cit.>, <cit.>) can be viewed as the special cases n=1 and n=2 of the orthogonal case respectively. The recent work of Yuan–Zhang–Zhang (<cit.>, <cit.>) has proved this conjecture for n=1,2 in the orthogonal case in vast generality.

In the unitary case, W. Zhang has proposed an approach for general n using the relative trace formula of Jacquet–Rallis. The relevant arithmetic fundamental lemma relates an arithmetic intersection number of GGP cycles on unitary Rapoport–Zink spaces with a derivative of orbital integrals on general linear groups. The arithmetic fundamental lemma has been verified for n=1,2 (<cit.>) and for general n in the minuscule case by Rapoport–Terstiege–Zhang <cit.>.

In the orthogonal case, very little is currently known beyond n=1,2, and no relative trace formula approach has been proposed yet. However, it is notable that R. Krishna <cit.> has recently established a relative trace formula for the case SO(2)×SO(3), and one can hope that his method will generalize to give a relative trace formula approach for general SO(n-1)×SO(n).

Our goal in this article is to establish an orthogonal counterpart of the arithmetic-geometric side of the arithmetic fundamental lemma in <cit.>, namely to formulate and compute the arithmetic intersection of GGP cycles on GSpin Rapoport–Zink spaces in the minuscule case.

§.§ The main results

Let p be an odd prime. Let k=𝔽̄_p, W=W(k), K=W[1/p], and let σ∈Aut(W) be the lift of the absolute p-Frobenius on k. Let n≥4. Let V^♭ be a self-dual quadratic space over ℤ_p of rank n-1 and let V=V^♭⊕ℤ_p x_n (orthogonal direct sum) be a self-dual quadratic space over ℤ_p of rank n, where x_n has norm 1.
Associated to the embedding of quadratic spaces V^♭↪ V we have an embedding of algebraic groups G^♭=GSpin(V^♭)↪ G=GSpin(V) over ℤ_p. After suitably choosing compatible local unramified Shimura–Hodge data (G^♭, b^♭, μ^♭, C(V^♭))↪ (G, b, μ, C(V)), we obtain a closed immersion of the associated GSpin Rapoport–Zink spaces

δ: RZ^♭↪RZ.

See <ref> for precise definitions and see <ref> for the moduli interpretation of δ. The space RZ is an example of the Rapoport–Zink spaces of Hodge type recently constructed by Kim <cit.> and Howard–Pappas <cit.>. It is a formal scheme over W, parameterizing deformations of a p-divisible group 𝕏_0/k with certain crystalline Tate tensors (coming from the defining tensors of G inside some GL_N). Roughly speaking, if X^♭ is the p-divisible group underlying a point x∈RZ^♭, then the p-divisible group underlying δ(x)∈RZ is given by X=X^♭⊕X^♭. The datum (G,b,μ,C(V)) is chosen such that the space RZ provides a p-adic uniformization of (𝒮_W)_/𝒮_ss, the formal completion of 𝒮_W along 𝒮_ss, where 𝒮_W is the base change to W of Kisin's integral model (<cit.>) of a GSpin Shimura variety (which is of Hodge type) at a good prime p, and 𝒮_ss is the supersingular locus (= the basic locus in this case) of the special fiber of 𝒮_W (see <cit.>).

The group J_b(ℚ_p)={g∈ G(K): gb=bσ(g)} is the group of ℚ_p-points of an inner form of G and acts on RZ via its action on the fixed p-divisible group 𝕏_0. Let g∈ J_b(ℚ_p). As explained in <ref>, the intersection of the GGP cycle Δ on RZ^♭×_W RZ and its g-translate leads to the study of the formal scheme

δ(RZ^♭)∩RZ^g,

where RZ^g denotes the g-fixed points of RZ. We call g∈ J_b(ℚ_p) regular semisimple if

L(g):=ℤ_px_n+ℤ_p gx_n+⋯+ℤ_p g^n-1x_n

is a free ℤ_p-module of rank n. Let L(g)^∨ denote the dual lattice of L(g). We further call g minuscule if L(g)⊂ L(g)^∨ (i.e. the quadratic form restricted to L(g) is valued in ℤ_p), and L(g)^∨/L(g) is an 𝔽_p-vector space. See Definition <ref> for equivalent definitions. When g∈ J_b(ℚ_p) is regular semisimple and minuscule, we will show that the formal scheme (<ref>) is in fact a 0-dimensional scheme of characteristic p. Our main theorem is an explicit formula for its arithmetic intersection number (i.e., the total W-length of its local rings).

To state the formula, assume g is regular semisimple and minuscule, and suppose RZ^g is nonempty. Then g stabilizes both L(g)^∨ and L(g) and thus acts on the 𝔽_p-vector space L(g)^∨/L(g). Let P(T)∈𝔽_p[T] be the characteristic polynomial of g acting on L(g)^∨/L(g). For any irreducible polynomial R(T)∈𝔽_p[T], we denote its multiplicity in P(T) by m(R(T)). Moreover, for any polynomial R(T), we define its reciprocal by

R^*(T):=T^deg R(T)·R(1/T).

We say R(T) is self-reciprocal if R(T)=R^*(T). Now we are ready to state our main theorem: Let g∈ J_b(ℚ_p) be regular semisimple and minuscule. Assume RZ^g is non-empty. Then

* (Corollary <ref>) δ(RZ^♭)∩RZ^g is a scheme of characteristic p.
* (Theorem <ref>) δ(RZ^♭)∩RZ^g is non-empty if and only if P(T) has a unique self-reciprocal monic irreducible factor Q(T)|P(T) such that m(Q(T)) is odd. In this case, p^ℤ\(δ(RZ^♭)∩RZ^g)(k) is finite and has cardinality

deg Q(T)·∏_R(T)(1+m(R(T))),

where R(T) runs over all non-self-reciprocal monic irreducible factors of P(T). Here, the group p^ℤ acts on RZ via the central embedding p^ℤ↪ J_b(ℚ_p), and the action stabilizes δ(RZ^♭)∩RZ^g.
* (Corollary <ref>) Let c = (m(Q(T))+1)/2. Then 1≤ c ≤ n/2. Assume p>c. Then δ(RZ^♭)∩RZ^g is a disjoint union over its k-points of copies of Spec k[X]/(X^c). In particular, the intersection multiplicity at each k-point of δ(RZ^♭)∩RZ^g is the same and equals c.
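To make the counting formula in part (2) concrete, here is a small sketch (our own illustration, not part of the paper) that evaluates it with SymPy for a made-up P(T). Following the enumeration of factorizations P=P_1QP_1^* in the proof given in <ref> below, and since m(R)=m(R^*) when P is self-reciprocal, we read the product as contributing one factor 1+m(R) per pair {R(T), R^*(T)} of non-self-reciprocal factors:

# A sketch: evaluating the counting formula of Theorem A (2) over F_p with SymPy.
from sympy import Poly, symbols

T = symbols('T')
p = 7

def monic_reciprocal(f):
    # R*(T) = T^(deg R) * R(1/T), normalized to be monic over F_p.
    return Poly(list(reversed(f.all_coeffs())), T, modulus=p).monic()

def coeff_key(f):
    return tuple(int(c) % p for c in f.all_coeffs())

def ggp_count(P):
    _, factors = Poly(P, T, modulus=p).factor_list()
    factors = [(f.monic(), m) for f, m in factors]
    odd_self_recip = [(f, m) for f, m in factors
                      if coeff_key(f) == coeff_key(monic_reciprocal(f)) and m % 2 == 1]
    if len(odd_self_recip) != 1:
        return 0                      # the intersection is empty in this case
    Q, _ = odd_self_recip[0]
    count, seen = Q.degree(), set()
    for f, m in factors:
        fk, fsk = coeff_key(f), coeff_key(monic_reciprocal(f))
        if fk != fsk and fk not in seen:
            seen.update([fk, fsk])    # one factor (1 + m) per pair {R, R*}
            count *= 1 + m
    return count

# Over F_7: Q(T) = T^2 + 1 is irreducible and self-reciprocal with multiplicity 1,
# and {T - 3, T - 5} is a non-self-reciprocal pair (3 * 5 = 1 mod 7) with m = 2.
P = (T**2 + 1) * (T - 3)**2 * (T - 5)**2
print(ggp_count(P))                   # deg Q * (1 + 2) = 6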
Along the way we also prove a result that should be of independent interest. In <cit.>, Howard–Pappas define closed formal subschemes RZ_Λ of RZ for each vertex lattice Λ (recalled in <ref>). Howard–Pappas study the reduced subschemes RZ_Λ^red in detail and prove that these form a nice stratification of RZ^red. We prove:

[Theorem <ref>] RZ_Λ = RZ_Λ^red for each vertex lattice Λ.

§.§ Novelty of the method

Theorems A and B are parallel to the results in <cit.> for unitary Rapoport–Zink spaces. The main new difficulty in the GSpin case is due to the fact that, unlike in the unitary case, the GSpin Rapoport–Zink spaces are not of PEL type. They are only of Hodge type, and as of now they lack full moduli interpretations that are easy to work with directly (see Remark <ref>). In <cit.>, the most difficult parts are the reducedness of minuscule special cycles <cit.> and the intersection length formula <cit.>. These are the analogues of Theorem B and Theorem A (3) respectively. In <cit.>, they are proved using Zink's theory of windows and displays of p-divisible groups and involve rather delicate linear algebra computations. In contrast, in our method we rarely work directly with p-divisible groups, and we completely avoid computations with windows or displays. Instead, we make use of what are essentially consequences of Kisin's construction of integral models of Hodge type Shimura varieties to abstractly reduce the problem to algebraic geometry over k. More specifically, we reduce the intersection length computation to the study of a certain scheme of the form S_Λ^g̅ (Proposition <ref>), where S_Λ is a smooth projective k-variety closely related to orthogonal Grassmannians, and g̅ is a certain finite order automorphism of S_Λ. Thus our method overcomes the difficulty of non-PEL type and also makes the actual computation much more elementary.

It is worth mentioning that our method also applies to the unitary case considered in <cit.>. Even in this PEL-type case, our method gives a new and arguably simpler proof of the arithmetic fundamental lemma in the minuscule case. This will be pursued in a forthcoming work. It is also worth mentioning that the very recent work of Bueltel–Pappas <cit.> gives a new moduli interpretation for Rapoport–Zink spaces of Hodge type when restricted to p-nilpotent noetherian algebras. Their moduli description is purely group-theoretic (in terms of (G,μ)-displays) and does not involve p-divisible groups. Although we do not use (G,μ)-displays in this article, it would be interesting to see if it is possible to extend the results of this article using their group-theoretic description (e.g., to non-minuscule cases).

§.§ Strategy of the proofs

Our key observation is that in order to prove these theorems, we only need to understand the 𝒪-points of RZ for very special choices of W-algebras 𝒪. To prove Theorem B, it turns out that we only need to understand RZ(W/p^2) and RZ(k[ϵ]/ϵ^2). Note that the W-algebras W/p^2 and k[ϵ]/ϵ^2, when viewed as thickenings of k (under reduction modulo p or ϵ respectively), are objects of the crystalline site of k. For such an object 𝒪, we prove in Theorem <ref> an explicit description of RZ(𝒪) and, more generally, an explicit description of 𝒵(𝒪) for any special cycle 𝒵 in RZ. Theorem <ref> is the main tool to prove Theorem B, and is also the only place we use p-divisible groups. This result is a Rapoport–Zink space analogue of a result of Madapusi Pera <cit.> for GSpin Shimura varieties. Its proof also relies on loc. cit.
and is ultimately a consequence of Kisin's construction of the integral canonical models of Hodge type Shimura varieties <cit.>.

To prove the intersection length formula Theorem A (3), let Λ be the vertex lattice L(g)^∨. Theorem B allows us to reduce Theorem A (3) to the problem of studying the fixed-point subscheme of the smooth k-variety S_Λ≅ p^ℤ\RZ_Λ^red under the induced action g̅∈SO(Λ/Λ^∨) of g. Since the fixed-point subscheme of a smooth k-variety under a group of order coprime to p is still smooth (<cit.>), this point of view immediately explains that when g̅ is semisimple (in which case m(Q(T))=1), the intersection multiplicity must be 1. More generally, we utilize Howard–Pappas's description of S_Λ in <cit.> and reduce the intersection length computation to elementary algebraic geometry of orthogonal Grassmannians over k (Proposition <ref> and Theorem <ref>). The remaining parts of Theorem A are comparatively easy. From Theorem B it is not difficult to deduce Theorem A (1). The set of k-points of RZ is well understood group-theoretically in terms of the affine Deligne–Lusztig set. The point-counting formula Theorem A (2) essentially relies only on this description, and we follow the strategy in <cit.> to give a short streamlined proof (Proposition <ref>).

§.§ Organization of the paper

In <ref>, we review the structure of GSpin Rapoport–Zink spaces and special cycles. In <ref>, we formulate the arithmetic intersection problem of GGP cycles and prove the point-counting formula for the k-points of the intersection in the minuscule case (Theorem A (2)). In <ref>, we prove the reducedness of minuscule special cycles (Theorem B). In <ref>, we deduce from Theorem B that the arithmetic intersection is concentrated in the special fiber (Theorem A (1)) and finally compute the intersection length when p is sufficiently large (Theorem A (3)).

§.§ Acknowledgments

We are very grateful to B. Howard, M. Kisin, M. Rapoport and W. Zhang for helpful conversations or comments. Our debt to the two papers <cit.> and <cit.> should be clear to the readers.

§ GSPIN RAPOPORT–ZINK SPACES

In this section we review the structure of GSpin Rapoport–Zink spaces due to Howard–Pappas <cit.>. We refer to <cit.> for the proofs of these facts.

§.§ Quadratic spaces and GSpin groups

Let p be an odd prime. Let (V,q) be a non-degenerate self-dual quadratic space over ℤ_p of rank n≥3. By definition the Clifford algebra C(V) is the quotient of the tensor algebra V^⊗ by the two-sided ideal generated by elements of the form v⊗v-q(v). It is free of rank 2^n over ℤ_p. The linear map v↦-v preserves the quadratic form q on V and induces an involution on C(V). This involution decomposes C(V)=C^+(V)⊕C^-(V) into even and odd parts. The image of the injection V↪ C^-(V) generates C(V) as a ℤ_p-algebra. We also have a canonical involution *: C(V)→C(V), which is the ℤ_p-linear endomorphism characterized by (v_1v_2⋯v_k)^*=v_k⋯v_2v_1 for v_i∈V. The spinor similitude group G=GSpin(V) is the reductive group over ℤ_p such that for a ℤ_p-algebra R,

G(R)={g∈ C^+(V)^×: gV_Rg^-1=V_R, g^*g∈ R^×}.

The character η_G: G→𝔾_m given by g↦ g^*g is called the spinor similitude. The conjugation action g.v=gvg^-1 of G on C(V) stabilizes V and preserves the quadratic form q. This action thus defines a homomorphism

G→SO(V).

The kernel of the above morphism is the central 𝔾_m inside G given by the natural inclusion R^×⊂ G(R) for any ℤ_p-algebra R. The restriction of η_G to the central 𝔾_m is given by g↦ g^2.
Note that the central 𝔾_m in G is equal to the identity component of the center of G, and it is equal to the center of G precisely when n is odd.

§.§ Basic elements in GSpin groups

Let k=𝔽̄_p, W=W(k) and K=W[1/p]. Let σ∈Aut(W) be the lift of the absolute p-Frobenius on k. Let D=Hom_ℤ_p(C(V), ℤ_p) be the contragredient G-representation of C(V). Any b∈ G(K) determines two isocrystals

(V_K, Φ=b∘σ), (D_K, F=b∘σ).

Denote by 𝕋 the pro-torus over ℚ_p with character group ℚ. Recall that b∈ G(K) is basic if its slope morphism ν_b: 𝕋_K→ G_K factors through (the identity component of) Z(G_K), i.e., factors through the central 𝔾_m. By <cit.>, b is basic if and only if (V_K,Φ) is isoclinic of slope 0, if and only if (D_K, F) is isoclinic of slope -ν_b∈Hom(𝕋_K,𝔾_m)≅ℚ. The map b↦ν_b gives a bijection between the set of basic σ-conjugacy classes and the set 1/2ℤ. Moreover, the ℚ_p-quadratic space

V_K^Φ={x∈ V_K: Φx=x}

has the same dimension and determinant as V_ℚ_p, and has Hasse invariant (-1)^2ν_b (<cit.>).

§.§ Local unramified Shimura–Hodge data

Since V is self-dual, we know that V_ℚ_p has Hasse invariant +1. In particular V contains at least one hyperbolic plane, and we can pick a ℤ_p-basis x_1,…,x_n of V such that the Gram matrix of the quadratic form q has the form

[ 0 1; 1 0 ] ⊕ diag(*,…,*), with each * a unit in ℤ_p.

We will fix x_1,…,x_n once and for all. Define a cocharacter

μ: 𝔾_m→ G, t↦ t^-1x_1x_2+x_2x_1.

Pick the explicit element b=x_3(p^-1x_1+x_2)∈ G(ℚ_p); then one can show that b is basic with ν_b=1/2. Thus V_K^Φ has the opposite Hasse invariant -1 (cf. <ref>). Fix any δ∈ C(V)^× such that δ^*=-δ. Then ψ_δ(c_1,c_2)=Trd(c_1δc_2^*) defines a non-degenerate symplectic form on C(V), where Trd: C(V)→ℤ_p is the reduced trace. We have a closed immersion into the symplectic similitude group

G↪GSp(C(V), ψ_δ).

By <cit.>, the tuple (G, b, μ, C(V)) defines a local unramified Shimura–Hodge datum (in the sense of <cit.>). In fact, for the fixed G and μ, the σ-conjugacy class of b is the unique basic σ-conjugacy class for which (G, b, μ) is a local unramified Shimura–Hodge datum (cf. <cit.>).

The tuple (G, b, μ, C(V)) is chosen in such a way that the associated Rapoport–Zink space provides a p-adic uniformization of the supersingular locus of a related Shimura variety. For more details on the relation with Shimura varieties see <cit.>.

§.§ GSpin Rapoport–Zink spaces

There is a unique (up to isomorphism) p-divisible group 𝕏_0/k such that its (contravariant) Dieudonné module 𝔻(𝕏_0) is given by the W-lattice D_W in the isocrystal D_K. The non-degenerate symplectic form ψ_δ induces a principal polarization λ_0 of 𝕏_0. Fix a collection of tensors (s_α) on C(V) cutting out G from GL(C(V)) (including the symplectic form ψ_δ). By <cit.>, we have a GSpin Rapoport–Zink space

RZ:=RZ(G, b, μ, C(V), (s_α)).

It is a formal scheme over W, together with a closed immersion into the symplectic Rapoport–Zink space RZ(𝕏_0,λ_0). Moreover, the formal scheme RZ itself depends only on the local unramified Shimura–Hodge datum (G,b,μ,C(V)), and not on the choice of the tensors (s_α). Denote by (X, ρ, λ) the universal triple over RZ(𝕏_0, λ_0), where X is the universal p-divisible group, ρ is the universal quasi-isogeny, and λ is the universal polarization. Consider the restriction of this triple to the closed formal subscheme RZ of RZ(𝕏_0,λ_0). We denote this last triple also by (X,ρ,λ) and call it the universal triple over RZ.
Let Nilp_W be the category of W-algebras in which p is nilpotent. As a set-valued functor on the category Nilp_W, the symplectic Rapoport–Zink space RZ(𝕏_0, λ_0) has an explicit moduli interpretation in terms of triples (X,ρ,λ). In contrast, the subfunctor defined by RZ does not have an explicit description. In fact, in <cit.> Howard–Pappas only give a moduli interpretation of RZ when it is viewed as a set-valued functor on a certain more restricted category of W-algebras. In this article we do not make use of this last moduli interpretation. All we will need is the global construction of RZ as a formal subscheme of RZ(𝕏_0, λ_0) due to Howard–Pappas.

Over RZ, the universal quasi-isogeny ρ respects the polarizations λ and λ_0 up to a scalar c(ρ)∈ℚ_p^×, i.e., ρ^∨∘λ∘ρ=c^-1(ρ)·λ_0 (Zariski locally on RZ). Let RZ^(ℓ)⊆RZ be the open and closed formal subscheme where ord_p(c(ρ))=ℓ. We have the decomposition into a disjoint union

RZ=∐_ℓ∈ℤ RZ^(ℓ).

In fact each RZ^(ℓ) is connected and they are mutually (non-canonically) isomorphic, cf. <cit.>.

§.§ The group J_b

The algebraic group J_b=GSpin(V_K^Φ) has ℚ_p-points

J_b(ℚ_p)={g∈ G(K): gb=bσ(g)},

and J_b(ℚ_p) acts on RZ via its action on 𝕏_0 by quasi-endomorphisms. The action of g∈ J_b(ℚ_p) on RZ restricts to isomorphisms

RZ^(ℓ) ≅ RZ^(ℓ+ord_p(η_b(g))), ℓ∈ℤ,

where η_b: J_b(ℚ_p)→ℚ_p^× is the spinor similitude. In particular, p^ℤ⊆ J_b(ℚ_p) acts on RZ, and since η_b(p)=p^2 we have an isomorphism

p^ℤ\RZ ≅ RZ^(0)∐RZ^(1).

In this article we are interested in studying the fixed locus RZ^g of RZ under g∈ J_b(ℚ_p). By (<ref>) this is non-empty only when ord_p(η_b(g))=0. Since p^ℤ is central in J_b(ℚ_p), one could also study (p^ℤ\RZ)^g for g∈ J_b(ℚ_p). However, by (<ref>), we know that (p^ℤ\RZ)^g≠∅ only if ord_p(η_b(g)) is even, and in this case

(p^ℤ\RZ)^g ≅ p^ℤ\RZ^g_0,

where g_0=p^-ord_p(η_b(g))/2·g. Hence the study of (p^ℤ\RZ)^g for general g reduces to the study of RZ^g for g satisfying ord_p(η_b(g))=0.

§.§ Special endomorphisms

Using the injection V↪ C(V)^op, we can view

V⊆End_ℤ_p(D)

as special endomorphisms of D: the action of v∈ V on D is explicitly given by

(vd)(c)=d(vc), d∈ D, c∈ C(V).

Base changing to K gives V_K⊆End_K(D_K). Since the F-equivariant endomorphisms End_K,F(D_K) can be identified with the space of quasi-endomorphisms End^0(𝕏_0) of 𝕏_0, we obtain an embedding of ℚ_p-vector spaces

V_K^Φ↪End^0(𝕏_0).

Elements of V_K^Φ are thus viewed as quasi-endomorphisms of 𝕏_0, and we call them special quasi-endomorphisms.

§.§ Vertex lattices

A vertex lattice is a ℤ_p-lattice Λ⊆ V_K^Φ such that

pΛ⊆Λ^∨⊆Λ.

We define

Ω_0=Λ/Λ^∨.

Then the quadratic form v↦ p·q(v) makes Ω_0 a non-degenerate quadratic space over 𝔽_p. The type of Λ is defined to be t_Λ:=dim_𝔽_pΩ_0. By <cit.>, the type of a vertex lattice is always an even integer such that 2≤ t_Λ≤ t_max, where

t_max = n-2, if n is even and disc(V_ℚ_p)=(-1)^n/2∈ℚ_p^×/(ℚ_p^×)^2;
t_max = n-1, if n is odd;
t_max = n, if n is even and disc(V_ℚ_p)≠(-1)^n/2∈ℚ_p^×/(ℚ_p^×)^2.

It follows that the quadratic space Ω_0 is always non-split, because otherwise a Lagrangian subspace ℒ⊆Ω_0 would provide a vertex lattice Λ^∨+ℒ⊆ V_K^Φ of type 0 (cf. <cit.>).

§.§ The variety S_Λ

Define

Ω=Ω_0⊗_𝔽_pk ≅ Λ_W/Λ_W^∨.

Let d=t_Λ/2. Let OGr(Ω) be the moduli space of Lagrangian subspaces ℒ⊆Ω.
We define S_Λ⊆OGr(Ω) to be the reduced closed subscheme of OGr(Ω) whose k-points are given by

S_Λ(k) = {Lagrangian subspaces ℒ⊆Ω: dim(ℒ+Φ(ℒ))=d+1} ≅ {(ℒ_d-1,ℒ_d): ℒ_d⊆Ω Lagrangian, ℒ_d-1⊆ℒ_d∩Φℒ_d, dim ℒ_d-1=d-1},

where the last bijection is given by ℒ↦(ℒ∩Φℒ, ℒ). More precisely, for any k-algebra R, the set S_Λ(R) of R-points is the set of pairs (ℒ_d-1, ℒ_d) such that:

* ℒ_d is a totally isotropic R-submodule of Ω⊗_kR that is an R-module local direct summand of Ω⊗_kR of local rank d,
* ℒ_d-1 is an R-module local direct summand of Ω⊗_kR of local rank d-1,
* ℒ_d-1⊂ℒ_d∩Φℒ_d, where Φ acts on Ω⊗_kR via Φ on Ω and the p-power Frobenius on R. In particular, ℒ_d-1 is totally isotropic, and is a local direct summand of ℒ_d and of Φℒ_d. (For the last statement see the Remark below.)

By <cit.>, S_Λ is a k-variety with two isomorphic connected components S_Λ^±, each being projective and smooth of dimension t_Λ/2-1. For more details, see <cit.> and <cit.>.

In the sequel we will frequently use the following simple fact without explicitly mentioning it. Let R be a commutative ring and M a free R-module of finite rank. Suppose M_1, M_2 are submodules of M that are local direct summands of M. Suppose M_1⊂M_2. Then M_1 is a local direct summand of M_2, and both M_1 and M_2 are locally free.

§.§ Structure of the reduced scheme RZ^red

For a vertex lattice Λ, we define RZ_Λ⊆RZ to be the locus where ρ∘Λ^∨∘ρ^-1⊆End(X), i.e. where the quasi-endomorphisms ρ∘v∘ρ^-1 lift to actual endomorphisms for all v∈Λ^∨. In other words, if we define a locus RZ(𝕏_0,λ_0)_Λ using the same condition inside RZ(𝕏_0, λ_0) (a closed formal subscheme by <cit.>), then RZ_Λ is the intersection of RZ with RZ(𝕏_0,λ_0)_Λ inside RZ(𝕏_0,λ_0). In particular, RZ_Λ is a closed formal subscheme of RZ.

Consider the reduced subscheme RZ^(ℓ),red of RZ^(ℓ). By <cit.>, the irreducible components of RZ^(ℓ),red are precisely RZ_Λ^(ℓ),red, where Λ runs through the vertex lattices of maximal type t_Λ=t_max. Moreover, there is an isomorphism of k-schemes (<cit.>)

p^ℤ\RZ_Λ^red ≅ S_Λ,

which also induces an isomorphism between RZ_Λ^(ℓ),red and S_Λ^±, for each ℓ∈ℤ. In particular, RZ^red is equidimensional of dimension t_max/2-1.

§.§ The Bruhat–Tits stratification

For any vertex lattices Λ_1 and Λ_2, the intersection RZ_Λ_1^red∩RZ_Λ_2^red is nonempty if and only if Λ_1∩Λ_2 is also a vertex lattice, in which case it is equal to RZ_Λ_1∩Λ_2^red (<cit.>). In this way we obtain a Bruhat–Tits stratification of RZ^red. Associated to a vertex lattice Λ, we define an open subscheme of RZ_Λ^red given by

BT_Λ=RZ^red_Λ − ⋃_Λ'⊊Λ RZ_Λ'^red.

Then

RZ^red=∐_Λ BT_Λ

is a disjoint union of locally closed subschemes, indexed by all vertex lattices.

§.§ Special lattices

One can further parametrize the k-points in each BT_Λ using special lattices. We say a W-lattice L⊆ V_K is a special lattice if L is self-dual and (L+Φ(L))/L≅ W/pW. We have a bijection (<cit.>)

p^ℤ\RZ(k) ≅ {special lattices L⊆ V_K}.

To construct this bijection, one uses the fact (<cit.>) that p^ℤ\RZ(k) can be identified with the affine Deligne–Lusztig set

X_G,b,μ^σ(k)={g∈ G(K): g^-1bσ(g)∈ G(W)μ^σ(p)G(W)}/G(W).

The special lattice associated to g∈ G(K) is then given by gμ(p^-1).V_W⊆ V_K. Conversely, given a special lattice L⊆ V_K, there exists some g∈ G(K) such that gμ(p^-1).V_W=L and g.V_W=Φ(L). The point in RZ(k) then corresponds to the image of g in X_G,b,μ^σ(k). The Dieudonné module of the p-divisible group at this point is given by M=gD_W⊆ D_K, and the image of Verschiebung is (F^-1p)M=g·pμ(p^-1)D_W.

Suppose x_0∈RZ(k) corresponds to the special lattice L under (<ref>).
Let M=𝔻(X_0)⊂ D_K be the Dieudonné module of the p-divisible group X_0 corresponding to x_0. Then we have (cf. <cit.>)

L = {v∈ V_K : v·(F^-1p)M ⊂ (F^-1p)M}, ΦL = {v∈ V_K : v·M ⊂ M}.

Here we view V_K⊂End_K(D_K) as in <ref>.

§.§ Special lattices and vertex lattices

For any vertex lattice Λ, the bijection (<ref>) induces a bijection

p^ℤ\RZ_Λ(k) ≅ {special lattices L⊆ V_K: Λ_W^∨⊆ L⊆Λ_W} = {special lattices L⊆ V_K: Λ_W^∨⊆ L}.

Sending a special lattice L to ℒ:=L/Λ_W^∨⊆Ω gives a bijection between the right hand side of (<ref>) and S_Λ(k), which is the effect of the isomorphism (<ref>) on k-points.

For each special lattice L⊆ V_K, there is a unique minimal vertex lattice Λ(L)⊆ V_K^Φ such that

Λ(L)_W^∨⊆ L⊆Λ(L)_W.

In fact, let L^(r)=L+Φ(L)+⋯+Φ^r(L). Then there exists a unique integer 1≤ d≤ t_max/2 such that L^(i)⊊ L^(i+1) for i<d, and L^(d)=L^(d+1). The quotients L^(i+1)/L^(i) all have W-length 1 for i<d, and

Λ(L):=(L^(d))^Φ⊆ V_K^Φ

is a vertex lattice of type 2d with Λ(L)^∨=L^Φ. Notice that Λ(L)_W is the smallest Φ-invariant lattice containing L, and Λ(L)^∨_W is the largest Φ-invariant lattice contained in L. It follows that the element of RZ(k) corresponding to a special lattice L lies in RZ_Λ if and only if Λ(L)⊆Λ, and it lies in BT_Λ if and only if Λ(L)=Λ. Thus we have the bijection

p^ℤ\BT_Λ(k) ≅ {special lattices L: Λ(L)=Λ}.

§.§ Deligne–Lusztig varieties

For any vertex lattice Λ, by <cit.>, p^ℤ\BT_Λ is a smooth quasi-projective variety of dimension t_Λ/2-1, isomorphic to a disjoint union of two Deligne–Lusztig varieties X_B(w^±) associated to two Coxeter elements w^± in the Weyl group of SO(Ω_0). Here Ω_0:=Λ/Λ^∨ is the quadratic space over 𝔽_p defined in Definition <ref>. In particular, the k-variety p^ℤ\BT_Λ only depends on the quadratic space Ω_0.

Let us recall the definition of X_B(w^±). Let d=t_Λ/2. Let ⟨·,·⟩ be the bilinear pairing on Ω_0. Since Ω_0 is a non-degenerate non-split quadratic space over 𝔽_p (<ref>), one can choose a basis e_1,…,e_d,f_d,…,f_1 of Ω such that ⟨e_i,f_i⟩=1, all other pairings between the basis vectors are 0, and Φ fixes e_i,f_i for i=1,…,d-1 and interchanges e_d with f_d. This choice of basis gives a maximal Φ-stable torus T⊆SO(Ω) (diagonal under this basis), and a Φ-stable Borel subgroup B⊇T as the common stabilizer of the two complete isotropic flags

ℱ^±: ⟨e_1⟩⊆⟨e_1,e_2⟩⊆⋯⊆⟨e_1,…,e_d-1,e_d^±⟩,

where e_d^+:=e_d and e_d^-:=f_d. Let s_i (i=1,…,d-2) be the reflection e_i↔e_i+1, f_i↔f_i+1, and let t^± be the reflection e_d-1↔e_d^±, f_d-1↔e_d^∓. Then the Weyl group W(T)=N(T)/T is generated by s_1,⋯,s_d-2,t^+,t^-. We also know that W(T) sits in a split exact sequence

0→(ℤ/2ℤ)^d-1→W(T)→S_d→0.

Since Φ fixes the s_i and swaps t^+ and t^-, the d-1 elements s_1,…,s_d-2,t^+ (resp. s_1,…,s_d-2,t^-) form a set of representatives of the Φ-orbits of the simple reflections. Therefore

w^±:=t^∓s_d-2⋯s_2s_1∈W(T)

are Coxeter elements of minimal length. The Deligne–Lusztig variety associated to B and the Coxeter element w^± is defined to be

X_B(w^±):={g∈SO(Ω)/B: inv(g,Φ(g))=w^±},

where inv(g,h)∈B\SO(Ω)/B≅W(T) is the relative position between the two Borel subgroups gBg^-1 and hBh^-1. The variety X_B(w^±) has dimension d-1. Under the map g↦gℱ^±, the disjoint union X_B(w^+)∐X_B(w^-) can be identified with the variety of complete isotropic flags

ℱ: ℱ_1⊆ℱ_2⊆⋯⊆ℱ_d

such that ℱ_i=ℱ_i-1+Φ(ℱ_i-1) and dim_k(ℱ_d+Φ(ℱ_d))=d+1. The two components are interchanged by an orthogonal transformation of determinant -1.
Notice that such a flag ℱ is determined by the isotropic line ℱ_1 via

ℱ_i=ℱ_1+Φ(ℱ_1)+⋯+Φ^i-1(ℱ_1),

and is also determined by the Lagrangian ℱ_d via

ℱ_i=ℱ_d∩Φ(ℱ_d)∩⋯∩Φ^d-i(ℱ_d).

The bijection (<ref>) induces a bijection

p^ℤ\BT_Λ(k) ≅ X_B(w^+)(k)∐X_B(w^-)(k)

by sending a special lattice L with Λ(L)=Λ to the flag determined by the Lagrangian ℱ_d=L/Λ^∨_W. This bijection is the restriction of the isomorphism (<ref>) to k-points, and we obtain the desired isomorphism

p^ℤ\BT_Λ ≅ X_B(w^+)∐X_B(w^-).

§.§ Special cycles

For an m-tuple 𝐯=(v_1,…,v_m) of vectors in V_K^Φ, define its fundamental matrix T(𝐯)=(⟨v_i,v_j⟩)_i,j=1,…,m. We define the special cycle 𝒵(𝐯)⊆RZ to be the locus where ρ∘v_i∘ρ^-1∈End(X), i.e., where all the quasi-endomorphisms ρ∘v_i∘ρ^-1 lift to actual endomorphisms of X (i=1,…,m). Similarly to Definition <ref>, 𝒵(𝐯) is a closed formal subscheme of RZ, which is the intersection of RZ with the analogously defined cycle inside RZ(𝕏_0,λ_0). Since 𝒵(𝐯) only depends on the ℤ_p-submodule span_ℤ_p(𝐯) of V_K^Φ, we also write 𝒵(span_ℤ_p(𝐯)).

Let x_0∈RZ(k) correspond to L under (<ref>). Let 𝐯 be an arbitrary ℤ_p-submodule of V_K^Φ. By Remark <ref> we know that x_0∈𝒵(𝐯) if and only if 𝐯⊂ΦL, if and only if 𝐯⊂ΦL∩L (as 𝐯 is Φ-invariant).

When m=n and T(𝐯) is non-singular, we obtain a lattice

L(𝐯)=ℤ_pv_1+⋯+ℤ_pv_n⊆ V_K^Φ.

By the Cartan decomposition, T(𝐯)∈GL_n(ℤ_p)·diag(p^r_1,p^r_2,⋯,p^r_n)·GL_n(ℤ_p) for a unique non-increasing sequence of integers r_1≥⋯≥r_n. Note that if we view the matrix T(𝐯)^-1 as a linear operator V_K^Φ→V_K^Φ using the basis 𝐯, it sends 𝐯 to the dual basis of 𝐯; in particular it sends any ℤ_p-basis of L(𝐯) to a ℤ_p-basis of L(𝐯)^∨. Therefore the tuple (r_1,⋯,r_n) is characterized by the condition that there is a basis e_1,…,e_n of L(𝐯) such that p^-r_1e_1,…,p^-r_ne_n form a basis of L(𝐯)^∨. From this characterization we also see that the tuple (r_1,⋯,r_n) is an invariant depending only on the lattice L(𝐯).

We say 𝐯 is minuscule if T(𝐯) is non-singular and r_1=1, r_n≥0.

Suppose m=n and T(𝐯) is non-singular. Then 𝐯 is minuscule if and only if L(𝐯)^∨ is a vertex lattice. In this case by definition 𝒵(𝐯)=RZ_L(𝐯)^∨.

§ THE INTERSECTION PROBLEM AND THE POINT-COUNTING FORMULA

§.§ The GSpin Rapoport–Zink subspace

From now on we assume n≥4. Suppose the last basis vector x_n∈V has norm 1. Then the quadratic subspace of dimension n-1

V^♭=ℤ_px_1+⋯+ℤ_px_n-1

is also self-dual. Let G^♭=GSpin(V^♭). Analogously we define the element

b^♭=x_3(p^-1x_1+x_2)∈G^♭(ℚ_p)

and the cocharacter

μ^♭: 𝔾_m→G^♭, t↦t^-1x_1x_2+x_2x_1.

As in <ref>, we have an associated GSpin Rapoport–Zink space

RZ^♭=RZ(G^♭, b^♭, μ^♭, C(V^♭)).

The embedding V^♭↪V induces an embedding of Clifford algebras C(V^♭)↪C(V) and a closed embedding of group schemes G^♭↪G over ℤ_p, which maps b^♭ to b and μ^♭ to μ.
Thus by the functoriality of Rapoport–Zink spaces (<cit.>), we have a closed immersion

δ: RZ^♭↪RZ

of formal schemes over W.

§.§ Relation with the special divisor 𝒵(x_n)

For compatible choices of symplectic forms ψ^♭ on C(V^♭) and ψ on C(V), the closed embedding of group schemes GSp(C(V^♭),ψ^♭)↪GSp(C(V),ψ) induces a closed immersion of symplectic Rapoport–Zink spaces (<ref>)

ϕ: RZ(𝕏_0^♭,λ_0^♭)↪RZ(𝕏_0,λ_0).

Since we have a decomposition of GSp(C(V^♭),ψ^♭)-representations

C(V)≅C(V^♭)⊕C(V^♭)x_n,

we know that the moduli interpretation of ϕ is given by sending a triple (X^♭,ρ^♭,λ^♭) to the p-divisible group X=X^♭⊕X^♭ with the quasi-isogeny ρ=ρ^♭⊕ρ^♭ and polarization λ=λ^♭⊕λ^♭. By the functoriality of Rapoport–Zink spaces (<cit.>), we have a commutative diagram of closed immersions

RZ^♭ --δ--> RZ
  |           |
  v           v
RZ(𝕏_0^♭,λ_0^♭) --ϕ--> RZ(𝕏_0,λ_0).

Here the two vertical arrows are induced by the closed immersions GSpin(V^♭)↪GSp(C(V^♭),ψ^♭) and GSpin(V)↪GSp(C(V),ψ) (<ref>).

Diagram (<ref>) is Cartesian, i.e., we have δ(RZ^♭)=ϕ(RZ(𝕏_0^♭,λ_0^♭))∩RZ inside RZ(𝕏_0,λ_0).

By flat descent, to show that the closed formal subschemes on the two sides of (<ref>) agree, it suffices to show that they have the same k-points and the same formal completion at every k-point (cf. <cit.>). The claim then follows from the observation that both the k-points and the formal completions have purely group-theoretic descriptions. In fact, the k-points of RZ^♭=RZ_G^♭, RZ(𝕏_0^♭,λ_0^♭)=RZ_H and RZ=RZ_G admit group-theoretic descriptions as the affine Deligne–Lusztig sets (<ref>) associated to the groups G^♭=GSpin(V^♭), H=GSp(C(V^♭),ψ^♭) and G=GSpin(V) respectively. Since G^♭=H∩G inside GSp(C(V)), we know that both sides of (<ref>) have the same k-points. Fix a k-point x∈RZ^♭(k). By <cit.>, the formal completion RZ_G^♭,x can be identified with U_G^♭^μ_x,∧, where μ_x: 𝔾_m,W→G^♭_W gives a filtration that lifts the Hodge filtration for x, U_G^♭^μ_x⊆G^♭ is the unipotent radical of the opposite parabolic subgroup defined by μ_x (<cit.>), and U_G^♭^μ_x,∧ is its formal completion along its identity section over W. Similarly, we can identify RZ_H,x and RZ_G,x with U_H^μ_x,∧ and U_G^μ_x,∧. Again because G^♭=H∩G, we know that the formal completions at x of both sides of (<ref>) agree inside U_GSp(C(V))^μ_x,∧.

δ(RZ^♭)=𝒵(x_n).

Let X^♭ be the universal p-divisible group over RZ^♭ and ρ^♭ the universal quasi-isogeny. It follows from the commutative diagram (<ref>) that the image of (X^♭,ρ^♭) under δ is given by the p-divisible group (X^♭⊕X^♭, ρ^♭⊕ρ^♭). Since x_n has norm 1, right multiplication by x_n swaps the two factors C(V^♭) and C(V^♭)x_n. It follows that the quasi-endomorphism

(ρ^♭⊕ρ^♭)∘x_n∘(ρ^♭⊕ρ^♭)^-1: (X^♭⊕X^♭)→(X^♭⊕X^♭)

(uniquely determined by the rigidity of quasi-isogenies) simply swaps the two factors, and hence is an actual endomorphism (namely, the swap) of X^♭⊕X^♭. By Definition <ref> of 𝒵(x_n), we have δ(RZ^♭)⊆𝒵(x_n).

Conversely, over 𝒵(x_n) the universal p-divisible group X admits an action of C(x_n)^op⊗C(V), where C(x_n) is the Clifford algebra of the rank one quadratic space ℤ_px_n. Notice

C(x_n)^op⊗C(V)≅(C(x_n)^op⊗C(x_n))⊕(C(x_n)^op⊗C(V^♭)).

It follows that over 𝒵(x_n) the universal p-divisible group X admits an action of C(x_n)^op⊗C(x_n), which is isomorphic to the matrix algebra M_2(ℤ_p). The two natural idempotents of M_2(ℤ_p) then decompose X as a direct sum of the form X^♭⊕X^♭. Hence 𝒵(x_n)⊆ϕ(RZ(𝕏_0^♭,λ_0^♭))∩RZ.
The latter is equal to δ(RZ^♭) by (<ref>), and hence 𝒵(x_n)⊆δ(RZ^♭).

In the following we will only use the inclusion δ(RZ^♭)⊆𝒵(x_n).

§.§ Arithmetic intersection of GGP cycles

The closed immersion δ induces a closed immersion of formal schemes

(𝕀, δ): RZ^♭→RZ^♭×_W RZ.

Denote by Δ the image of (𝕀,δ), which we call the GGP cycle. The embedding V^♭↪V also induces an embedding of quadratic spaces V_K^♭,Φ↪V_K^Φ, and hence we can view

J_b^♭=GSpin(V_K^♭,Φ)↪J_b

as an algebraic subgroup over ℚ_p. For any g∈J_b(ℚ_p), we obtain a formal subscheme

gΔ:=(𝕀×g)Δ⊆RZ^♭×_W RZ,

via the action of g on RZ. Our goal is to compute the arithmetic intersection number

⟨Δ, gΔ⟩,

when g is regular semisimple and minuscule.

We say g∈J_b(ℚ_p) is regular semisimple if the n-tuple 𝐯(g):=(x_n, gx_n,…, g^n-1x_n) forms a ℚ_p-basis of V_K^Φ. Equivalently, the fundamental matrix T(g):=T(𝐯(g)) is non-singular (Definition <ref>). Equivalently, the stabilizer of g, for the conjugation action of the subgroup J_b^♭, lies in the center (≅𝔾_m) of J_b^♭. We say g is minuscule if 𝐯(g) is minuscule (Definition <ref>).

§.§ Fixed points

Let g∈J_b(ℚ_p) and let RZ^g⊆RZ be the fixed locus of g. Then by definition we have

Δ∩gΔ ≅ δ(RZ^♭)∩RZ^g.

Let g∈J_b(ℚ_p) be regular semisimple. We define the lattice

L(g):=ℤ_px_n+⋯+ℤ_pg^n-1x_n⊆V_K^Φ.

Inside RZ, both the formal subschemes RZ^g and δ(RZ^♭) are stable under p^ℤ. Moreover, under the bijection (<ref>), we have:

* p^ℤ\δ(RZ^♭(k)) ≅ {L=L^♭⊕Wx_n: L^♭⊆V_K^♭ a special lattice}.
* p^ℤ\δ(RZ^♭(k)) ≅ {special lattices L: x_n∈L}.
* p^ℤ\RZ^g(k) ≅ {special lattices L: gL=L}.
* p^ℤ\(δ(RZ^♭(k))∩RZ^g(k)) ≅ {special lattices L: gL=L, L⊇L(g)_W}.

Since p^ℤ is central in J_b(ℚ_p), we know RZ^g is stable under p^ℤ. The morphism δ: RZ^♭→RZ is equivariant with respect to the natural inclusion J_b^♭(ℚ_p)→J_b(ℚ_p), and the morphism J_b^♭→J_b restricts to the identity between the central 𝔾_m of J_b^♭ and that of J_b. It follows that δ is equivariant for the p^ℤ-action, and so δ(RZ^♭) is stable under p^ℤ. We now prove the statements (1) to (4).

* For a point L^♭∈p^ℤ\RZ^♭(k), we can write L^♭=h^♭μ^♭(p^-1).V_W^♭⊆V_K^♭ for some h^♭∈G^♭(K). Its image under δ is then given by L=hμ(p^-1).V_W⊆V_K, where h is the image of h^♭ in G(K). By V=V^♭⊕ℤ_px_n and the compatibility between h, μ and h^♭, μ^♭, we know that L=L^♭⊕Wx_n.
* Suppose L is a special lattice with x_n∈L. Since x_n has norm 1, we know that L=L'⊕Wx_n is the direct sum of Wx_n and its orthogonal complement L' in L. One can check that L'⊆V_K^♭ is also a special lattice. This finishes the proof in view of item (1).
* This is clear since RZ^g(k) is the fixed locus of g.
* For a point L∈p^ℤ\(δ(RZ^♭(k))∩RZ^g(k)), by items (1) and (3) we have L=L^♭⊕Wx_n and gL=L. It follows from x_n∈L that gx_n,…,g^n-1x_n∈L, and so L⊇L(g)_W. Conversely, if a point L satisfies gL=L and L⊇L(g)_W, then L∈p^ℤ\(δ(RZ^♭(k))∩RZ^g(k)) by items (2) and (3).

We say a vertex lattice Λ is a g-vertex lattice if gΛ=Λ and Λ⊆L(g)^∨. Denote the set of all g-vertex lattices by VL(g). In general, if a vertex lattice Λ satisfies gΛ=Λ, then g induces an action on Ω_0=Λ/Λ^∨, which further induces an action g̅ on RZ_Λ^red and on BT_Λ. We denote the fixed locus of g̅ on BT_Λ by BT_Λ^g̅.

p^ℤ\(δ(RZ^♭)∩RZ^g)(k)=∐_Λ∈VL(g) p^ℤ\BT_Λ^g̅(k).

By Lemma <ref>, it suffices to show that the k-points of the right hand side are in bijection with the special lattices L such that gL=L and L⊇L(g)_W. Notice that any special lattice L is self-dual, so the condition L⊇L(g)_W is equivalent to the condition L⊆L(g)_W^∨.
Since Λ(L)_W is the minimal Φ-invariant lattice containing L (<ref>), and L(g)_W^∨ is Φ-invariant, the condition L⊆L(g)_W^∨ is equivalent to the condition Λ(L)⊆L(g)^∨. The result now follows by taking g̅-invariants and g-invariants of the two sides of the bijection (<ref>).

§.§ Fixed points in a Bruhat–Tits stratum

Let Λ be a vertex lattice and Ω_0=Λ/Λ^∨ (<ref>). By the isomorphism (<ref>), p^ℤ\BT_Λ is a disjoint union of two isomorphic Deligne–Lusztig varieties X_B(w^±) associated to the Coxeter elements w^± for SO(Ω_0). Write X:=X_B(w^±). To compute p^ℤ\BT_Λ^g̅, it suffices to compute the g̅-fixed points X^g̅.

We say a semisimple element g̅∈SO(Ω_0) is regular if Z^∘(g̅), the identity component of the centralizer of g̅ in SO(Ω_0), is a (necessarily maximal) torus. (Note the difference with Definition <ref>; this conflict in the usage of the word "regular" should hopefully not cause confusion.)

Let Λ be a vertex lattice and let g̅∈SO(Ω_0)(𝔽_p).

* X^g̅ is non-empty if and only if g̅ is semisimple and contained in a maximal torus of Coxeter type.
* X^g̅ is non-empty and finite if and only if g̅ is regular semisimple and contained in a maximal torus of Coxeter type. In this case, the cardinality of X^g̅ is given by t_Λ/2.

Recall that a maximal torus T' is of Coxeter type if T'=hTh^-1 for some h∈SO(Ω_0) such that h^-1Φ(h) lifts to a Coxeter element w in the Weyl group W(T)=N(T)/T. In other words, T' is conjugate to T over k, but its Frobenius structure is given by w·Φ. For the Coxeter elements w=w^± constructed in <ref>, an element (λ_1,…,λ_d,λ_d^-1,…,λ_1^-1) of T(k) is fixed by w·Φ if and only if

(λ_1,λ_2,…,λ_d-1,λ_d)=(λ_d^∓p, λ_1^p,…,λ_d-2^p, λ_d-1^±p).

It follows that a semisimple element g̅∈SO(Ω_0)(𝔽_p) is contained in a maximal torus of Coxeter type if and only if the eigenvalues of g̅ on Ω_0⊗k belong to a single Galois orbit.

* Suppose X^g̅ is non-empty. Then it is a general fact about Deligne–Lusztig varieties that g̅ must be semisimple (<cit.>). Let T(w)⊆SO(Ω_0) be a torus of Coxeter type (associated to w=w^+ or w^-) and let B(w)⊇T(w) be a Borel subgroup. Assume g̅ is semisimple. Then we know from <cit.> that X^g̅ is a disjoint union of Deligne–Lusztig varieties X_T'⊆B' for the group G'=Z^∘(g̅) and the pairs

(T',B')=(hT(w)h^-1, hB(w)h^-1∩G'),

where h runs over the classes in G'(𝔽_p)\SO(Ω_0)(𝔽_p) such that g̅∈hT(w)h^-1. Therefore X^g̅ is non-empty if and only if there exists h∈SO(Ω_0)(𝔽_p) such that g̅∈hT(w)h^-1, if and only if g̅ is contained in a maximal torus of Coxeter type (as T(w) is of Coxeter type).
* By part (1), X^g̅ is moreover finite if and only if all the X_T'⊆B' are zero-dimensional, if and only if all the B'=hB(w)h^-1∩G' are tori. This happens exactly when G'=Z^∘(g̅) itself is a torus, i.e., when g̅ is regular. In this case, G' is a maximal torus of Coxeter type in SO(Ω_0), and the cardinality of X^g̅ is equal to the cardinality of N(T(w))(𝔽_p)/T(w)(𝔽_p). The latter group is isomorphic to (N(T(w))/T(w))^Φ by Lang's theorem, and hence to the Φ-twisted centralizer of w in the Weyl group W(T)=N(T)/T:

Z_Φ(w):={x∈W(T): xw=wΦ(x)}.

The cardinality of Z_Φ(w) is known as the Coxeter number of the group SO(Ω_0), which is equal to d=t_Λ/2 since SO(Ω_0) is a non-split even orthogonal group (<cit.>).

§.§ Point-counting in the minuscule case

Let g∈J_b(ℚ_p) be regular semisimple and minuscule. Then Ω_0(g):=L(g)^∨/L(g) is an 𝔽_p-vector space (see Definition <ref>), and hence L(g)^∨ is a vertex lattice.
If RZ^g is non-empty, then g fixes some vertex lattice, and so we know that the characteristic polynomial of g has ℤ_p-coefficients. It follows that L(g) is a g-stable lattice, from which it also follows easily that L(g)^∨ is g-stable. Hence by definition L(g)^∨ is a g-vertex lattice. The induced action of g on Ω_0(g), denoted by g̅∈SO(Ω_0(g))(𝔽_p), makes Ω_0(g) a g̅-cyclic 𝔽_p-vector space. It follows that the minimal polynomial of g̅ is equal to its characteristic polynomial.

From now on we assume RZ^g is non-empty. Let g̅∈SO(Ω_0(g))(𝔽_p) be as in Remark <ref>.

For any polynomial R(T), we define its reciprocal to be

R^*(T):=T^deg R(T)·R(1/T).

We say R(T) is self-reciprocal if R(T)=R^*(T).

Let P(T)∈𝔽_p[T] be the characteristic polynomial of g̅∈SO(Ω_0(g)). Then P(T) is self-reciprocal. For any monic irreducible factor Q(T) of P(T), we denote by m(Q(T)) the multiplicity of Q(T) appearing in P(T).

Assume RZ^g is non-empty. Then p^ℤ\(δ(RZ^♭)∩RZ^g)(k) is non-empty if and only if P(T) has a unique self-reciprocal monic irreducible factor Q(T) such that m(Q(T)) is odd. In this case, p^ℤ\(δ(RZ^♭)∩RZ^g)(k) is finite and has cardinality

deg Q(T)·∏_R(T)(1+m(R(T))),

where R(T) runs over all non-self-reciprocal monic irreducible factors of P(T).

By Proposition <ref>, we know that p^ℤ\(δ(RZ^♭)∩RZ^g)(k) is non-empty if and only if p^ℤ\BT_Λ^g̅ is non-empty for some Λ∈VL(g). For any Λ∈VL(g), by definition we have a chain of inclusions of lattices

L(g)⊆Λ^∨⊆Λ⊆L(g)^∨,

which induces a filtration of 𝔽_p-vector spaces

0⊆Λ^∨/L(g)⊆Λ/L(g)⊆Ω_0(g).

It follows that the map Λ↦Λ^∨/L(g) gives a bijection

VL(g) ≅ {totally isotropic g̅-invariant subspaces U⊆Ω_0(g)}.

By the bijection (<ref>), VL(g) is non-empty if and only if there is a totally isotropic g̅-invariant subspace U of Ω_0(g). Such a subspace U induces a filtration

0⊆U⊆U^⊥⊆Ω_0(g).

Since U and U^⊥ are g̅-invariant, we obtain a decomposition of the characteristic polynomial

P(T)=P_1(T)Q(T)P_2(T),

where P_1(T), Q(T), P_2(T) are respectively the characteristic polynomials of g̅ acting on the graded pieces U, U^⊥/U and Ω_0(g)/U^⊥. Notice that the non-degenerate quadratic form on Ω_0(g) identifies Ω_0(g)/U^⊥ with the linear dual of U, from which we know that P_2(T)=P_1^*(T). Similarly, we know that Q(T)=Q^*(T), i.e., Q(T) is self-reciprocal.

Let Λ=L(g)+U^⊥ be the g-vertex lattice corresponding to U under the bijection (<ref>), and let Ω_0=Λ/Λ^∨ and g̅_0∈SO(Ω_0)(𝔽_p) be the induced action of g̅ on Ω_0. By Remark <ref>, the minimal polynomial of g̅ is equal to its characteristic polynomial P(T). Thus the minimal polynomial of g̅_0 is equal to its characteristic polynomial Q(T) under the decomposition (<ref>).

If g̅_0 is semisimple, then its eigenvalues are distinct. If g̅_0 is further contained in a torus of Coxeter type, then its eigenvalues belong to a single Galois orbit (Remark <ref>), so Q(T) is irreducible. Conversely, if Q(T) is irreducible, then clearly g̅_0 is semisimple and contained in a torus of Coxeter type. Hence g̅_0 is semisimple and contained in a torus of Coxeter type if and only if Q(T) is irreducible. Therefore, by Proposition <ref> (1), BT_Λ^g̅_0 is non-empty if and only if Q(T) is irreducible. In this case, g̅_0 is in fact regular semisimple, and the cardinality of p^ℤ\BT_Λ^g̅_0 is equal to 2·#X^g̅_0 (due to the two connected components), which is equal to dim_𝔽_pΩ_0=deg Q(T) by Proposition <ref> (2).

Since P_2(T)=P_1^*(T), we know that the multiplicity of R(T) in P_1(T)P_2(T) is even for any self-reciprocal factor R(T).
Hence Q(T) is the unique self-reciprocal monic irreducible factor of P(T) such that m(Q(T)) is odd. Finally, the factorizations(<ref>) with P_2(T)=P_1^*(T) corresponds bijectively to the filtrations (<ref>). The proof is now finished by noticing that the number of such factorizations is exactly given by ∏_R(T)(1+m(R(T))), where R(T) runs over all monic irreducible factors of P(T) such that R(T) R^*(T).§ THE REDUCEDNESS OF MINUSCULE SPECIAL CYCLES §.§ Review of Faltings's explicit deformation theoryWe review Faltings's explicit deformation theory of p-divisible groups <cit.> 7, following <cit.> and <cit.>. We allow k to be an arbitrary perfect field of characteristic p>2. We use the notations X_0, (s_α), G, μ to denote more general objects: * X_0 is an arbitrary p-divisible group over k. * Define M_0 : = 𝔻 (X_0) (W). It is equipped with the Frobenius, denoted by ϕ_M_0. We will interchangeably think of ϕ_M_0 as a W-linear map ϕ_W^* M_0 → M_0 or as a ϕ_W-semi-linear map M_0 → M_0, where ϕ_W is the Frobenius on W. This convention about semi-linear maps will also be adopted in the following for other Frobenii. * G ⊂(M_0) is a reductive subgroup (over W) cut out by a family of tensors (s_α). * We assume (s_α) are ϕ_M_0-invariant, when viewed as tensors over M_0 [1/p]. (cf. <cit.> Footnote 6 on P 17.) * We assume the Hodge filtration on 𝔻(X_0) (k) = M_0 ⊗ _W k is G⊗ _W k-split, in the sense that it admits a splitting whose corresponding cocharacter μ_0 : _m →(M_0 ⊗ _W k) factors through G⊗ _W k. We fix the choice of such a μ_0. * Fix the choice of a lift μ : _m → G of μ_0. * Let ^1 M_0 ⊂ M_0 be the filtration defined by μ. This gives rise to a p-divisible group X_0,W over W lifting X_0. Let U^o (resp. U^o_G) be the opposite unipotent in (M_0) (resp. G) defined by μ. Let R (resp. R_G) be the complete local ring at the identity section of U^o (resp. U^0_G). Let ι: R → R_G be the natural quotient map.Choose an isomorphism of W-algebras:_R: RW [[ t_1,⋯, t_n]].This defines a Frobenius ϕ_R on R, by the usual Frobenius on W and t_i ↦ t_i ^p. Note that ϕ_R depends on the choice of _R. Define M = M _0⊗ _W R. Define the Frobenius ϕ_M by the composition: M = M_0 ⊗ _W RMM,where u ∈ U^o(R) ⊂ (M) = (M_0 ⊗ _W R) is the tautological section[Write this out explicitly].Define ^1 M = ^1 M_0 ⊗ _W R. At this point we have obtained a triple (M, ^1 M, ϕ_M). In general, there is at most one connection ∇ on M such that ϕ_M is horizontal. If exists, it is integrable and topologically nilpotent. (cf. <cit.> 4.3.1., 4.3.2.)Now Faltings's theory implies that such a connection ∇ exists for (M, ϕ_M), and the resulting quadruple (M, ^1 M, ϕ_M, ∇) corresponds to a p-divisible group X _R over R[In general there is an equivalence from the category of such quadruples to the category of p-divisible groups over R, cf. <cit.> 4.1.], such that X_R ⊗ _R R/(t_1,⋯, t_n) is canonically isomorphic to X_0,W and such that X_R is a versal deformation of X_0. See <cit.> 4.5 for details.Before we proceed, let's recall some terminology about connections mentioned in the above paragraph, and fix some notations: * a connection ∇ on M means a map ∇: M → M⊗_R Ω̂^1_R satisfying the usual linearity and Leibniz conditions. Here Ω̂^1_R : = _eΩ^1_R/p^e is the "module of p-adically continuous 1-differentials" on R. For our specific R, we have Ω̂_R^1 = Ω_R/W ^1 ≅ W[[t]] dt_1 ⊕⋯⊕ W[[t]] d t_n. To check this, it suffices to check that Ω^1_R/p^e = Ω^1_(R/p^e) / (W/p^e). 
Since k = W/pW is perfect, W/p^e as an abelian group is generated by elements of the form w^p^e, w∈ W/p^e. It follows that for any w ∈ W/p^e, we have dw = 0 in Ω_R/p^e. This proves the claim that Ω̂_R = Ω_R/W. * If ∇ is a connection on M as above, write Θ_i : = ∇_∂/∂ t_i : M → M. Thus we have∇(m) = ∑_i=1 ^n Θ_i (m) dt_i,  ∀ m ∈ M.Integrability of ∇ is equivalent to the condition Θ_i Θ_j = Θ_j Θ_i,  ∀ i,j. Topological nilpotence of ∇ is equivalent to the condition that for each 1≤ i ≤ n and each m ∈ M, there exists N∈ such that Θ_i^N' (m) ∈ pM, for all ∋ N' ≥ N. * Suppose ∇ is integrable and topological nilpotent. Define Θ_i as in the previous item. For a multi-index K= (K_1,⋯, K_n) ∈_≥ 0 ^⊕ n, write Θ_K: = Θ_1^K_1⋯Θ_n^K_n. Define the differential-equation-solving map: ξ: M_ 0 → M_0 ⊗ _W W[1/p] [[ t_1,⋯, t_n]]by the formula ξ (m) : = ∑_K∈_≥ 0 ^⊕ n Θ_K (m) (-t) ^K/K!.Now we claim that ξ sends m_0 ∈ M_0 to the unique element in the RHS which is horizontal under ∇ (where we use _R to view R as a subring of W[1/p] [[t_1,⋯, t_n]]), and which specializes to m_0 under t_i ↦ 0. In fact, that ξ(m_0) is horizontal follows from direct computation, and the uniqueness statement follows from the observation that ξ sends a W-basis of M_0 to a W[1/p] [[ t_1,⋯, t_n]]-basis of the RHS, which is true because the determinant of these vectors is an element in W[1/p] [[ t_1,⋯, t_n]] whose constant term is not zero in W[1/p]. Moreover, by the formula for ξ we see that ξ has image in M_0 ⊗ _W R', where R' is the subring of W[1/p] [[t]] defined as R' = ∑_K a_K t ^K | a_K ∈ W[1/p], K! a_K ∈ W. * Let ∇ be an arbitrary connection on M. Define Θ_i as before. We now explain what it means to say that ϕ_M is horizontal with respect to ∇. Write M̃: = M + p^-1^1 M. Thus ϕ_M induces an R-isomorphism ϕ_M : ϕ_R ^* M̃ M. We will write a typical element in ϕ_R^* M̃ = M̃⊗ _R, ϕ_R R as m⊗ r, with m∈M̃ and r ∈ R. The connection ∇ induces a connection ∇̃ on ϕ_R^* M̃, characterized by ϕ_R^* M̃⊗_R Ω̂_R^1 ∋∇̃( m⊗ 1) = ∑_i (Θ_i(m) ⊗ 1) ⊗ d (ϕ_R t_i) = ∑_i (Θ_i(m) ⊗ 1) ⊗ p t_i^p-1d t_i.The horizontality of ϕ_M is the requirement that under the isomorphism ϕ_M : ϕ_R^* M̃ M, the connections ∇̃ and ∇ on the two sides correspond. We choose an isomorphism of W-algebras _R_G: R_GW[[τ_1,⋯, τ_r]]and get a Frobenius ϕ_R_G on R_G similarly as in the discussion for R. We assume that _R and _R_G are chosen in such a way that the natural map ι: R → R_G is compatible with ϕ_R and ϕ_R_G. The quadruple (M, ^1 M , ϕ_M, ∇) specializes along ι: R → R_G to a quadruple (M_R_G, ^1 M_R_G, ϕ_M_R_G, ∇_M_R_G) over R_G in the naive way, where in particular ϕ_M_R_G is defined to be the compositionϕ_R_G ^* M_R_G = ϕ_R_G ^* ι^* M ≅ι^* ϕ_R^* M ι^* M = M_R_G .We remark that in general, without assuming that ιϕ_R_G = ϕ_R ι, we can still define the specialization of (M, ^1 M, ϕ_M,∇) along ι, but in the definition of ϕ_M_R_G we need to replace the canonical isomorphism ϕ_R_G ^* ι^* M ≅ι^* ϕ_R^* M in the above with an isomorphism whose definition uses ∇, cf. <cit.> P 143, or <cit.> 4.3, <cit.> P 16. In any case, the quadruple (M_R_G, ^1 M_R_G, ϕ_M_R_G, ∇_M_R_G) over R_G corresponds to the p-divisible group ι^* X_R = : X_R_G over R_G. Morally the pair (R_G, X_R_G) is a versal deformation of X_0 viewed as a p-divisible with addition G-structure. Note that there are mistakes in <cit.> 1.5.2 - 1.5.4, which are corrected in <cit.> Erratum. By the corrected statement in <cit.> 1.5.4 (cf. <cit.> E.1), ∇_M_R_G has coefficients in G. 
In particular, the tensors (ι^* (s_α⊗ 1)) on M_R_G are parallel with respect to ∇_M_R_G. Consider an arbitrary W-algebra homomorphism x: R → W. Consider X_x : = x^* X_R, which is a p-divisible group over W together with a canonical isomorphism X_x ⊗ _W k ≅ X_0. By Grothendieck-Messing theory, X_x corresponds to a filtration^1_x M_0 ⊂ M_0 = 𝔻(X_0) (W) lifting the filtration in M_0 ⊗ _W k. We describe ^1_x M_0 as follows: Recall that the differential-equation-solving map: ξ: M_ 0 → M_0 ⊗ _W W[1/p] [[ t_1,⋯, t_n]]has image in M_0 ⊗ _W R'. Since x : R= W[[t_1,⋯, t_n]] → W necessarily sends each t_i into the ideal pW, which has divided powers, we can compose to get a map(1⊗ x)∘ξ : M_0 → M_0,which we call the parallel transport map from t_i= 0 to t_i= x(t_i), denoted by g_x. We know that g_x is a W-module automorphism of M_0. We have ^1_x M_0 = g_x^-1^1 M_0. Consider the following morphism in the cristalline site of R/p: R = W[[t_1,⋯, t_n]] → R,  t_i ↦ t_i + x(t_i). This morphism together with the crystal structure of 𝔻(X_R/p) defines an identification j_x: M= 𝔻(X_R/p) (R) 𝔻(X_R/p) (R) = M,which we know is determined by parallel transport on M.The lemma now follows from the fact that the identification of𝔻 (X_0) (W)= 𝔻(X_R) (R)⊗ _R, t_i↦ 0 W = M_R ⊗ _R, t_i↦ 0 W = M_0with𝔻 (X_x) (W) = 𝔻(X_R)(R) ⊗_R,x W= M_R ⊗_R ,x W = M_0 ⊗ _W R ⊗ _R,x W = M_0,given by the natural identification 𝔻(X_0) (W) 𝔻(X_x) (W), is equal to j_x ⊗_R, t_i↦ 0 W. Suppose x : R→ W is a W-algebra homomorphism that factors through ι : R → R_G. Then _x^1 M_0 = g_x^-1^1 M_0, for some g_x ∈ G(W) ⊂(M_0).Since (ι^*( s_α⊗ 1)) are parallel under ∇_M_R_G = ι^* ∇, we know that the parallel transport map g_x discussed above fixes s_α. But that means g_x ∈ G(W).§.§ The analogue of a result of Madapusi Pera on special cycles Letbe an arbitrary [1/2]-algebra. Assumeis local. Let 𝐋 be a finite free -module equipped with the structure of a self-dual quadratic space over . By an isotropic line in 𝐋 we mean a direct summand of rank one on which the quadratic form is zero.We start with a general lemma on Clifford algebras. Letand 𝐋 be as in Definition <ref>. Let C(𝐋) be the associated Clifford algebra. Let ξ∈𝐋 be an -generator of an isotropic line. Let (ξ) be the kernel of the endomorphism of C(𝐋) given by left multiplication by ξ. Then for any v∈𝐋, left multiplication by v preserves (ξ) if and only if v is orthogonal to ξ. Assume v is orthogonal to ξ. Then vξ= -ξ v, so v preserves(ξ). Conversely, assume v preserves (ξ). Write q for the quadratic form and , the corresponding bilinear pairing. Since ξ is a direct summand of 𝐋, there exists an -module homomorphism 𝐋→ sending ξ to 1. Since 𝐋 is self-dual, we know that there exists ζ∈𝐋 representing such a homomorphism. Namely we haveζ, ξ = 1. It immediately follows that we have an -module direct sum 𝐋 = ξ^⊥⊕ζ. Replacing ζ by ζ - q(ζ)/2ξ, we may arrange that ζ is isotropic.We have q(ζ +ξ) = 2 ζ, ξ = 2,and in C(𝐋) we have q(ζ +ξ) = ζξ + ξζ.Hence in C(𝐋) we haveξζ + ζξ =2 Writev = v_1 + λζ,with v_1 ∈ξ^⊥ and λ∈. By the first part of the proof we know that v_1 preserves (ξ). Therefore λζ preserves (ξ). Note that ξ∈ (ξ) as ξ is isotropic. It follows that, in C(𝐋), 0 λζ (ξ) ξ (λζ) ξ(<ref>)λ (2-ζξ) ξξ 2λξ.This is possible only when λ =0, and hence we have v= v_1 ∈ξ^⊥. The next result is a Rapoport–Zink space analogue of <cit.> which is in the context of special cycles onShimura varieties. We only state a weaker analogue as it is sufficient for our need. The proof builds on loc. cit. too. 
We first introduce some definitions. Denote by y_00 the distinguished k-point ofcorresponding to 𝕏_0 and the identity quasi-isogeny. Let y_0 ∈ (k) be an arbitrary element. Let L be the special lattice corresponding to y_0 under (<ref>).When y_0 = y_00, we have Φ L = V_W (cf. the discussion below (<ref>)). In this case define ^1(Φ L) _k to be the one-dimensional subspace of V_k defined by the cocharacter μ of G_W and the representation G_k →(V_k). For general y_0, let g ∈ X_G, b ,μ^σ (k) be associated to y_0. Then Φ L = g V_W and g induces a map V_k → (Φ L)_k (cf. loc. cit.).Define ^1 (Φ L) _k to be the image of ^1 V_k under the last map.By our explicit choice of μ in <ref>, the submodule _p x_2 in V is of weight 1 with respect to μ, and ⊕_1≤ i ≤ n, i≠ 2_px _i is of weight 0 with respect to μ, so ^1 V_k = k x_2. ^1 (Φ L)_k is the orthogonal complement in (Φ L) _k of (L ∩Φ L) _k. However we will not need this description in the sequel. Let 𝒞 be the category defined as follows: * Objects in 𝒞 are triples (, → k, δ), whereis a local artinian W-algebra, → k is a W-algebra map, and δ is a nilpotent divided power structure on (→ k). * Morphisms in 𝒞 are W-algebra maps that are compatible with the structure maps to k and the divided power structures.In the following we will abuse notation to writefor an object in 𝒞. Let y_0 ∈(k) be an arbitrary element. Let 𝐯 be as in Definition <ref> such that the special cycle 𝒵: = 𝒵 (𝐯) contains y_0. In particular 𝐯⊂ L∩Φ L by Remark <ref>. Let _y_0 and 𝒵_y_0 be the formal completions ofand 𝒵 at y_0 respectively. For any ∈𝒞 there is a bijection f_: _y_0 ()such that the following properties hold. Here we equip (Φ L)_ with the -bilinear form obtained by extension of scalars of the W-bilinear form on Φ L. * f_ is functorial in ∈𝒞 in the following sense. Let ' ∈𝒞 be another object of 𝒞 and let ϕ : →' be a morphism in 𝒞. Then we have a commuting diagram. _y_0 () [d]^f_[r]_y_0 (') [d]^f_'(f_ ) [r] (f_')Here the top horizontal map is the natural map induced by ϕ, and the bottom horizontal map is given by base change along ϕ. * f_ restricts to a bijection f_, 𝐯:𝒵_y_0 () The existence and construction of the bijection f_ and the property (1) are consequences of <cit.> and the global construction ofin <cit.> using the integral model of theShimura variety. We explain this more precisely below. Consider = _U_p U^p,the canonical integral model over _(p) of the Shimura variety associated to theShimura datum associated to a quadratic space V_ over , at a suitable level U^p away from p and a hyperspecial level U_p at p. See <cit.> or <cit.> for more details on this concept. By <cit.>, we may assume that the following package of data: * the Shimura datum associated to V_, * the Kuga-Satake Hodge embedding (cf. <cit.>) of the Shimura datum into aShimura datum, * the chosen hyperspecial level at p, * an element x_00∈_U^p_U^p U_p (k), induces, in the fashion of <cit.>, the local unramified Shimura-Hodge datum that we used to define . Letbe the formal scheme over _p obtained from p-adic completion of , and let _W be the base change to W of . Then as in <cit.>, we have a morphism of formal schemes over W: Θ: →_W. We know that Θ maps y_00 to the k-point of _W induced by x_00.Moreover, letx_0: = Θ(y_0) ∈ _W (k) = (k)and let U be the formal completion ofat x_0 (or, what amounts to the same thing, the formal completion of _W at x_0). By the construction ofin <cit.>, we know that Θ induces an isomorphism _y_0U. In <cit.>, two crystals 𝐇_, 𝐋_ are constructed on _k. 
(In fact <cit.> works over _p, but we always base change from _p to k.) Here 𝐇_ is by definition the first relative crystalline cohomology of the Kuga-Satake abelian scheme over _k in the sense of loc. cit.[See footnote <ref>.]The specialization of 𝐇_ over k via x_00 is identified with the Dieudonné module C_W, which is the covariant Diedonné module of the p-divisible group 𝕏_0 considered in this article (and <cit.>) and the contravariant Diedonné module of the Kuga-Satake abelian variety at x_00 considered in <cit.>.[Due to different conventions, the Kuga-Satake abelian scheme (and p-divisible group) considered by Madapusi Pera in <cit.> is different from that considered by Howard-Pappas in <cit.>. In fact they are dual to each other.]Moreover, the embedding V ↪__p (C) has a cristalline realization, which is a sub-crystal 𝐋_ of (𝐇_). For details see <cit.>. Among others, 𝐋_ has the following structures: * Its specialization 𝐋_, x_0 to any x_0 ∈ (k), viewed as a W-module, has the structure of a W-quadratic space. * 𝐋_, x_0⊗_W k contains a canonical isotropic line ^1 (𝐋_, x_0⊗_W k). By the definition of Θ and the definition of the parametrization of (k) by the affine Deligne-Lusztig set (cf. <cit.>), we know that when y_0 ∈ (k) corresponds to the special lattice L under (<ref>), the following statements are true: *There is an isomorphism of Dieudonné modules (gC)_W 𝐇_, x_0. *There is a W-linear isometry (Φ L)_W 𝐋_,x_0 under which ^1 (Φ L)_k is identified with^1 (𝐋_, x_0⊗_W k). * We have a commutative diagram: Φ L@^(->[r][d]_W ((gC)_W) [d]𝐋_, x_0@^(->[r] _W(𝐇_, x_0),where * the right vertical map is induced by the map in <ref>. * the left vertical map is the map in <ref>. * the bottom horizontal map arises from the fact that 𝐋_ is a sub-crystal of (𝐇_).In the rest of the proof we make the identifications in <ref> and <ref> above and omit them from the notation. Abbreviate 𝐇: = 𝐇_ ,x_0 and 𝐋 := 𝐋_, x_0.Now in <cit.> Madapusi Pera constructs a bijection U() .Moreover by the construction given in loc. cit. the above bijection is functorial in ∈𝒞. We define f_ as the above bijection precomposed with the isomorphism Θ: _y_0 U. It remains to prove property (2). Note that 𝐇 = gC_W is the covariant Dieudonné module of the p-divisible group X_y_0 over k determined by y_0∈(k). Given y ∈ _y_0 () lifting y_0, by Grothendieck-Messing theory (for covariant Dieudonné modules) we know that y ∈𝒵_y_0 if and only if the image of 𝐯 in _ (𝐇_) stabilizes ^1 𝐇_⊂𝐇_, where ^1 𝐇_ is the Hodge filtration corresponding to the deformation from k toof the X_y_0 determined by y. Now, as is stated in the proof of <cit.>[Madapusi Pera defines ^1 𝐇_ using the contravariant Grothendieck-Messing theory of the p-divisible group of the Kuga-Satake abelian scheme in his sense, which is the same as the covariant Grothendieck-Messing theory of the p-divisible group over U transported via Θ from the universal p-divisible group overin the sense of Howard-Pappas.], we know that ^1 𝐇_ is the kernel in 𝐇_ of any -generator ξ of the isotropic line f_(y). Here ξ∈𝐋_ is viewed as an element of _(𝐇_). By Lemma <ref>, 𝐯 preserves ^1 𝐇_ = ξ if and only if 𝐯 is orthogonal to ξ (inside 𝐋_). Thus y ∈𝒵_y_0 if and only if f_ (y) is orthogonal to the image of 𝐯 in 𝐋_ = (Φ L) _. Consider the bijection f_, 𝐯 for =k. Since the source of this bijection is non-empty, it follows that ^1 (Φ L)_k is orthogonal to the image of 𝐯 in (Φ L) _k. This observation also follows from the Remark <ref> as 𝐯⊂ L ∩Φ L. 
§.§ Reducedness of minuscule special cycles Let Λ be a _p-lattice in V_K^Φ with p^i Λ⊂Λ^∨⊂Λ for some i ∈_≥ 1. (Equivalently, Λ^∨ has invariant (r_1,⋯, r_n) such that i≥ r_1≥ r_2≥⋯≥ r_n ≥ 0.) Then the special cycle 𝒵(Λ^∨) defined by Λ ^∨ has no (W/p^i+1)-points. In particular, taking i=1 we see that _Λ (W/p^2) = ∅ for any vertex lattice Λ, or equivalently 𝒵 (𝐯) (W/p^2) = ∅ for any minuscule 𝐯. Suppose there exists x∈𝒵(Λ^∨ ) (W/p^i+1). Let x_0 ∈𝒵(Λ^∨ ) (k) be induced by x under the reduction map W/p^i+1→ W/p =k. Under (<ref>) x_0 determines a special lattice L. By Remark <ref>, Λ ^∨_W ⊂ L ∩Φ L. Note that W/p^i+1→ k is a surjection whose kernel admits nilpotent divided powers. By Theorem <ref>, the existence of the lift x of x_0 implies that there exists an isotropic line ℒ(over W/p^i+1) in (Φ L)_ W/p^i+1 lifting ^1 (Φ L)_k and such that ℒ is orthogonal to the image of Λ^∨ in (Φ L)_W/p^i+1. Let l ∈Φ L be a lift of a generator of ℒ. Then l,λ∈ p^i+1 W for all λ∈Λ^∨. It follows that p^-(i+1) l ∈Λ_W. Hence p^-1l ∈ p^i Λ_W ⊂ (Λ^∨)_W ⊂Φ L, i.e. l ∈ p Φ L. This contradicts with the fact that ℒ maps to a non-zero line in (Φ L)_k. §.§.§ Let u ∈ V_K^Φ -0. Suppose x_0 ∈𝒵(u)(k). Let T= k[ϵ]/ϵ^2 be the ring of dual numbers over k. We equip T with the map T→ k, ϵ↦ 0, which has its kernel (ϵ) admitting nilpotent divided powers (in a unique way). Thus Theorem <ref> can be applied to =T. Let 𝒯_x_0_k and 𝒯_x_0𝒵(u)_k be the tangent spaces at x_0 to _k = ×_ Wk and to 𝒵(u) _k = 𝒵(u) × _ W k respectively. We will always take the point of view that 𝒯_x_0_k is the preimage of x_0 under the reduction map (T) →(k). Similarly for 𝒯_x_0𝒵(u) _k. We compute 𝒯_x_0_k and 𝒯_x_0𝒵(u)_k explicitly in the following. The result is given in Corollary <ref>. Let L be the special lattice associated to x_0 under (<ref>). Since x_0 ∈𝒵(u) (k), we have u ∈ L ∩Φ L by Remark <ref>. Let u̅ be the image of u in (Φ L)_k. Let ^1 (Φ L)_k be as in Definition <ref>. By Remark <ref> we know that u̅ is orthogonal to ^1 (Φ L)_k. Define 𝒟 to be the set of isotropic lines in (Φ L)_T lifting ^1 (Φ L)_k. Define 𝒟_u to be the subset of 𝒟 consisting of lines which are in addition orthogonal to the image of u in (Φ L)_T. Let 𝒢= f_T: 𝒯_x_0_k 𝒟.be the bijection given in Theorem <ref>. By the same theorem it restrict to a bijection 𝒯_x_0𝒵(u)_k 𝒟_u. We identify (Φ L)_T with (Φ L)_k ⊗ _k T. Fix a k-generator v_0 of ^1 (Φ L)_k.Define a mapℱ̃: (Φ L)_k→ (Φ L)_T w↦span_Tv_0 ⊗_k 1 + w⊗_k ϵ . ℱ̃ factors through (Φ L)_k / ^1 (Φ L )_k, and its image consists of T-module direct summands of (Φ L)_T of rank one. For any λ∈ k, we have v_0 ⊗ 1 + (w+λ v_0)⊗ϵ = (1+ λϵ ) (v_0 ⊗ 1 + w⊗ϵ),and 1+λϵ∈ T^×. Hence ℱ̃ factors through (Φ L)_k / ^1 (Φ L) _k. For any w∈ (Φ L)_k, we know that ℱ̃ (w) is a free module of rank one by definition. It remains to show that ℱ̃ (w) is a direct summand of (Φ L)_T. Let A be a k-vector space complement of ^1(Φ L) _k inside (Φ L)_k. We easily check that the following T-submodule of (Φ L)_T is a T-module complement of ℱ̃(w): span_T v' ⊗ 1 + w⊗ϵ | v' ∈ A. The map ℱ̃ induces a bijection of sets: ℱ: (^1 (Φ L)_k)^⊥ / ^1 (Φ L)_k𝒟.Moreover, ℱ restricts to a bijectionu̅,^1 (Φ L)_k ^⊥ /^1 (Φ L)_ k 𝒟_u. Since v_0,v_0 = 0 ∈ k, the condition that ℱ̃(w) is isotropic is equivalent to w, v_0 = 0 ∈ k. Since v_0 is orthogonal to u̅, the condition that ℱ̃(w) is orthogonal to the image of u in (Φ L)_T is equivalent to w , u̅ = 0 ∈ k. Let 𝒢 be as in (<ref>) and let ℱ be as in Corollary <ref>. The map𝒢^-1∘ℱ: (^1 (Φ L)_k)^⊥/^1 (Φ L)_k →𝒯_x_0_k is k-linear. 
The proof is a routine check, using the functorial property stated in Theorem <ref>. We first recall the k-vector space structure on 𝒯_x_0_k, from the point of view that 𝒯_x_0_k is the preimage of x_0 under the map (T) →(k). Scalar multiplication: Given a tangent vector v ∈𝒯_x_0_k corresponding to v_T ∈(T) and given a scalar λ∈ k, the tangent vector λ v corresponds to the following element (λ v)_T of (T): the image of v_T under (T) (T). We see that (λ v)_T is indeed a preimage of x_0.Addition: Let v_1, v_2∈𝒯_x_0_k be two tangent vectors. Let T_i = k[ϵ_i]/ϵ _i ^2, i=1,2 be two copies of T. We represent v_i as an element (v_i)_T_i in (T_i) that reduces to x_0 ∈(k), for i=1,2. Let T̃ be the fiber product of T_1 and T_2 over k, in the category of k-algebras. Namely, T̃ = k[ϵ_1,ϵ_2]/(ϵ_1,ϵ_2) ^2. Let δ be the k-algebra map δ : T̃→ T,  ϵ_1↦ϵ, ϵ_2 ↦ϵ.By the fact that T̃ is the fiber product of T_1 and T_2, there is a canonical bijection(T_1) ×(T_2) (T̃).Denote by v_1 +̃ v_2 the image of ((v_1)_T_1, (v_2) _T_2 ) in (T̃ ) under the above bijection. Then the tangent vector v_1+v_2 corresponds to the following element (v_1+v_2)_T of (T): the image of v_1 +̃ v_2 under δ_*: (T̃) →(T). This last element is indeed a preimage of x_0. We now check that 𝒢^-1∘ℱ is k-linear. We first check the compatibility with scalar multiplication. For any λ∈ k and w∈(^1 (Φ L)_k)^⊥, we have ℱ̃ (w) = span_Tv_0 ⊗ 1 + w⊗ _k ϵ and ℱ̃ (λ w) = span_Tv_0 ⊗ 1 + λ w⊗ _k ϵ. Let m_λ denote the map T→ T, ϵ↦λϵ. Then we have ℱ̃ (w) ⊗ _T,m_λ T = ℱ̃ (λ w) as submodules of (Φ L)_T. By the functoriality instated in Theorem <ref>, we know that for all d∈𝒟, the element 𝒢 ^-1 (d⊗ _T,m_λT )∈(T) is equal to the image of 𝒢^-1 (d) under (T) (T). It follows that (𝒢^-1∘ℱ)(λ w) is equal to λ times the tangent vector (𝒢^-1∘ℱ)( w).We are left to check the additivity of 𝒢^-1∘ℱ. Let w_1, w_2 ∈ (^1 (Φ L)_k)^⊥. Let 𝒟_i, ℱ_i,𝒢_i be the analogues of 𝒟, ℱ, 𝒢 respectively with T replaced by T_i, for i=1,2. Also let f_T̃ be as in Theorem <ref> (with = T̃, where (T̃→ k) is equipped with the unique nilpotent divided power structure.) Let d_i : = ℱ_i (w_i),   i =1,2. Then d_i = span_T_i (v_0⊗ 1 + w_i ⊗ϵ_i). We easily see that the assertion (𝒢^-1∘ℱ) (w_1 +w_2) = (𝒢^-1∘ℱ)(w_1) + (𝒢^-1∘ℱ)(w_2) follows from the following claim:Claim. Under (<ref>), the element(𝒢_1^-1 (d_1),𝒢_2^-1 (d_2) )is sent to the elementf_T̃^-1 (span_T̃v_0⊗ 1 + w_1⊗ϵ_1 + w_2⊗ϵ_2).We now prove the claim. Let d̃ be such that the element (𝒢_1^-1 (d_1),𝒢_2^-1 (d_2) ) is sent under (<ref>) to f_T̃^-1 (d̃).Thus d̃ is an isotropic line in (Φ L) _T̃. By the functoriality stated in Theorem <ref> and the functorial definition of (<ref>), we see that d̃ is characterized by the condition that d̃⊗ _T̃ T_i = d_i,   i = 1,2, where the tensor product is with respect to the the structure map T̃→ T_i expressing T̃ as the fiber product of T_1,T_2 (i.e. reduction modulo ϵ_j for j ≠ i). Using this characterization of d̃, we see that d̃ is as predicted in the claim. The tangent space 𝒯_x_0_k is isomorphic to(^1 (Φ L)_k)^⊥/^1 (Φ L)_k.Under this isomorphism, the subspace 𝒯_x_0𝒵(u)_k of 𝒯_x_0_k is identified with u̅,^1 (Φ L)_k ^⊥ /^1 (Φ L)_ k. This follows from Corollary <ref>, Lemma <ref>, and the bijectivity of 𝒢^-1 asserted in Theorem <ref>. Let Λ⊂ V^Φ _K be a vertex lattice. Let L be a self-dual W-lattice in V_K such that Λ^∨_W ⊂ L ⊂Λ_W. Let A be the image of Λ^∨_W in L_k. Then the following statements hold. * _k Λ_W/ L =_k L /Λ_W^∨. 
Here both spaces are vector spaces over k because p Λ_W ⊂Λ_W^∨⊂ L and p L ⊂ p Λ_W ⊂Λ_W^∨. * A ⊃ A^⊥. Here A^⊥ is the orthogonal complement of A in L_k. (1) Consider the W-bilinear pairingΛ_W ×Λ_W → W (x,y) ↦ p x,y,where , is the K-bilinear form on V_K^Φ⊗ __p K = V_K. We get an induced k-quadratic space structure on Λ_W/Λ_W^∨. The image of L in Λ_W/Λ_W^∨ is equal to the orthogonal complement of itself, i.e. it is a Lagrangian subspace. Claim (1) follows. (2) By definition A^⊥ is the image in L_k of the W-submodule pΛ_W^∨∨ = p Λ_W of L.We have pΛ_W ⊂Λ_W^∨, so A^⊥ lies in the image of Λ_W^∨ in L_k, which is A. Let Λ⊂ V_K^Φ be a vertex lattice of type t (so t≥ 2 is even). For all x_0 ∈_Λ (k), we have _k 𝒯_x_0_Λ,k= t/2-1. Let L be the special lattice associated to x_0 under (<ref>), and let ^1 (Φ L)_k be as in Definition <ref>. Then Λ^∨ _W ⊂ L ∩Φ L. Denote by A the image of Λ_W^∨ in (Φ L)_k. Then A is orthogonal to ^1 (Φ L)_k by Remark <ref>. By Corollary <ref>, we have an isomorphism of k-vector spaces𝒯_x_0_Λ,k≅A, ^1 (Φ L)_k ^⊥ / ^1 (Φ L)_k . Since A is orthogonal to ^1 (Φ L)_k, we have A ⊃^1 (Φ L)_k by Lemma <ref> applied to the self-dual W-lattice Φ L. Therefore 𝒯_x_0_Λ,k≅ A^⊥ / ^1 (Φ L)_k. Since the bilinear pairing on (Φ L)_k is non-degenerate, we have _k 𝒯_x_0_Λ,k = _k (Φ L)_k - _k A - 1 = _k (Φ L/Λ_W^∨) -1. By claim (1) in Lemma <ref> (applied to Φ L), we have _k (Φ L/Λ_W^∨) = t/2. Let Λ⊂ V^Φ _K be a vertex lattice. The formal scheme _Λ×_ W k is regular.Let t be the type of Λ. Denote X: = _Λ ^red and Y: = _Λ×_ W k. Then X is a formal subscheme of Y over k. Recall from <ref> that X is a smooth k-scheme of dimension t/2-1. It follows that for all x_0 ∈ Y(k), the complete local ring of Y at x_0 is of dimension ≥ t/2-1. By Proposition <ref>, the tangent space of Y at x_0 has k-dimension equal to t/2 -1. Hence Y is regular at x_0. Let Λ⊂ V^Φ _K be a vertex lattice. Then _Λ = _Λ^red and is of characteristic p._Λ does not admit W/p^2-points (Proposition <ref>) and its special fiber is regular (Corollary <ref>). It follows from <cit.> that _Λ is equal to its special fiber. Being regular itself, _Λ is reduced. § THE INTERSECTION LENGTH FORMULA §.§ The arithmetic intersection as a fixed point schemeRecall from <ref> that we are interested in computing the intersection of ^g and δ(^♭), for g∈ J_b (_p). Assume g∈ J_b(ℚ_p) is regular semisimple. Then δ(^♭)∩^g is contained in 𝒵(𝐯(g)), where 𝐯 (g) = ( x_n, gx_n,⋯, g^n-1 x_n). By Lemma <ref>, we have δ(^♭)⊆𝒵(x_n). Hence δ(^♭)∩^g⊆𝒵(x_n)∩^g⊆𝒵(gx_n) by the definition of special cycles. Repeating this procedure we obtainδ(^♭)∩^g⊆𝒵(x_n)∩𝒵(gx_n)∩⋯∩𝒵(g^n-1x_n)=𝒵(𝐯(g)). Assume g∈ J_b(_p) is regular semi-simple and minuscule. Thenδ(^♭)∩^g ⊂ _L(g) ^∨ = _L(g) ^∨ ^red.In particular, δ(^♭)∩^g is a scheme of characteristic p. The first statement is an immediate consequence of Remark <ref>, Theorem <ref>, and Proposition <ref>. Now both δ (^♭) and ^g are closed formal subschemes of , so δ (^♭)∩^g is a closed formal subscheme of the scheme _L(g) ^∨ = _L(g) ^∨ ^red of characteristic p. Hence δ (^♭)∩^g is its self a scheme of characteristic p. §.§.§ In the rest of this section we will fix g ∈ J_b(_p) regular semisimple and minuscule, and assume ^g ≠∅. Take Λ:=L(g)^∨. Then Λ is a vertex lattice stable under g, cf. Remark <ref>. We are interested in computing the intersection length of δ(^♭) and ^g around a k-point of intersection. Recall the isomorphism (<ref>) between p^\_Λ ^red (which we now know is just p^\_Λ) and S_Λ. 
Recall from <ref> that S_Λ is a projective smooth variety over k of dimension t_Λ /2 -1. We write d = t_Λ/2. Let Ω_0 = Λ / Λ^∨ and Ω = Ω_0 ⊗ __pk = Λ_W / Λ_W^∨. Let , be the k-bilinear form on Ω (cf. <ref>). Let 𝔾 = (Ω) , 𝔾_0 = (Ω_0). Let g̅ be the induced action of g on Ω. Then g̅∈𝔾_0(_p) ⊂𝔾(k). Recall thatS_Λ (k) = ℒ⊂Ω,  (ℒ + Φℒ) =d+1= (ℒ_d-1 ,ℒ_d) | ℒ_d ⊂Ωℒ_d-1 = d-1, ℒ_d-1⊂ℒ_d ∩Φℒ_d . There is a natural action of g̅ on S_Λ via its action on Ω. On R-points g̅ sends (ℒ_d-1, _d) to (g̅_d-1, g̅_d). The latter is indeed a point of S_Λ because g̅Φ = Φg̅ by the fact that g̅∈𝔾_0 (_p). The following proposition allows us to reduce the study of intersection multiplicities to the study of the non-reduced structure of S_Λ ^g̅. p^ℤ\(δ (^♭)∩^g)≅ S_Λ^g̅.In view of Theorem <ref>, Corollary <ref> and the observation that the isomorphism (<ref>) induces an isomorphism p^\ (_Λ^red )^gS_Λ ^g̅, it suffices to show (p^\_Λ^red ) ∩( p^\δ(^♭)) = (p^\_Λ^red ).Since both p^\_Λ^red and p^\δ(^♭) are closed formal subschemes of p^\ and since p^\_Λ^red is a reduced scheme, it suffices to check thatp^\_Λ ^red (k) ⊂ p^\δ (^♭) (k).Now the left hand side consists of special lattices L containing Λ^∨, and the right hand side consists of special lattices L containing x_n (cf. (<ref>) and Lemma <ref>). We finish the proof by noting that by definition x_n ∈Λ^∨ = L(g). Proposition <ref> reduces the intersection problem to the study of S_Λ^g̅.§.§ Study of S_Λ^g̅ We continue to use the notation in <ref>.We adopt the following notation from <cit.>. Let (d-1) (resp. (d)) be the moduli space of totally isotropic subspaces of Ω of dimension d-1 (resp. d). For a finite dimensional vector space W over k and an integer l with 0≤ l ≤ W, we write (W,l) for the Grassmannian classifying l-dimensional subspaces of W. Thus for j ∈d-1, d and any k-algebra R, we have (j) (R) = . Also(W,l) (R) = . Let (d-1, d) ⊂(d-1) ×(d) be the subscheme defined by the incidence relation. Namely, it is the locus where the (d-1)-dimensional isotropic subspace is contained in the d-dimensional isotropic subspace. Recall from loc. cit. that (d-1,d) has two connected components (d-1, d) ^±, and the projection to the first factor (d-1, d) →(d-1) restricts to isomorphisms(d-1, d)^+ (d-1) (d-1, d)^- (d-1).Also recall from loc. cit. that the natural morphism S_Λ↪(d-1, d) is a closed embedding, and S_Λ^± : = S_Λ∩(d-1, d) ^± give the two connected components of S_Λ. In the following, we denote by S_Λ^0 Following the notation of <cit.>, we have parabolic subgroups P_0 , P^+, P^- of 𝔾. Here P_0 is the stabilizer of the standard (d-1)-dimensional isotropic subspace e_1,⋯, e_d-1, and P^+ (resp. P^-) is the stabilizers of the standard d-dimensional isotropic subspace e_1,⋯, e_d (resp. e_1,⋯, e_d-1, f_d). We have P_0= P^+ ∩ P^-. The Frobenius Φ interchanges P^± and stabilizes P_0. Let (d-1) (resp. (d)) be the moduli space of isotropic subspaces of Ω of dimension d-1 (resp. d). Using e_1,⋯, e_d-1 one identifies (d-1) with 𝔾/P_0. Using e_1,⋯, e_d (resp. e_1,⋯, e_d-1, f_d) one identifies (d) with 𝔾/P^+ (resp. 𝔾/P^-).Consideri^+ =(π ^+ ,π^-) : 𝔾/ P_0 →𝔾/P^+ ×_k 𝔾/P^- g↦ (g,g)which is a closed embedding. We let Γ^+_Φ⊂𝔾/P^+ ×𝔾/P^- be the graph of Φ: 𝔾/ P^+ →𝔾/ P^-. Then by loc. cit. 
we haveS_Λ ^+(i^+) ^-1Γ_Φ ^+ ⊂𝔾/P_0S_Λ∋ℒ = h e_1,⋯, e_d hUnder our identifications 𝔾/P_0 = (d-1) and 𝔾/P^± =(d), the morphism i^+: /P_0 →𝔾/P^+ ×_k 𝔾/P^- is equivalent to the morphism (d-1) → (d) ×_k (d) which sends ℒ_d-1∈ (d-1) to the pair (ℒ_d^+ ,ℒ_d^-), such that the two Lagrangians ℒ_d^+, ℒ_d^- are the (unique two) Lagrangians containing ℒ_d, and such that there exists g∈𝔾 such that g e_1,⋯, e_d = ℒ_d^+, ge_1,⋯, e_d-1 , f_d = ℒ_d^-. (A priori exactly one of the two possible orderings (ℒ_d^+ ,ℒ_d^-) and (ℒ_d^- ,ℒ_d^+) satisfies the last condition.) See <cit.> for more details.Suppose x_0 ∈ S_Λ ^+ (k) corresponds to (ℒ_d-1, ℒ_d). We have𝒯 _x_0 S_Λ ^+ = 𝒯_e( Stab _Φℒ_d𝔾 / Stab _ℒ_d-1𝔾). Suppose in addition that x_0 is fixed by g̅. Then g̅∈Stab _ℒ_d-1𝔾. We have𝒯_x_0 ( S_Λ ^+)^g̅ = 𝒯_e (Stab _Φℒ_d𝔾 / Stab _ℒ_d-1𝔾)^g̅,where the action of g̅ on Stab _Φℒ_d𝔾 / Stab _ℒ_d-1𝔾 is through left multiplication or conjugation, which are the same. Through (<ref>) we regard S_Λ ^+ as a sub-variety of 𝔾/ P_0, and write points of it as [h], h∈𝔾. Assume x_0 = [h_0] ∈ S_Λ^+ (k), where h_0 ∈𝔾(k). We identify the tangent space of 𝔾/P_0 at [h_0] with 𝔾 /P_0 using left multiplication by h_0. We compute 𝒯_x_0 S_Λ^+ as a subspace of 𝔾/ P_0. Because the tangent map of Φ: 𝔾/ P^+ →𝔾/ P^- is zero, by (<ref>) we have𝒯_x_0 S_Λ ^+ = X∈𝔾/ P_0|  Im (X) = 0 ∈𝔾/P^- =P^- /P_0. Now suppose that x_0 is fixed by g̅. Then we have h_0^-1g̅ h_0 =: p_0 ∈ P_0(k). Under (<ref>), the tangent action of g̅ on the 𝒯_x_0 S_Λ ^+ corresponds to the action of (p_0) on P^- /P_0. Consequently the tangent space of (S_Λ ^+)^g̅ at x_0 is 𝒯_x_0 (S^+_Λ) ^g̅ = (P^-/P_0)^ p_0. Now by the identifications we have made, the formulas (<ref>) and (<ref>) are respectively equivalent to (<ref>) and (<ref>). We now study the right hand sides of (<ref>) and (<ref>). Let x_0 = (ℒ_d-1 ,ℒ_d) ∈ S_Λ ^+ (k). The Grassmannian (Φℒ_d , d-1), with the natural action by Stab _Φℒ_d𝔾 and with the distinguished point ℒ_d-1∈ (Φℒ_d , d-1)(k), realizes the quotient Stab _Φℒ_d𝔾 / Stab _ℒ_d-1𝔾. Since Φℒ_d is a Lagrangian subspace of Ω, we know that the algebraic group Stab _Φℒ_d𝔾 acts on (Φℒ_d , d-1) through its Levi quotient group (Φℒ_d). Moreover the isotropic subgroup at ℒ_d-1∈ (Φℒ_d , d-1) is equal to Stab _ℒ_d-1𝔾. Now (Φℒ_d) is a quotient of Stab _Φℒ_d𝔾, and(Φℒ_d , d-1) is a quotient of (Φℒ_d). The lemmas follows, for instance by <cit.>. Let x_0 = (ℒ_d-1 ,ℒ_d) ∈ S_Λ ^+ (k). Then we have an identification 𝒯_x_0S_Λ ^+ ≅ (ℒ_d-1, Φℒ_d / ℒ_d-1).Suppose in addition that x_0∈ S_Λ ^+ (k)^g̅. Then g̅∈𝔾 stabilizes ℒ_d, Φℒ_d, and ℒ_d-1. We have an identification 𝒯_x_0 ( S_Λ ^+ )^g̅≅ (ℒ_d-1, Φℒ_d / ℒ_d-1)^g̅. This follows from Proposition <ref> and Lemma <ref>.Let x_0 = (ℒ_d-1 ,ℒ_d) ∈ S_Λ ^+ (k)^g̅. Let λ, c be as in Definition <ref>. Then the tangent space 𝒯_x_0 (S_Λ ^+) ^g̅ has dimension at most one over k. It has dimension one if and only if c > 1.By Corollary <ref>, we have 𝒯_x_0 ( S_Λ ^+ )^g̅≅ (ℒ_d-1, Φℒ_d / ℒ_d-1)^g̅≅ (ℒ_d-1 ^*) ^λ h^t, whereh: = (g̅ |_ℒ_d-1 )^-1∈(ℒ_d-1).Namely, 𝒯_x_0 ( S_Λ ^+ )^g̅ is the eigenspace of eigenvalue λ^-1 of h^t acting on ℒ_d-1 ^*. Since g̅|_Φℒ_d has the property that to each eigenvalue there is at most one Jordan block, the same holds for g̅|_ℒ_d-1, h, and h^t. Moreover λ^-1 is an eigenvalue of h^t if and only if λ is an eigenvalue of g̅|_ℒ_d-1, if and only if c>1. It follows that the eigenspace of eigenvalue λ^-1 of h^t has dimension at most one, and it has dimension one if and only if c>1.Next we study the local lengths of (S_Λ ^+)^g̅. 
If A is a finite dimensional k-vector space, we write A for the affine space over k defined by A. Thus for a k-algebra R we have A (R) = A⊗_k R. Let V_1⊂ V_2 ⊂ V_3 be k-vector spaces. Let A be a subspace of (V_1, V_3) and B be a subspace of (V_2, V_3). Write (A× B)^comp for the subspace of A× B consisting of compatible elements, i.e. elements (ϕ,ψ) ∈ A× B such that ψ|_V_1 = ϕ. Write A×^compB for (A× B)^comp. Let ℒ_d,ℳ_d be Lagrangian subspaces of Ω such that Ω = ℒ_d ⊕ℳ_d. we write _ (ℒ_d, ℳ_d) for the space of anti-symmetric k-linear maps ℒ_d →ℳ_d. Here we say ϕ: ℒ_d →ℳ_d is anti-symmetric if the bilinear form _d ×_d → k, (x,y) ↦x, ϕ y is anti-symmetric. §.§.§ Recall that in general, if A is a finite dimensional vector space over k and B is a subspace, then we can construct a Zariski open of the Grassmannian (A, B) as follows. Choose a subspace C of A such that A = B ⊕ C. Then there is an open embedding ι_B,C: _k (B, C)→ (A,B) which we now describe. For any k-algebra R and any R-point ϕ of _k(B,C), we view ϕ as an element of _k(B, C) ⊗ R =_R(B⊗ R, C⊗ R). Then ι_B,C maps ϕ to the R-point of (A,B) corresponding the following R-submodule of A: x+ ϕ(x)| x ∈ B⊗ R.For details see for instance <cit.>. In the following we will think of _k(B,C) as a Zariski open of (A, B), omitting ι_B,C from the notation. Let ℒ_d,ℳ_d be complementary Lagrangian subspaces of Ω over k. Then(d) × _(Ω, d) (_d, ℳ_d) = _ ( ℒ_d ,ℳ_d) .In particular, the k-point _d in (d) has an open neighborhood of the form _ ( ℒ_d ,ℳ_d). Let R be a k-algebra and ϕ an R-point of (_d, ℳ_d). Then the submodule (<ref>) (for B = _d) is Lagrangian if and only if for all x∈ B⊗ R,x+ ϕ (x) , x + ϕ(x) =0.But we have x , x = ϕ (x), ϕ(x) = 0 since _d⊗ R and ℳ_d ⊗ R are both Lagrangian. Hence (<ref>) is Lagrangian if and only if x, ϕ(x) =0 for all x∈ℳ_d ⊗ R. §.§.§ It follows from the assumptions we made on g̅∈𝔾(k) in <ref> that its characteristic polynomial on Ω is equal to its minimal polynomial on Ω (cf. Remark <ref>). In general this property is equivalent to the property that in the Jordan normal form all the Jordan blocks have distinct eigenvalues. From now on we let x_0= (ℒ_d-1 ,ℒ_d) ∈ S_Λ (k) be an element fixed by g̅. Then Φℒ_d ⊂Ω is also stable under g̅. If we identify Ω/ Φℒ_d with (Φℒ_d)^* (the k-vector space dual) using the bilinear form on Ω, the action of g̅ on Ω/ Φℒ_d is equal to the inverse transpose of g̅|_Φℒ_d. It follows that the minimal polynomial (resp. characteristic polynomial) of g̅ on Ω is equal to the minimal polynomial (resp. characteristic polynomial) of g̅|_Φℒ_d times its reciprocal. Hence g̅|_Φℒ_d has equal minimal and characteristic polynomial, too. Let λ be the (nonzero) eigenvalue of g̅ on the one-dimensional Φℒ_d/ℒ_d-1. Let c be the size of the unique Jordan block of eigenvalue λ of g̅|_Φℒ_d. §.§.§ Let x_0 = (ℒ_d-1 ,ℒ_d ) ∈ S_Λ (k)^g̅ as in <ref>. Define Y: = (Φ_d, d-1 ) × _k(d). Let ℐ⊂ Y be the sub-functor defined by the incidence relation, i.e. for a k-algebra Rℐ (R) =(_d-1 ' , _d') ∈ (Φ_d, d-1)(R) × (d) (R)  | _d-1' ⊂'_d. The pair (_d-1 ,_d ) defines a k-point in ℐ, which we again denote by x_0. It is well known that the incidence sub-functor of (Φ_d, d-1) ×(Ω, d) is represented by a closed subscheme, and it follows that ℐ is a closed subscheme of Y.Since x_ 0 = (_d-1, _d) ∈ S_Λ (k) is fixed by g̅, we have a natural action of g̅ on Y, stabilizing ℐ and fixing x_0 ∈ℐ. Letℛ̃: = _ℐ ,x_0,ℛ: = _ℐ^g̅, x_0,𝒮̃: = _S_Λ , x_0,𝒮: = _S_Λ^g̅ , x_0be the local rings at x_0 of ℐ, ℐ^g̅, S_Λ, S_Λ^g̅ respectively. 
Letℛ̃_p : = ℛ̃ / 𝔪_ℛ̃^p,ℛ_p : = ℛ/𝔪_ℛ^p,𝒮̃_p : = 𝒮̃/𝔪_𝒮̃^p,𝒮_p: = 𝒮/𝔪_𝒮^pbe the above four local rings modulo the p-th powers of their respective maximal ideals.The following lemma expresses the observation that ℐ^g̅ may serve as a model for S_Λ^g̅ locally around x_0. * There is a k-algebra isomorphism ℛ̃ _p ≅𝒮̃ _p, equivariant for the g̅-action on both sides. * There is a k-algebra isomorphism ℛ_p ≅𝒮_p. We first show (1). Let (_d-1' , _d') be the tautological pair over 𝒮̃ _p for the moduli problem S_Λ, and let (_d-1” , _d”) be the tautological pair over ℛ̃ _p for the moduli problem ℐ. Note thatΦ_d' = (Φ_d) ⊗𝒮̃ _pas submodules of Ω⊗_k 𝒮̃ _p because Φ: 𝒮̃ _p →𝒮̃ _p factors through the reduction map 𝒮̃ _p → k. It follows that (_d-1',_d') defines a point in ℐ (𝒮̃_p) lifting x_0 ∈ℐ(k). Similarly,Φ_d” = (Φ_d) ⊗ℛ̃ _pas submodules of Ω⊗_k ℛ̃ _p, and hence (_d-1”,_d”) defines a point in S_Λ (ℛ̃_p) lifting x_0 ∈𝒮_Λ(k). The point in ℐ(𝒮̃_p) and the point in S_Λ (ℛ̃ _p) constructed above give rise to inverse k-algebra isomorphisms between ℛ̃ _p and 𝒮̃ _p, which are obviously g̅-equivariant. (2) follows from (1), since ℛ_p (resp. 𝒮_p) is the quotient ring of ℛ̃_p (resp. 𝒮̃ _p) modulo the ideal generated by elements of the form r - g̅· r with r∈ℛ̃_p (resp. r∈𝒮̃_p). The above proof also shows that _S_Λ ^+, x_0/𝔪_x_0^l is isomorphic to _ℐ , x_0 /𝔪 _x_0 ^l for 1≤ l ≤ p. In particular, S_Λ ^+ and ℐ have isomorphic tangent spaces at x_0. By Lemma <ref>, we see that the tangent space of ℐ at x_0 is isomorphic to ( (_d-1, span_k (w_d)) ×_ (_d,  span_k (w_1,⋯, w_d)))^comp. The last vector space is isomorphic to (_d-1, span_k (w_d)) via projection to the first factor. From this we recover Corollary <ref>.§.§ Study of ℐ^g̅Next we study ℐ^g̅ by choosing certain explicit coordinates on ℐ. Choose a k-basis v_1,⋯, v_d, w_1,⋯ , w_d of Ω, such that * _d-1 is spanned by v_1,⋯, v_d-1. * _d is spanned by v_1,⋯, v_d. * Φ_d is spanned by v_1,⋯, v_d-1 , w_d. * v_i, v_j = w_i , w_j = 0, v_i,w_j = δ_ij.We will denotev̂_i : = v_i,  1≤ i≤ d-1 w_d,   i=d Also denoteℳ_d : = span_k (w_1,⋯, w_d).For 1≤ i ≤ d-1, define an element ϕ_i ∈ (_d-1 ,  span_k (w_d) ) byϕ_i (v_j) = δ_ijw_d. Then ϕ_1,⋯, ϕ_d-1 is a basis of (_d-1 ,  span_k (w_d) ). By <ref> and Lemma <ref>, there is a Zariski open neighborhood of x_0 in Y, of the form𝒰: =(_d-1 , span_k (w_d) )×_ (_d,ℳ_d) . * Let R be a k-algebra. Let y ∈𝒰(R), corresponding to(ϕ,ψ) ∈ (_d-1 , span_k(w_d))⊗ R ⊕_ (_d, ℳ_d) ⊗ R.We view ϕ∈_R(_d-1⊗ R, span_R (w_d)) and ψ∈ _R (_d ⊗ R, ℳ_d ⊗ R). Then y is in ℐ if and only if ψ|__d-1⊗ R = ϕ. * The projection to the first factor 𝒰→ (_d-1 ,span_k (w_d)) restricts to an isomorphism 𝒰∩ℐ (_d-1 ,span_k (w_d)). (1) We know that y is in ℐ if and only if for all v∈_d-1⊗ R, there exists v'∈_d⊗ R, such thatv+ ϕ(v)= v'+ ψ(v')as elements of Ω⊗ R. Decompose v' = v'_1 + v'_2 with v'_1 ∈_d-1⊗ R and v'_2 ∈span_R (v_d). Then the above equation reads v-v'_1= v'_2 + (ψ(v') - ϕ(v)).Since v-v'_1 ∈_d-1⊗ R,  v_2' ∈span_R (v_d),  ψ(v')-ϕ(v) ∈ℳ_d ⊗ R, the above equation holds if and only if v= v_1' ,   v_2' = 0,  ϕ(v) = ψ(v). Hence y∈ℐ if and only if for all v ∈_d-1⊗ R we have ψ(v) = ϕ(v). This proves (1). (2) By (1), we know that 𝒰∩ℐ is the affine subspace of 𝒰 associated to the linear subspace of(_d-1 , span_k (w_d)) ×_ (_d, ℳ_d)consisting of pairs (ϕ,ψ) such that ψ|__d-1 = ϕ. Call this subspace A. We only need to show that projection to the first factor induces an isomorphism A(_d-1 ,span_k(w_d)). Note that if ψ∈_ (_d, ℳ_d), then ψ is determined by ψ|__d-1. 
This is because for each 1≤ i ≤ d, we haveψ v_d, v_i =- v_d, ψ v_i ,  i ≤ d-10,   i =dwhich means that ψ(v_d) is determined by ψ|__d-1. Conversely, given ϕ∈ (_d-1 , span_k (w_d)) we can construct ψ∈_ (_d, ℳ_d) such that ψ|__d-1 = ϕ as follows. For 1≤ j ≤ d-1, define ψ(v_j) to be ϕ(v_j). Define ψ(v_d) to be the unique element of ℳ_d satisfying (<ref>). In this way we have defined a linear map ψ: _d→ℳ_d such that ψ|__d-1 = ϕ. We now check that ψ is anti-symmetric. We need to check that for all 1≤ i ≤ j ≤ d, we have ψ v_j , v_i = - ψ v_i, v _j. If j = d, this is true by (<ref>). Suppose j<d. Then ψ v_j , v_i = ψ v_i, v_j = 0 because ψ v_j , ψ v_i ∈span_k (w_d) and w_d, _d-1 = 0. Thus ψ is indeed antisymmetric. It follows that A(_d-1, span_k (w_d)).§.§.§Now assume that x_0 ∈ S_Λ^g̅(k). Then g̅ stabilizes _d, Φ_d, _d-1. The natural action of g̅ on Y does not stabilize 𝒰 in general, but we have a natural identificationg̅·𝒰≅ (_d-1, span_k (g̅ w_d))×_ (_d, g̅ℳ_d)⊂ Y. Let R be a k-algebra. If y ∈𝒰(R) is given by (ϕ,ψ), where ϕ∈ (_d-1, span_k ( w_d))⊗ R and ψ∈_ (_d, ℳ_d ) ⊗ R, then g̅ (y ) ∈g̅·𝒰 corresponds under (<ref>) to(g̅∘ϕ∘ (g̅|__d-1)^-1, g̅∘ψ∘ (g̅|__d)^-1 ).Denote the last pair by (^g̅ϕ, ^g̅ψ). Then y = g̅ y if and only if the following two conditions hold.∀ u ∈_d-1, ∃ u' ∈_d-1⊗ R,  u + ϕ (u)= u' + ^g̅ϕ (u')∀ v ∈_d, ∃ v' ∈_d⊗ R,   v+ ψ (v) = v'+ ^g̅ψ(v')From now on we assume x_0 = (_d-1, _d)∈ S_Λ ^g̅ (k). Write the matrix over k of g̅ acting on Φ_d under the basis v̂_1,⋯, v̂_d (cf. <ref>) as [ H_1 H_2; H_3 H_4 ],where H_1 is of size (d-1) × (d-1), H_2 is of size (d-1) × 1, H_3 is of size 1× (d-1), and H_4∈ k.Since g̅ stabilizes _d-1, we have H_3=0 Let R be a k-algebra and let y =(ϕ,ψ) ∈𝒰(R). Represent ϕ as an R-linear combination ϕ = ∑_i=1 ^d-1 r_i ϕ_i of the ϕ_i's (cf. (<ref>)), where r_i ∈ R. Write r⃗ for the row vector (r_1,⋯, r_d-1). * View ϕ as an element of (Φ_d, d-1)(R). It is fixed by g̅|_Φ_d if and only if r⃗ ( H_1 + H_2 r⃗) = H_4 r⃗. r_i = h_dd r_i ∑_l=1^d-1 h^ld r_l + h_dd∑_ l =1^d-1 h^li r_l , 1≤ i ≤ d-1. * Assume that y ∈ℐ(R) and that ϕ∈ (Φ_d, d-1) is fixed by g̅|_Φ_d. Then ψ, viewed as an element of (d)(R), is fixed by g̅. In other words, y is fixed by g̅ in this case. (1) First we identify (Φ_d)⊗ R with R^d-1 using the basis v̂_1,⋯, v̂_d. As a point of (Φ_d, d-1), ϕ corresponds to the following submodule of (Φ_d)⊗ R: the image, i.e. column space, of the R-matrix [ I_d-1 0;r⃗ 0 ].Hence ϕ∈ (Φ_d, d-1) is fixed by g̅|_Φ_d if and only if the following two R-matrices have the same column space:A_1 : = [ I_d-1 0;r⃗ 0 ] A_2:= [ H_1 H_2; H_3 H_4 ][ I_d-1 0;r⃗ 0 ].Note that since [ H_1 H_2; H_3 H_4 ] is invertible, A_1 and A_2 have the same column space if and only if the column space of A_2 is contained in that of A_1. Since H_3 =0 (cf. Remark <ref>), we haveA_2=[ H_1 + H_2 r⃗0; H_4 r⃗0 ].But we easily see that the column space of [ H_1 + H_2 r⃗0; H_4 r⃗0 ] is contained in that of [ I_d-1 0;r⃗ 0 ] if and only if (<ref>) holds. (1) Condition (<ref>) is equivalent to the condition that for each 1≤ i ≤ d-1, there exist a_1,⋯, a_d-1∈ R such thatv_i + ϕ(v_i) = ∑_t=1 ^d-1 a_t ( v_t + ^g̅ϕ ( v_t)). We have ϕ(v_i) = r_iw_d.We compute, for 1≤ t ≤ d-1, ^g̅ϕ ( v_t) = g̅ϕg̅^-1 v_t = ∑_ l = 1^d-1g̅ϕ h^lt v_l = ∑ _l=1^d-1 h^ltg̅ r_l w_d = ∑_l=1^d-1 r_l h^lt∑ _m=1^d h_mdv̂_m .Hence the RHS of (<ref>) is equal to [ ∑_t=1 ^d-1 a_t ∑_ l =1^d-1 r_l h^lt h_dd] w_d + ∑_j=1^d-1[a_j +∑_t=1 ^d-1 a_t ∑_ l =1^d-1 r_l h^lt h_jd] v_j . 
Thus (<ref>) is equivalent to δ_ij = a_j +∑_t=1 ^d-1 a_t ∑_ l =1^d-1 r_l h^lt h_jd ,   1≤ j ≤ d-1 r_i = ∑_t=1 ^d-1 a_t ∑_ l =1^d-1 r_l h^lt h_dd.Note that h_dd∈ k^×, so we may substitute h_dd^-1 times the second equation into the first, and geta_j = δ_ij - h_dd^-1 r_i h_jd.Substituting these values of a_j back to the second equation we get the condition r_i = ∑_t=1 ^d-1 (h_ddδ_it - r_i h_td) ∑_ l =1^d-1 r_l h^lt, 1≤ i ≤ d-1, Conversely, if (<ref>) holds, the values (<ref>) of a_j satisfy (<ref>). Now, using ∑_t=1 ^d-1 h_td h^lt = δ _ld - h_dd h^ld =- h_dd h^ld for 1≤ l ≤ d-1 to simplify the RHS of (<ref>), we see that (<ref>) is equivalent to (<ref>). (2)Let (d-1,d) be the incidence subscheme of (d-1) ×(d). Consider the natural morphism f: ℐ→(d-1,d), (_d-1', _d') ↦ (_d-1', _d'). Note that 𝒰∩ℐ is connected because it is a linear subspaces of the affine spaces 𝒰 (cf. Lemma <ref>). Thus (g̅·𝒰) ∩ℐ = g̅ (𝒰∩ℐ) is also connected. Since 𝒰∩ℐ and (g̅·𝒰) ∩ℐ share a common k-point, namely x_0, we see that that f(𝒰∩ℐ) and f((g̅·𝒰)∩ℐ ) are in one connected component of (d-1, d). We have y ∈𝒰∩ℐ and g̅ y ∈ (g̅·𝒰)∩ℐ. In particular f(y) and f(g̅ y) are R-points of the aforementioned connected component of (d-1, d). Recall from <cit.> that (d-1, d) has two connected components, and each is isomorphic to (d-1) via the projection to the first factor. Our assumptions imply that f(y), f(g̅ y) have the same image in (d-1). It follows that f(y) = f(g̅ y). But by definition f is injective on R-points, so y = g̅ y.We first claim that under the assumptions (<ref>) is automatic for v∈_d-1. Indeed, for v ∈_d-1, by (<ref>) we can find v' ∈_d-1⊗ R such that v+ ϕ (v) = v' + ^g̅ϕ (v'). By Lemma <ref>, we have ψ|__d-1⊗ R = ϕ, and thereforev + ψ (v) = v+ ϕ(v) = v' + g̅ϕg̅^-1 v' g̅^-1 v' ∈_d-1⊗ R v' + g̅ψg̅^-1 v' = v' + ^g̅ψ(v'),thus (<ref>). It remains to study the condition <ref> for v = v_d.By Lemma <ref>, we have ψ v_j = r_j w_d,   1≤ j ≤ d-1ψ v_d = -∑_ j=1 ^d-1 r_j w_j.Now condition (<ref>) for v = v_d is equivalent to the existence of a_1,⋯, a_d ∈ R such that v_d - ∑_j=1 ^d-1 r_j w_j = ∑_k=1 ^d a_k ( v_k + ^g̅ψ (v_k)).Denote the left (resp. right) hand side of (<ref>) by ℒ (resp. 
ℛ).For 1≤ j ≤ d-1, we compute ℛ, v_j = ∑_k=1^d a_k g̅ψg̅^-1 v_k , v_j=∑_k=1^d a_k ψg̅^-1 v_k , g̅^-1 v_j = ∑_k=1^d a_k ψ∑_l=1^d g^lk v_l, ∑_m=1^d-1 g^mj v_m =∑_k=1^d a_kg^dkψ v_d , ∑_m=1^d-1 g^mj v_m = -∑_k=1^d a_kg^dk∑ _l=1^d-1 r_l w_l , ∑_m=1^d-1 g^mj v_m =-∑_k=1^d a_k g^dk∑ _l=1^d-1 r_l g^lj.Alsoℒ, v_j = -r_j.Using (<ref>), we have ℒ - ℛ, v_j = [∑_k=1^d a_k g^dk∑ _l=1^d-1 r_l g^lj]- r_j = [∑_k=1^d a_k g^dkh_dd^-1 (r_j + r_j r_d) ]- r_j.We compute ℛ, v_d =∑_k=1^d a_k ψ∑_l=1^d g^lk v_l, ∑_m=1^d g^md v_m = [∑_k=1^d a_k ψ∑_l=1^d g^lk v_l, ∑_m=1^d-1 g^md v_m] + [∑_k=1^d a_k ψ∑_l=1^d g^lk v_l,g^dd v_d] =[-∑_k=1^d a_k g^dk∑ _l=1^d-1 r_l g^ld] + [∑_k=1^d a_k ∑_l=1^d-1 g^lkr_l w_d,g^dd v_d] = [-∑_k=1^d a_k g^dk∑ _l=1^d-1 r_l g^ld] + [∑_k=1^d a_k g^dd∑_l=1^d-1 r_l g^lk] .Also ℒ, v_d = 0Using (<ref>), we have ℛ -ℒ, v_d = ℛ, v_d =[-∑_k=1^d-1 a_k g^dk∑ _l=1^d-1 r_l g^ld] - [ a_d g^dd∑ _l=1^d-1 r_l g^ld]+ [∑_k=1^d-1 a_k g^dd∑_l=1^d-1 r_l g^lk]+ [a_d g^dd∑_l=1^d-1 r_l g^ld]= [-∑_k=1^d-1 a_k g^dk∑ _l=1^d-1 r_l g^ld]+ [∑_k=1^d-1 a_k g^dd∑_l=1^d-1 r_l g^lk] (<ref>),   h_dd = g^dd[-∑_k=1^d-1 a_k g^dk∑ _l=1^d-1 r_l g^ld]+ [∑_k=1^d-1 a_k (r_k + r_k r_d) ] We compute, for 1≤ k ≤ d-1, ^g̅ψ (v_k)= ^g̅ϕ ( v_k) (<ref>)∑_l=1^d-1 r_l g^lk∑ _m=1^d h_mdv̂_m , and^g̅ψ (v_d) = g̅ψg̅^-1 v_d = ∑_l=1^d g^ldg̅ψ v_l = ∑_l=1^d-1r_l g^ld∑ _m=1^d h_mdv̂_m- g^dd∑_j=1^d-1 r_j (gw_j) .Hence the RHS of (<ref>) is equal to ∑_k=1 ^d a_k v_k + ∑ _k=1^d a_k∑_l=1^d-1r_l g^ld∑ _m=1^d h_mdv̂_m - g^dd∑_j=1^d-1 r_j (gw_j)= ∑_j=1^d-1[a_j + ∑ _k=1^d a_k∑_l=1^d-1r_l g^lk h_jd] v_j + [a_d v_d]+ [ ∑ _k=1^d a_k∑_l=1^d-1r_l g^lkh_dd] w_d - g^dd∑_j=1^d-1 r_j (gw_j).Therefore RHS of (<ref>) pairs with v_j (1≤ j ≤ d-1) to give - g^dd∑_j=1^d-1 r_j gw_j, v_j =- g^dd∑_j=1^d-1 r_j w_j, ∑_ l=1^d-1 g^lj v_l= - g^dd∑_j=1^d-1 r_j g^jj.On the other hand, the LHS of (<ref>) pairs with v_j (1≤ j ≤ d-1) to give r_j. Therefore (<ref>) implies that Assume x_0 ∈ S_Λ^g̅(k). Then the local ring ℛ = _ℐ^g̅, x_0 of ℐ ^g̅ at x_0, is isomorphic to the local ring at the origin of thesubscheme of ^d-1_k defined by the equations (<ref>), where _k^d-1 has coordinates r_1,⋯ , r_d-1. Moreover, explicitly we have ℛ≅ k[X]/ X^c. The first claim follows from Lemma <ref> and Proposition <ref>. To compute ℛ explicitly, we may and shall assume that the bases chosen in <ref> are such that the matrix H_1 is already in its (upper-triangular) Jordan normal form. Recall from Definition <ref> that all the Jordan blocks have distinct eigenvalues. Let J_d_1 (λ_1), ⋯, J_d_s-1 (λ_s-1) be the Jordan blocks that have eigenvalues different from λ. Let λ_s = λ and let J_d_s (λ_s) be the Jordan block of eigenvalue λ_s that appears in H_1, where we allow d_s =0. Then d_s =c-1. Moreover, we assume that J_d_1 (λ_1),⋯, J_d_s (λ_s) appear in the indicated order. Note that H_4 =λ. Write H_1 = (h_ij)_1≤ i,j ≤ d-1. The equations (<ref>) become r_i-1 h_i-1, i+ ( h_i,i- λ +r⃗ H_2) r_i = 0 ,  2≤ i ≤ d-1 ( h_1,1- λ +r⃗ H_2) r_1 = 0 Note that when h_i,i is not in the Jordan block J_d_s (λ_s), we have h_i,i -λ∈ k^×, so the element h_i,i - λ + r⃗ H_2 is a unit in the local ring _^d-1, 0. Hence for i ≤ d_1 + d_2 + ⋯ + d_s-1 = d-c, each r_i is solved to be a multiple of r_i-1 and this multiple eventually becomes zero when this procedure is iterated. In other words, the ideal in _^d-1, 0 defining ℛ is generated byr_1, r_2,⋯, r_d-c,(r⃗ H_2)r_d-c+1,(r⃗ H_2) r_i + r_i-1 (d-c+1 < i ≤ d-1). When c=1, we have ℛ≅ k as expected. Assume now c ≥ 2. Let h_1,⋯, h_c-1 be the last c-1 entries of the (d-1)× 1-matrix H_2. 
Make the change of variables X_i = r_d-c +i,   1≤ i ≤ c-1 ,A= r⃗ H_2. Then we have ℛ≅(k [X_1,⋯, X_c-1, A] / (A-∑_ i =1^c-1 h_i X_i,   AX_1, X_1 + AX_2,  X_2 +AX_3,⋯, X_c-2 +AX_c-1))_(X_1,⋯, X_c-1) By eliminating the variables X_1,…,X_c-2, we obtain thatℛ≅(k[X_c-1, A]/ (X_c-1 A^c-1,  A - X_c-1∑_i=0^c-2 h_c-1-i (-A)^i )) _(X_c-1, A). Note that if h_c-1 = 0, then the last two rows of the matrixλ I_d - [ H_1 H_2; 0 H_4 ]are both zero. This contradicts with the fact that the matrix [ H_1 H_2; 0 H_4 ], which represents g̅ on Φ_d, has in its Jordan normal form a unique Jordan block of eigenvalue λ (cf. <ref>). Hence h_c-1≠0, and ∑_i=0^c-2 h_c-1-i (-A)^i is a unit in k[X_c-1, A] _(X_c-1, A). It follows that ℛ≅(k[X]/ (X^c)) _(X) = k[X]/X^c,as desired. §.§ The intersection length formula We are now ready to determine the structure of the complete local ring of S_Λ^g̅ at a k-point of it, when p is large enough. It is a consequence of Lemma <ref>, Proposition <ref>, and some commutative algebra. Let x_0 ∈ S_Λ ^g̅(k). Let λ and c be as in Definition <ref>. Assume p>c. Then the complete local ring of S_Λ^g̅ at x_0 is isomorphic to k[X]/ X^c.Since S_Λ is smooth of dimension d-1 (cf. <ref>), the complete local ring of S_Λ^g̅ at x_0 is of the form𝒮̂= k[[X_1,⋯, X_d-1]]/Ifor a proper ideal I of k[[X_1,⋯, X_d-1]].[We use this notation because previously we used the notation 𝒮 to denote the local ring of S_Λ^g̅ at x_0.] Let 𝔪 be the maximal ideal of k [[X_1,⋯, X_d-1]] and let 𝔪̅ be the maximal ideal of 𝒮̂. By Lemma <ref> and Proposition <ref>, there is an isomorphismβ: 𝒮̂/𝔪̅^pk[X]/ X^c. We first notice that if R_1 is any quotient ring of k[[X_1,⋯, X_d-1]] with its maximal ideal 𝔪_1 satisfying 𝔪_1 = 𝔪_1^2 (i.e. R_1 has zero cotangent space), then R_1 =k. In fact, R_1 is noetherian and we have 𝔪_1^l = 𝔪_1 for all l∈_≥ 1, so by Krull's intersection theorem we conclude that 𝔪_1 = 0 and R_1 =k. Assume c=1. Then 𝒮̂/ 𝔪̅ ^p ≅ k, so 𝒮̂ has zero cotangent space and thus 𝒮̂= k. Next we treat the case c≥ 2. Let α be the compositeα: k [[ X_1,⋯, X_d-1]] →𝒮̂/𝔪̅^pk[X]/X^c.Let J = α. It suffices to prove that I= J. Note that because β is an isomorphism we haveI +𝔪^p = J. In the following we prove 𝔪^p ⊂ I, which will imply I= J and hence the theorem. The argument is a variant of <cit.>. Let Y ∈ k [[ X_1,⋯, X_d-1]] be such that α(Y) =X. Since X generates the maximal ideal in k[X]/X^c, we have𝔪 =J +(Y).Then by (<ref>) and (<ref>) we have 𝔪 = I + (Y) + 𝔪 ^p,and so the local ring k[[X_1,⋯, X_d-1]]/ (I+(Y)) has zero cotangent space. We have observed that the cotangent space being zero implies that the ring has to be k, or equivalently𝔪 = I + (Y) Now we start to show 𝔪^p ⊂ I. By (<ref>) we have 𝔪^p ⊂ I+ (Y^p), so we only need to prove Y^p∈I. We will show the stronger statement that Y^c ∈ I. By Krull's intersection theorem, it suffices to show that Y^c ∈ I + 𝔪 ^pl for all l≥ 1. In the rest we show this by induction on l. Assume l=1. Note that α (Y^c) =0, so by (<ref> ) we haveY^c ∈ J=I + 𝔪^p.Suppose Y^ c ∈I + 𝔪^pl for an integer l ≥ 1. Write Y^c = i +m,  i ∈ I,  m ∈𝔪^pl.By (<ref>) we know𝔪^pl⊂ ( J + (Y)) ^pl⊂∑ _s=0^pl J^s (Y) ^pl-s.Thus we can decompose m∈𝔪 ^pl into a summ = ∑_s=0 ^pl j_s Y^pl-s,  j_s ∈ J^s.By (<ref>) and (<ref>), we have Y^c = i +∑ _s=0^pl j_s Y ^pl-s . Splitting the summation ∑_ s = 0 ^pl into two sums ∑_s= 0 ^pl-c and ∑_ s = pl-c +1 ^pl and moving the sum ∑_s= 0 ^pl-c to the left hand side, we obtain Y^c - ∑ _s=0^pl-c j_s Y ^pl-s =i + ∑ _s=pl-c+1^pl j_s Y ^pl-s. 
DenoteA : = ∑_s =0^pl-c j_s Y^pl-s-c .Then the left hand side of (<ref>) is equal to (1-A) Y^c. Hence we have(1-A)Y^c = i + ∑_ s= pl-c+1 ^pl j_s Y^pl -s⊂ I + J^pl-c+1(<ref>) I+ (I+𝔪^p) ^pl-c+1 = I + 𝔪 ^p (pl-c+1)⊂ I + 𝔪 ^p(l+1),where for the last inclusion we have used c< p. Since 1-A is a unit in k[[X_1,⋯, X_d-1]] (because c<p), we have Y^c ∈ I + 𝔪^p(l+1). By induction, Y^c ∈ I + 𝔪^pl for all l∈_≥ 1, as desired. Let g ∈ J_b(_p) be regular semisimple and minuscule. Assume ^g ≠∅ and keep the notation of <ref>. Let x_0 ∈ (δ(^♭)∩^g)(k). Let (_d-1, _d) ∈ S_Λ (k) correspond to x_0 via Proposition <ref> and define λ, c as in Definition <ref>. Assume p>c. Then the complete local ring of δ(^♭)∩^g at x_0 is isomorphic to k[X]/ X^c. Moreover, we have c=m(Q(T))+1/2, where Q(T) as in Theorem <ref>. In particular, 1≤ c≤ n/2. The first part follows immediately from Proposition <ref> and Theorem <ref>. It remains to show thatc=m(Q(T))+1/2.Suppose x_0∈_Λ' for some vertex lattice Λ' (not necessarily equal to Λ = L(g)^∨). Let L be the associated special lattice. Then we have (<ref>)(Λ')_W^∨⊆ L⊆Λ'_W, (Λ')_W^∨⊆Φ(L)⊆Λ'_W.Hence the eigenvalue λ of g̅ on Φ(ℒ_d)/ℒ_d-1≅ (L+Φ(L))/L appears among the eigenvalues of g̅ on Λ' /(Λ')^∨, and so the minimal polynomial of g̅ on Λ' /(Λ')^∨ in 𝔽_p[T] is equal to Q(T) by the proof of Theorem <ref>. Notice that the characteristic polynomial of g̅ on Φ(ℒ_d) (in k[T]) dividesR(T)Q(T) (the characteristic polynomial of g̅ on Λ'_W/L(g)) and also is divided by R(T) (the characteristic polynomial of g̅ on (Λ')_W^∨/L(g)). It follows that c, the multiplicity of λ ofg̅ on Φ(ℒ_d), is equal to the multiplicity of λ in R(T)Q(T). The desired formula for c then follows sincem(Q(T))+1=2· the multiplicity ofQ(T) in R(T)Q(T).Finally, we note that m(Q(T)) is a positive odd integer not greater than the degree of P(T), and the latter, being the type of the vertex lattice Λ = L(g)^∨, is an even integer ≤ t_max (cf. <ref>). The bound for c follows from the value of t_max given in <ref>.§.§ Root computationHaving chosen the ordered basis e_1,⋯, e_d, f_1,⋯, f_d of Ω, we have a corresponding trivialized maximal torus 𝕋 of 𝔾 (i.e. a maximal torus together with an isomorphism to _m^d). Denote its natural characters byϵ_1,⋯, ϵ_d.Then all the roots of (𝕋, 𝔾) are given by α_i,j : = ϵ_i + ϵ_j,   1 ≤ i < j ≤ dβ_i,j : = ϵ_i - ϵ_j,   1 ≤ i < j ≤ d- α_i,j,  - β_i,j ,   1 ≤ i < j ≤ d.We denote byA_i,j :=(𝔾) _α_i,j 𝔄_i,j: =(𝔾) _-α_i,j B_i,j: =(𝔾) _β_i,j 𝔅_i,j : =(𝔾) _-β_i,j. We use the symbols A_i,j, etc. to denote a chosen basis vector of A_i,j, etc.We have [A_i,j, 𝔄_i,j ] ≡ [B_i,j, 𝔅_i,j] ≡ 0 𝕋.Now P^+ is the span of 𝕋∪A_i,j, B_i,j, 𝔅_i,j |  1≤ i<j ≤ d .P^- is the span of 𝕋∪A_i,j, B_i,j  |  1≤ i < j ≤ d ∪𝔅_i,j  |   1≤ i < j <d∪𝔄_i,d |  1≤ i < d.P_0 is the span of 𝕋∪A_i,j, B_i,j  |  1≤ i < j ≤ d ∪𝔅_i,j  |   1≤ i < j <d .(We see that indeed Φ switches P^+ and P^- and we have P^+ ∩ P^- =P_0.)We may identify 𝒯_x_0 S_Λ ^+ =P^-/ P_0 with the span of 𝔄_i,d |   1≤ i < d.From this we see that _k 𝒯_x_0 S_Λ ^+ is indeed d-1, as expected.Now we investigate the vector space 𝒯_x_0 (S_Λ ^+) ^g̅. Recall that this is the same as ( P^- /P_0)^ h_0 = X∈ P^- /P_0 | ( h _0) X ≡ XP_0.We first study the equation [ϖ , X] ≡ 0P_0for ϖ∈ P_0 and X ∈ P^- /P_0.Now suppose we have an element ϖ of P_0, of expansion ϖ = τ + ∑ _i< j <d( a_i,j A_i,j + b_i,j B_i,j + b̅_i,j𝔅_i,j) + ∑_i<d (a_i,d A_i,d + b_i,d B_i,d),where τ∈𝕋. 
Suppose we are also given

X = ∑_1≤ i<d X^i 𝔄_i,d ∈ span_k{𝔄_i,d | 1≤ i<d} = P^-/P_0.

The following relations are automatic:

[A_i,j, X] ≡ 0 (mod P_0),  ∀ 1≤ i<j≤ d-1,
[A_i,d, X] ≡ [B_i,d, X] ≡ 0 (mod P_0),  ∀ 1≤ i<d.

Hence [ϖ, X] mod P_0 depends only on the component τ + ∑_i<j<d (b_i,j B_i,j + b̅_i,j 𝔅_i,j) of ϖ. In the following, i,j,i',j' are indices with 1≤ i<j<d, 1≤ i'<d, 1≤ j'<d. Write ∼ for collinearity. We have

[𝔅_i,j, 𝔄_j',d] ∼ δ_j,j' 𝔄_i,d,  [B_i,j, 𝔄_i',d] ∼ δ_i,i' 𝔄_j,d.

We may rescale the B_i,j's and 𝔅_i,j's to arrange

[𝔅_i,j, 𝔄_j,d] = 𝔄_i,d,  [B_i,j, 𝔄_i,d] = 𝔄_j,d.

We have, modulo P_0,

[ϖ, X] ≡ [ τ + ∑_i<j<d (b_i,j B_i,j + b̅_i,j 𝔅_i,j),  X ]
= [τ, X] + ∑_i<j<d b_i,j X^i 𝔄_j,d + ∑_i<j<d b̅_i,j X^j 𝔄_i,d
= [τ, X] + ∑_1≤ l<d ( ∑_i<l b_i,l X^i + ∑_l<j<d b̅_l,j X^j ) 𝔄_l,d.

Also

[τ, X] = ∑_1≤ l<d X^l·(-α_l,d)(τ) 𝔄_l,d.

Write τ^l := (-α_l,d)(τ). Hence [ϖ, X] ≡ 0 (mod P_0) if and only if

τ^l X^l + ∑_i<l b_i,l X^i + ∑_l<j<d b̅_l,j X^j = 0,  ∀ 1≤ l<d.

We view this as a system of d-1 linear equations in the unknowns X^1,⋯,X^d-1. When ϖ satisfies a suitable regularity condition, this system should have nullity 0 or 1.
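To make the final remark concrete, here is a small numerical sketch (our own illustration, not part of the argument) of the linear system above in the unknowns X^1,…,X^d-1. We draw the coefficients τ^l, b_i,l, b̅_l,j at random as a stand-in for a "generic" ϖ; all variable names and the choice d=5 are ours.

import numpy as np

# The system: tau^l X^l + sum_{i<l} b_{i,l} X^i + sum_{l<j<d} bbar_{l,j} X^j = 0
# for 1 <= l < d, viewed as (d-1) equations in X^1, ..., X^{d-1}.
rng = np.random.default_rng(0)
d = 5
tau = rng.standard_normal(d - 1)            # tau^l = (-alpha_{l,d})(tau)
b = rng.standard_normal((d - 1, d - 1))     # b_{i,l},    used for i < l
bbar = rng.standard_normal((d - 1, d - 1))  # bbar_{l,j}, used for l < j

T = np.zeros((d - 1, d - 1))                # coefficient matrix, row l per equation
for l in range(d - 1):
    T[l, l] = tau[l]
    for i in range(l):
        T[l, i] = b[i, l]
    for j in range(l + 1, d - 1):
        T[l, j] = bbar[l, j]

nullity = (d - 1) - np.linalg.matrix_rank(T)
print("nullity of the system:", nullity)    # 0 for generic coefficients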
Network Structure and Naive Sequential Learning
========================================

We thank Daron Acemoglu, J. Aislinn Bohren, Jetlir Duraj, Ben Enke, Erik Eyster, Drew Fudenberg, Ben Golub, David Laibson, Jonathan Libgober, Margaret Meyer, Pooya Molavi, Xiaosheng Mu, Matthew Rabin, Ran Spiegler, Tomasz Strzalecki, Alireza Tahbaz-Salehi, Omer Tamuz, Linh T. T, Muhamet Yildiz, and three anonymous referees for useful comments.

Krishna Dasaratha (Harvard University) and Kevin He (California Institute of Technology and University of Pennsylvania)

First version: January 17, 2017. This version: February 29, 2020

We study a sequential-learning model featuring a network of naive agents with Gaussian information structures. Agents apply a heuristic rule to aggregate predecessors' actions. They weigh these actions according to the strengths of their social connections to different predecessors. We show this rule arises endogenously when agents wrongly believe others act solely on private information and thus neglect redundancies among observations. We provide a simple linear formula expressing agents' actions in terms of network paths and use this formula to characterize the set of networks where naive agents eventually learn correctly. This characterization implies that, on all networks where later agents observe more than one neighbor, there exist disproportionately influential early agents who can cause herding on incorrect actions. Going beyond existing social-learning results, we compute the probability of such mislearning exactly. This allows us to compare likelihoods of incorrect herding, and hence expected welfare losses, across network structures. The probability of mislearning increases when link densities are higher and when networks are more integrated. In partially segregated networks, divergent early signals can lead to persistent disagreement between groups.

§ INTRODUCTION

Consider an environment with a sequence of agents facing the same decision problem in turn, where each agent considers both her private information and the behavior of those who came before her in reaching a decision. When consumers choose between rival products, for instance, their decisions are often informed by the choices of early customers. When doctors decide on a treatment for their patients, they consult best practices established by other clinicians who came before them. Additionally, when a new theory or rumor is introduced into a society, individuals are swayed by the discussions and opinions of those who have already taken a clear stance on the new idea.

A key feature of these examples is that agents observe only the behavior of a certain subset of their predecessors. For example, a consumer may know about her friends' recent purchases, but not the product choices of anyone outside her social circle. In general, each sequential social-learning problem has an observation network that determines which predecessors are observable to each agent. Observation networks associated with different learning problems may vary in density, extent of segregation, and other structural properties.
Hence, our central research question: how does the structure of the observation network affect the probability of correct social learning in the long run?

To answer this question, our model must capture key behavioral patterns in how individuals process social information in these learning settings. Empirical research on social learning suggests that humans often exhibit inferential naiveté, failing to understand that their predecessors' actions reflect a combination of private information and the inference those predecessors have drawn from the behavior of still others (e.g., <cit.>). Returning to the examples, a consumer may mistake a herd on a product for evidence that everyone has positive private information about the product's quality. In an online community, a few early opinion-makers can make a rumor go viral, due to people not thinking through how the vast majority of a viral story's proponents are just following the herd and possess no private information about the rumor's veracity.

The present study examines the effect of the observation network on the extent of learning, in a setting where players suffer from inferential naiveté. We analyze the theoretical implications of a tractable log-linear learning rule that aggregates observations in a manner related to the DeGroot heuristic. Agents who fail to account for correlations in their observations (as in <cit.>) choose actions according to this log-linear rule. We also introduce weighted networks and a weighted version of log-linear behavior, where agents place different weights on different neighbors' actions. We show that such decision weights arise when agents additionally misperceive the precisions of predecessors' signals according to the strengths of their links to said predecessors.

When combined with a Gaussian informational environment, our weighted log-linear learning rule lets us compute naive agents' exact probability of taking the correct action ("learning accuracy") on arbitrary weighted networks. In contrast, the existing literature on social learning has focused on whether the long-run learning outcome exhibits certain properties (e.g., convergence to dogmatic beliefs in the wrong state) with positive probability, but not on how these probabilities vary across environments. In settings where learning is imperfect (i.e., society does not almost surely learn the correct state in the long run), we obtain a richer characterization of the learning outcome and compute comparative statics of learning accuracy with respect to network structures.

Under our weighted log-linear learning rule, actions take a simple form: given a continuous action space and a binary state space, we can express each agent's action as a log-linear function of her predecessors' private signal realizations with coefficients depending on the weighted network structure. We exploit this expression to develop a necessary and sufficient condition for society to learn completely: no agent has too much "influence." Imperfect learning is the leading case: some agent in the network must have disproportionate influence whenever all but finitely many people observe more than one neighbor. Since this condition applies to a very broad class of networks, our analysis focuses on comparing differentially inefficient learning outcomes across networks.
The detailed comparative statics we obtain from this approach are crucial: imperfect learning implies a wide range of welfare losses on different networks.

Introducing naiveté generates predictions matching empirical observations in several domains where the rational model's implications are unclear or counterfactual. We prove that increasing network density leads to more inaccurate social-learning outcomes in the long run. This prediction is supported by the experimental results in our companion paper <cit.>, where we find human subjects' accuracy gain from social learning is twice as large on sparse networks compared to dense networks. As another example, disagreement among different communities is common in practice. In the domain of product adoption, respective subcommunities frequently insist upon the superiority of their preferred products. We prove that if agents' actions only coarsely reflect their beliefs and society is partially segregated, then two social subgroups can disagree forever.[Segregation is extensively documented in social networks (see <cit.> for a survey).] Because of the limited information conveyed by actions, disagreement can persist even when agents observe the actions of unboundedly many individuals from another group. This presents a sharp contrast with Bayesian social-learning models (and other leading learning models such as DeGroot learning), where asymptotic agreement is a robust prediction.

§.§ Related Literature

§.§.§ Effect of Network Structure on Learning

Much of the literature on how network structure matters for learning outcomes has focused on networked agents repeatedly guessing a state while learning from the same set of neighbors each period (e.g., <cit.>). The leading behavioral model here is the DeGroot heuristic, which forms a belief in each period by averaging the beliefs of neighbors in the previous period. A key prediction of DeGroot learning is that society converges to a long-run consensus <cit.>, which will be correct in large networks as long as the network is not too unbalanced <cit.>. Much of the analysis focuses on how network structure (e.g., homophily) matters for the speed of convergence to correct consensus beliefs <cit.>. We find that in a sequential setting, natural changes in network structure matter for asymptotic accuracy, not only for speed of learning. Changing network density, which has no effect on DeGroot learning in large networks, can substantially alter the probability that a society learns correctly. One intuition for this difference is that DeGroot agents assign weights adding up to 1 to their neighbors, but agents in our setting have increasing out-degrees with increasing network density and therefore can overweight their social information. Homophily also matters for this probability and even for whether consensus is ever reached.

While DeGroot proposes the averaging rule as an ad hoc heuristic, several recent papers have developed behavioral microfoundations for learning in the repeated-interaction setting <cit.>. These models closely resemble ours at the level of individual behavior, but their predictions about society's long-run beliefs are more in line with DeGroot. As such, changes in network structure again have a limited scope for affecting learning outcomes in this literature.

§.§.§ Sequential Social Learning

We consider the same environment as the extensive literature on sequential social learning beginning with <cit.> and <cit.>.
<cit.> and <cit.> characterize network features that lead to correct asymptotic learning for Bayesians who move sequentially. By providing a thorough understanding of rational learning in sequential settings, this literature provides a valuable benchmark as we study naive learning. We find that among network structures where Bayesian agents learn asymptotically, there is large variation in the probability of mislearning for naive agents.

Several authors look at sequential behavioral learning on a particular network structure, usually the complete network <cit.>. We characterize several ways in which the choice of network structure matters for the distribution of long-run outcomes. <cit.> exhibit a general class of social-learning rules, which includes the weighted log-linear rule we study for certain values of the weights, where mislearning occurs with positive probability. We go beyond this general result by deriving expressions for the exact probabilities of mislearning on different networks, whose associated welfare losses cannot be compared using the binary classification of <cit.>.

§ MODEL

§.§ Sequential Social Learning on a Weighted Network

There are two possible states of the world, ω∈{0,1}, both equally likely. There is an infinite sequence of agents indexed by i∈ℕ. Agents move in order, each acting once. On her turn, agent i observes a private signal s_i∈ℝ. Private signals (s_i) are Gaussian and independent and identically distributed (i.i.d.) given the state. When ω=1, we have s_i∼𝒩(1,σ^2) for some conditional variance σ^2>0. When ω=0, we have s_i∼𝒩(-1,σ^2).

In addition to her private signal, agent i also observes the actions of previous agents. Then, i chooses an action a_i∈[0,1]. In our microfoundation for Definition <ref>, agent i chooses a_i to maximize the expectation of the utility u_i(a_i,ω) = -(a_i-ω)^2 given her belief about ω, which we describe later. So her chosen action corresponds to the probability she assigns to the event {ω=1}.

We find it convenient to work with the following change of variables: let

s̃_i := ln( ℙ[ω=1|s_i] / ℙ[ω=0|s_i] ) and ã_i := ln( a_i / (1-a_i) ).

In words, s̃_i is the log-likelihood ratio of the events {ω=1} and {ω=0} given signal s_i, while it is easy to show that ã_i is the log-likelihood ratio of {ω=1} and {ω=0} corresponding to action a_i. That is to say, if a_i is optimal given i's beliefs, then ã_i is the log-likelihood ratio of {ω=1} and {ω=0} according to i's beliefs. Note that the transformations from s_i to s̃_i and from a_i to ã_i are bijective, so no information is lost when we relabel variables.

Agents are linked to all of their predecessors on a weighted network, with a lower-triangular adjacency matrix M where all diagonal entries are equal to 0. For i>j, the weight of the link from i to j is given by M_i,j∈[0,1]. The weights of the edges determine the relative importance agents place on others' actions in forming their beliefs. In Section <ref>, we derive comparative statics with respect to the network structure. Because studying continuous changes in the network is more tractable than discrete changes, we consider a model that allows interior network weights.

Throughout, we study naive agents who choose actions equal to a weighted sum of their observations according to the following log-linear updating rule. Agents use the weighted log-linear rule if each agent i plays

ã_i = s̃_i + ∑_j<i M_i,j ã_j.

In words, each agent i's log action is a weighted sum of her predecessors' log actions and her own log signal.
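The rule is straightforward to simulate. The following minimal sketch is our own illustration (function names and parameter values are ours); it uses the Gaussian signal structure above and the relation s̃_i = 2s_i/σ^2 established in the Appendix.

import numpy as np

def simulate(M, sigma, omega, rng):
    """Simulate the weighted log-linear rule on a strictly lower-triangular
    weight matrix M, where M[i, j] is the weight agent i puts on j < i."""
    n = M.shape[0]
    s = rng.normal(1.0 if omega == 1 else -1.0, sigma, size=n)  # private signals
    s_tilde = 2.0 * s / sigma**2            # log-likelihood ratios of signals
    a_tilde = np.zeros(n)
    for i in range(n):
        a_tilde[i] = s_tilde[i] + M[i, :i] @ a_tilde[:i]  # the rule above
    return 1.0 / (1.0 + np.exp(-a_tilde))   # actions a_i in (0, 1)

rng = np.random.default_rng(1)
n, q = 50, 0.5
M = np.tril(np.full((n, n), q), k=-1)       # q-uniform weighted network
print(simulate(M, sigma=2.0, omega=1, rng=rng)[-5:])  # late agents' actions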
The network M exogenously determines the relative influences of different predecessors' behavior on i's play, with j's influence proportional to the strength of i's social connection to her. By contrast, a society of rational agents would put endogenous weights on others' actions that are not simply proportional to the strengths of the network links between them. For example, if agents only observe the actions of linked neighbors, rational agents would play the unique perfect Bayesian equilibrium of the social-learning game, in which case some equilibrium decision weights may be negative.[We omit the proof that actions are log-linear at the perfect Bayesian equilibrium. This can be shown by induction, and the key step is a calculation similar to Lemma <ref>.]

The formula in Definition <ref> resembles the DeGroot updating rule. A key distinction is that we allow for agents to have any out-degree, while the DeGroot heuristic requires all agents' weights to sum to 1. In an unweighted network, any agent with multiple observations has an out-degree greater than 1. This distinction is not just a normalization, but is, in fact, the source of redundancy under naive inference.

§.§ Microfoundation for Weighted Log-Linear Rule

In this subsection, we provide a psychological microfoundation for the weighted log-linear rule from Definition <ref>. We first show that on unweighted networks (i.e., when each M_i,j is either 0 or 1), this rule follows from a primitive assumption about agents' inference.

A growing body of recent evidence in psychology and economics shows that people learning from peers are often not fully correct in their treatment of social structure <cit.>. Instead of calculating the optimal Bayesian behavior that fully takes into account all they know about the network and signal structure, agents often apply heuristic simplifications to their environment. When networks are complicated and/or uncertain, determining Bayesian behavior can be intractable <cit.>, and these heuristic learning rules become especially prevalent <cit.>. Motivated by this observation, we consider the following behavioral assumption.

Each agent wrongly believes that each predecessor chooses an action to maximize her expected payoff based only on her private signal, and not on her observation of other agents.

This inferential mistake can be equivalently described as agent i misperceiving M_j,k=0 for all k<j<i. Under this interpretation, i acts as if her neighbors do not take into account their own predecessors' actions.

In the sequential-learning literature, Assumption <ref> was first studied on the complete network by <cit.>, who coined the term "best-response trailing naive inference" (BRTNI) to describe this behavior. The laboratory games in <cit.> and <cit.> find evidence for this behavioral assumption. Agents who make this inferential mistake use the log-linear rule on unweighted networks, providing a psychological microfoundation for the behavior we study.

On an unweighted network where agents observe only the actions of linked predecessors, Assumption <ref> implies agents use the weighted log-linear rule.

Due to the inferential mistake, agent i wrongly infers that j's log action equals her log signal. (This inference is possible since the continuum action set is rich enough to exactly reveal beliefs of predecessors.)
The action a_i is the product of the relevant likelihoods because an agent satisfying Assumption <ref> thinks her observations are based on independent information, and therefore ã_i is the sum of the corresponding log-likelihood ratios.

Inference under Assumption <ref> is cognitively simple in that it does not rely on agents' knowledge about the network (beyond their own neighborhoods) or even knowledge about the order in which their predecessors moved. Our model therefore applies even to complex environments with random arrival of agents. In such environments, Assumption <ref> may be more realistic than assuming full knowledge about the observation structure and move order.

Next we give a microfoundation for the same behavior on weighted networks. We provide an interpretation of network weights that formalizes the idea that agents place more trust in neighbors with whom their connections are stronger: we suppose that agents underestimate the precision of others' private signals in a way that depends on M_i,j.

Given network weight M_i,j∈[0,1], agent i believes j's private signal has conditional variance σ^2/M_i,j given the state.

<cit.>'s meta-analysis of sequential social-learning experiments finds that laboratory subjects underuse social information relative to their own private signals. Our Assumption <ref> is consistent with this evidence, but also allows for different degrees of underuse for different predecessors. Weaker network connections formally correspond to predecessors whose signals are believed to be less informative about the state or less relevant. Conversely, if we know that i acts as if j's signal has conditional variance V_i,j≥σ^2, then we can construct a weighted network with weights M_i,j=σ^2/V_i,j.

The next result shows the combination of the inferential mistake about others' social information and the underestimation of others' signal precisions (Assumptions <ref> and <ref>) provides a microfoundation for the weighted log-linear rule.

Agents who satisfy Assumptions <ref> and <ref> use the weighted log-linear rule.

The key step is that the log-likelihood ratio induced by a Gaussian signal is a linear transformation of that signal, and hence itself Gaussian.

§.§ Complete Learning and Mislearning

We define what it means for society to learn completely in terms of convergence of actions. Society learns completely if (a_n) converges to ω in probability.

Since a_n reflects agent n's belief in {ω=1} in our microfoundation of weighted log-linear inference, this definition also describes a property about the convergence of beliefs.[In general, we treat a_n as the belief of a naive agent who plays ã_n.] In a setting where society learns completely, agent n becomes very likely to believe strongly in the true state of the world as n grows large.

One failure of complete learning is when society becomes fully convinced of the wrong state of the world with positive probability, an event we call mislearning. Society mislearns when lim_n→∞ a_n=0 but ω=1, or when lim_n→∞ a_n=1 but ω=0.

Mislearning is not the only obstacle to complete learning. Consider a network where, for i≥3, we have M_i,1=M_i,2=1 and M_i,j=0 for all j∉{1,2}. Clearly this society neither learns completely nor mislearns with positive probability. Instead, agents' beliefs almost surely do not converge.

§ SOCIAL INFLUENCE AND LEARNING

In this section, we develop a necessary and sufficient condition on the network for society to mislearn. We argue that this condition is satisfied by a large class of networks of economic relevance.
§.§ Path-Counting Interpretation of Actions

We now show that with naive agents, actions have a simple (log-)linear expression in terms of paths in the network. Unlike the expression in Definition <ref>, this next result expresses actions in terms of only signal realizations and the network structure, making no reference to predecessors' actions. Let M[n] refer to the n×n upper-left submatrix of M.

Consider any weighted network M. For each n, the actions of the first n agents are determined by

(ã_1, ⋯, ã_n)^⊤ = (I-M[n])^-1·(s̃_1, ⋯, s̃_n)^⊤.

So, ã_i is a linear combination of (s̃_j)_j=1^i, with coefficients given by the numbers of weighted paths from i to j in the network with adjacency matrix M.

From a combinatorial perspective, the formula says that the influence of j's signal on i's action depends on the number of weighted paths from i to j. In unweighted networks where all entries in M are 1 or 0, this is just the number of paths. In general, "weighted paths" means the path passing through agents i_0,...,i_K is counted with weight ∏_k=0^K-1 M_i_k,i_k+1.

Our Proposition <ref> resembles a formula for agents' actions in <cit.>. In a setting of repeated interaction with a fixed set of neighbors instead of sequential social learning, <cit.> also find that the influence of i's private information on j's period-t posterior belief depends on the number of length-t paths from i to j in the network.

§.§ Condition for Complete Learning

We now use the representation result of Proposition <ref> to study which networks lead to mislearning by naive agents. We define below a notion of network influence for the sequential social-learning environment, which plays a central role in determining whether society learns completely in the long run.

Let b_i,j := ((I-M[i])^-1)_i,j be the number of weighted paths from i to j in network M. Because of Proposition <ref>, these path counts are important to our analysis. For n>i, the influence of i on n is

𝕀(i→n) := b_n,i / ∑_j=1^n b_n,j.

That is to say, the influence of i on n is the fraction of paths from n that end at i.

A different definition of influence appears in <cit.>, who study DeGroot learning in a network where agents act simultaneously each period. For them, the influence of an agent i is determined by the unit left eigenvector of the belief-updating matrix, which is proportional to i's degree in an undirected network with symmetric weights. Both definitions are related to the proportion of walks terminating at an agent, but because of the asymmetry between earlier and later agents in the sequential setting, the distribution of walks tends to be more unbalanced.

For Proposition <ref> only, we consider networks that satisfy the following connectedness condition: network M satisfies the connectedness condition if there is an integer N and a constant C>0 such that for all i>N, there exists j<N with b_i,j≥C.

Intuitively, this says that all sufficiently late agents are indirectly influenced by some early agent. An unweighted network satisfies the connectedness condition if and only if there are only finitely many agents who have no neighbors. If such a network violates the connectedness condition, then clearly the infinitely many agents without neighbors will prevent society from learning completely. All weighted networks studied in Section <ref> also satisfy the connectedness condition.

Consider any weighted network satisfying the connectedness condition.
Society learns completely if and only if lim_n→∞ 𝕀(i→n)=0 for all i.

Proposition <ref> says that beliefs always converge to the truth if and only if no agent has undue influence in the network. This is a recurring insight in research on social learning on networks, beginning with the "royal family" example and related results in <cit.>. Other examples where excessive influence hinders social learning in networks include <cit.>, <cit.>, and <cit.>. The main contribution of Proposition <ref> is to identify the relevant measure of influence in our sequential-learning setting with naive agents. In our setting, unlike on large unordered networks as in <cit.>, the ordering of agents creates an asymmetry that prevents society from learning completely on most natural networks. Early movers influence many successors, an asymmetry unique to the sequential-learning setting. For instance, the results of <cit.> imply that when agents move simultaneously and every agent weights every other agent equally, society converges to complete learning as the size of the network grows. But as we show in Section <ref>, society does not learn completely in the uniform weighted network where each agent is connected to every predecessor with the same weight.

The idea behind the proof is that if there were some i and ϵ>0 such that 𝕀(i→n)>ϵ for infinitely many n, then i exerts at least ϵ influence on all these future players. Since s̃_i is unbounded, there is a rare but positive-probability event where i gets such a strong but wrong private signal that any future player who puts ϵ weight on s̃_i and (1-ϵ) weight on other signals would come to believe in the wrong state of the world with high probability. But this would mean infinitely many players have a high probability of believing in the wrong state of the world, so society fails to learn completely. To gain an intuition for the converse, first observe that ã_n = ‖b⃗_n‖_1 ∑_i=1^n 𝕀(i→n) s̃_i. In the event that ω=1, the mean of ã_n converges to infinity with n. So, provided the variance of ã_n is small relative to its mean, ã_n will converge to infinity in probability and society will learn completely. Since the log signals (s̃_i) are i.i.d., the variance of ã_n is small relative to its mean precisely when all of the weights 𝕀(i→n) in the summand are small; this is guaranteed by the condition on influence lim_n→∞ 𝕀(i→n)=0.

We now argue, both analytically and through an example, that the condition for complete learning in Proposition <ref> is violated by a large class of weighted networks. The out-degree of an agent i is ∑_j<i M_i,j, interpreted as the total number of neighbors who directly affect i's play. We first show that on any network where all but finitely many agents have out-degree at least 1+ϵ for some ϵ>0, complete learning fails.

Suppose there exists ϵ>0 so that ∑_j<i M_i,j ≥ 1+ϵ for all except finitely many agents i. Then society does not learn completely.

The proof establishes that such a network satisfies the connectedness condition, but the influence of at least one of the early agents does not converge to 0, so complete learning fails by Proposition <ref>. The intuition is that if an influential later agent has an out-degree greater than 1, then the earlier agents whose actions indirectly affect her must have even more influence; the short sketch below illustrates this on the complete network.
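The following computation is our own illustration (parameter choices are arbitrary): on the complete unweighted network, where every agent observes all predecessors, agent 1's influence stays at 1/2 rather than vanishing, so the condition of Proposition <ref> fails.

import numpy as np

n = 60
M = np.tril(np.ones((n, n)), k=-1)         # complete network: all weights 1
B = np.linalg.inv(np.eye(n) - M)           # B[n-1, i-1] = b_{n,i}, weighted path counts
influence_of_1 = B[:, 0] / B.sum(axis=1)   # I(1 -> n) = b_{n,1} / ||b_n||_1
print(influence_of_1[[1, 9, 29, 59]])      # approximately [0.5, 0.5, 0.5, 0.5]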
Formally, the proof constructs a correspondence between weighted paths ending at early agents and weighted paths ending at later agents.

The condition in Proposition <ref> is satisfied by all of the weighted networks studied in Section <ref>, which all feature mislearning with positive probability. The network in Remark <ref> also satisfies the condition in Proposition <ref> and almost surely leads to nonconvergence of beliefs.

As an additional example, consider a network where link weights decay exponentially in distance, so M_i,j=δ^i-j for some δ≥0. When the rate of decay is strictly above the threshold of 1/2, late enough agents have out-degrees exceeding 1+ϵ for some ϵ>0, so society does not learn completely by Proposition <ref>. When the rate of decay is strictly below the same threshold, we can show the connectedness condition fails and agents' beliefs do not converge to ω due to a lack of information. At the threshold value of δ=1/2, private signals of all predecessors are given equal weight, so the law of large numbers implies complete learning. This highlights the fragility of complete learning in our model.

Suppose M_i,j=δ^i-j for some δ≥0. Society learns completely if and only if δ=1/2.

Details of the arguments are provided in the Appendix.

§ PROBABILITY OF MISLEARNING AND NETWORK STRUCTURE

In this section, we compare the probability of mislearning in networks where complete learning fails by Proposition <ref>. To do so, we first derive a formula for the probability of mislearning as a function of the observation network. Then, applying this expression to several canonical network structures, we compute comparative statics of this probability with respect to network parameters. The first network structure we consider assigns the same weight to each link. Next, we study a homophilic network structure with agents split into two groups, allowing different weights on links within groups and between groups.

§.§ Probability of Mislearning

Due to the Gaussian signal structure, we can give explicit expressions for the distributions of agent actions in each period. We show that the probability that agent n is correct about the state is related to the ratio of the ℓ_1 norm to the ℓ_2 norm of the vector of weighted path counts to n's predecessors, b⃗_n := (b_n,1,...,b_n,n). The ratio ‖b⃗_n‖_1/‖b⃗_n‖_2 can be viewed as a measure of distributional equality for the vector of weights b⃗_n.[In fact, the ratio of ℓ_1 to ℓ_2 norm has been used in the applied mathematics literature as a measure of normalized sparsity.] Indeed, among positive n-dimensional vectors b⃗_n with ‖b⃗_n‖_1=1, the ℓ_1/ℓ_2 ratio is minimized by the vector b⃗_n=(1,0,...,0) and maximized by the vector b⃗_n=(1/n,1/n,...,1/n).

We can express in terms of the network structure the ex ante probability that agent n puts more confidence in the state being ω=1 when this is, in fact, true. This gives the key result that later lets us compare the probabilities of mislearning on different networks. On any network, the probability that agent n thinks the correct state is more likely than the incorrect one is

ℙ[ã_n>0|ω=1] = Φ( (1/σ)·‖b⃗_n‖_1/‖b⃗_n‖_2 ),

where Φ is the standard Gaussian distribution function. As ‖b⃗_n‖_1/‖b⃗_n‖_2 increases, the probability of agent n playing higher actions in state ω=1 also increases. In other words, the agent is more likely to be correct about the state when the vector of path counts is more evenly distributed; the short sketch below illustrates the formula.
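Concretely, the accuracy formula can be evaluated directly from the network. This sketch is ours (it assumes scipy is available; parameter values are arbitrary) and compares a sparse and a dense uniform network.

import numpy as np
from scipy.stats import norm

def accuracy(M, sigma):
    """P[a-tilde_n > 0 | omega = 1] = Phi(||b_n||_1 / (sigma * ||b_n||_2))."""
    n = M.shape[0]
    b = np.linalg.inv(np.eye(n) - M)[-1]   # path counts b_{n,i} for the last agent
    return norm.cdf(np.sum(np.abs(b)) / (sigma * np.linalg.norm(b)))

n, sigma = 40, 2.0
sparse = np.tril(np.full((n, n), 0.1), k=-1)   # q = 0.1 uniform network
dense = np.tril(np.full((n, n), 0.9), k=-1)    # q = 0.9 uniform network
print(accuracy(sparse, sigma), accuracy(dense, sigma))  # sparse beats dense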
This is intuitive: the agent is more likely to be correct when her action averages many independent signals with roughly equal weights, and less likely to be correct when her action puts disproportionately heavy weight on a few signals.

The proof of Proposition <ref> first expresses ã_n = ∑_i=1^n b_n,i s̃_i using Proposition <ref> and then observes that (s_i) are distributed i.i.d. 𝒩(1,σ^2) conditional on ω=1. This means (s̃_i) are also conditionally i.i.d. Gaussian random variables, since the proof of Lemma <ref> establishes that s̃_i = 2s_i/σ^2. As a sum of conditionally i.i.d. Gaussian random variables, the action ã_n is itself Gaussian. The result follows from calculating the mean and variance of this sum.

For the remainder of this section, we study specific weighted networks where the ratio ‖b⃗_n‖_1/‖b⃗_n‖_2 can be expressed in terms of interpretable network parameters. Our basic technique is to count paths on a given network using an appropriate recurrence relation, and then to apply Proposition <ref>. This allows us to relate network parameters to the probability distribution over learning outcomes.

§.§ Uniform Weights

The simplest network we consider assigns the same weight q∈[0,1] to each feasible link. By varying the value of q, we can ask how link density affects the probability of mislearning, which we now characterize.

Consider the q-uniform weighted network. When 0<q≤1, almost surely agents' actions a_n converge to 0 or 1. The probability that society mislearns is Φ( -(1/σ)·√((q+2)/q) ). This probability is strictly increasing in q.

The first statement of the proposition tells us that agents eventually agree on the state of the world, and that these beliefs are arbitrarily strong after some time. These consensus beliefs need not be correct, however. The probability of society converging to incorrect beliefs is nonzero for all positive q, and increases in q. When the observational network is more densely connected, society is more likely to be wrong, as in Figure <ref>.

When the observation network is sparse (i.e., q is low), early agents' actions convey a large amount of independent information because they do not influence each other too much. This facilitates later agents' learning. For high q, early agents' actions are highly correlated, so later naive agents cannot recover the true state as easily. A related intuition compares agents' beliefs about network structure to the actual network: as q grows, agents' beliefs about the network weights chosen by their neighbors differ more and more from the true weights. For small q, however, underweighting of social information partially mitigates the error due to Assumption <ref>. To complement this theoretical result, in a companion paper we conduct a sequential-learning experiment to evaluate a related comparative static <cit.>. In line with the intuition above, we find that human subjects indeed exhibit lower long-run accuracy in the learning game when the density of the observation network increases.

The proof relies on the recurrence relation b_n,i = (1+q)b_n-1,i. To see that this recurrence holds, let Ψ[n→i] be the set of all paths from n to i and let Ψ[(n-1)→i] be the set of all paths from n-1 to i. For each ψ∈Ψ[(n-1)→i] passing through agents (n-1),j_1,j_2,...,i, we associate two paths ψ',ψ''∈Ψ[n→i], with ψ' passing through n,j_1,j_2,...,i and ψ'' passing through n,(n-1),j_1,j_2,...,i. This association exhaustively enumerates all paths in Ψ[n→i] as we consider all ψ∈Ψ[(n-1)→i].
Path ψ' has the same weight as ψ since they have the same length, while path ψ'' has q times the weight of ψ since it is longer by 1. This shows that the weight of all paths in Ψ[n→i] is equal to 1+q times the weight of all paths in Ψ[(n-1)→i]; hence, b_n,i = (1+q)b_n-1,i.

The case q=1 is studied in <cit.>, who use a slightly different signal structure. In their setting, <cit.> show that agents' beliefs converge to 0 or 1 almost surely and derive a nonzero lower bound on the probability of converging to the incorrect belief. By contrast, our result gives the exact probability of converging to the wrong belief for any 0<q≤1, under a Gaussian signal structure.

There is a discontinuity at q=0. As q approaches 0, the probability of society eventually learning correctly approaches 1. But when q=0, each agent uses only her own private signal, so there is no social learning. This nonconvergence of actions means that society never learns correctly.

While we have focused on long-run learning accuracy, there is a trade-off between the speed of convergence and asymptotic accuracy for naive agents. The next proposition illustrates an extreme form of this trade-off. Start with a uniform-weights network with any link weight 0<q^*≤1. Sufficiently sparse uniform-weights networks will have worse accuracy than the q^*-uniform network for arbitrarily many early agents due to a lack of information aggregation. However, as implied by Proposition <ref>, late enough agents will have higher accuracy on these very sparse networks than on the q^*-uniform network.

For any 0<q^*≤1 and N∈ℕ, there exists some q̅∈(0,q^*) so that ℙ[ã_n>0|ω=1] is strictly larger on the q^*-uniform weights network than on the q-uniform weights network for all 2≤n≤N and q∈(0,q̅).

§.§ Two Groups

We next consider a network with two groups and different weights for links within groups and between groups. By varying the link weights, we will consider how homophily (i.e., segregation in communication) changes learning outcomes. Odd-numbered agents are in one group and even-numbered agents are in a second group. Each feasible within-group link has weight q_s (s for same) and each feasible between-group link has weight q_d (d for different), so that for i>j, the link M_i,j=q_s if i≡j (mod 2) and M_i,j=q_d otherwise. Figure <ref> illustrates the first four agents in a two-group network. We denote the probability of mislearning with weights q_s and q_d by ξ(q_s,q_d).

Consider the two-groups network with within-group link weight q_s and across-group link weight q_d. When 0≤q_s≤1 and 0<q_d≤1, almost surely agents' actions a_n converge to 0 or 1. The partial derivatives of the mislearning probability ξ(q_s,q_d) satisfy ∂ξ/∂q_d > ∂ξ/∂q_s > 0; i.e., the probability is increasing in q_s and q_d, but increasing q_d has a larger effect than increasing q_s.

The first statement again says that agents eventually agree on the state and eventually have arbitrarily strong beliefs. The fact that ξ is increasing in q_s and q_d is another example of higher link density implying more mislearning. The comparison ∂ξ/∂q_d > ∂ξ/∂q_s tells us that more integrated (i.e., less homophilic) networks are more likely to herd on the wrong state of the world.

Convergence of beliefs is more subtle with two groups, as we might imagine the two homophilic groups holding different beliefs asymptotically. This does not happen because agents have continuous actions that allow them to precisely convey the strength of their beliefs.
As such, eventually one group will develop sufficiently strong beliefs to convince the other given any arbitrarily weak connection q_d>0 between groups. (In Section <ref>, however, we show that disagreement between two homophilic groups is possible with a coarser action space.)

To see that convergence must occur, observe that the belief of a later agent n depends mostly on the number of paths from that agent to early agents (and those agents' signal realizations). When n is large, most paths from agent n to an early agent pass between the two groups many times. So the number of paths does not depend substantially on agent n's group. Put another way, when q_s≫q_d>0 and n is large, agent n has many more length-1 paths to her own group than to the other group, but roughly the same total number of paths across all lengths to both groups. Therefore, agent n's belief does not depend substantially on whether n is in the odd group or the even group.[Each path transitions between the two groups, and eventually the probability of ending in a given group is approximately independent of the starting group. This is analogous to a Markov chain approaching its stationary distribution.]

<cit.>'s homophily index equals (q_s-q_d)/(q_s+q_d) for this weighted network. To explore how homophily affects the mislearning probability while holding fixed the average degree of each agent, we consider the total derivative (d/dΔ)ξ(q_s+Δ, q_d-Δ). To interpret, we are considering the marginal effect on mislearning of a Δ increase to all the within-group link weights, coupled with a Δ decrease to all the between-group link weights. These two perturbations, applied simultaneously, leave each agent with roughly the same total degree and increase the homophily index by 2Δ/(q_s+q_d). Using the chain rule and Proposition <ref>,

(d/dΔ)ξ(q_s+Δ, q_d-Δ) = ∂ξ/∂q_s - ∂ξ/∂q_d < 0,

which means increasing the homophily index of the society while fixing average degrees always decreases the probability of mislearning. Note that this result holds regardless of whether society is currently homophilic (q_s>q_d) or heterophilic (q_s<q_d).

An important insight from the literature about social learning on networks is that beliefs converge more slowly on more segregated networks <cit.>. In our model, faster convergence of beliefs tends to imply a higher probability of incorrect beliefs. When beliefs converge quickly, agents are putting far too much weight on early movers, while when beliefs converge more slowly agents wait for more independent information. Since agents eventually agree, segregation helps society form strong beliefs more gradually.

§ DISAGREEMENT

In Section <ref>, we saw that even on partially segregated networks, agents eventually reach a consensus on the state of the world. This agreement relies crucially on the richness of the action space available to agents, which allows agents to communicate the strength of their beliefs. In this section, we modify our model so that the action space is binary and show that the two groups can disagree forever about the state of the world even when the number of connections across the groups is unbounded.

The contrasting results for the binary-actions model versus the continuum-actions model echo a similar contrast in the rational-herding literature, where society herds on the wrong action with positive probability when actions coarsely reflect beliefs <cit.>, but almost surely converges to the correct action when the action set is rich enough <cit.>.
Interestingly, while the rational-herding literature finds that an unboundedly informative signal structure prevents herding on the wrong action even when actions coarsely reflect beliefs <cit.>, we will show below that even with Gaussian signals two groups may disagree with positive probability.

Suppose that the state of the world and the signal structure are the same as in Section <ref>, but agents now choose binary actions a_i∈{0,1}. Agents still maximize the expectation of u_i(a_i,ω) = -(a_i-ω)^2 given their beliefs about ω, under the psychological errors given by Assumptions <ref> and <ref>. This utility function now implies that an agent chooses the action corresponding to the state of the world she believes is more likely. Agents live on the two-groups network from Section <ref>: for i>j, the link M_i,j=q_s if i≡j (mod 2) and M_i,j=q_d otherwise. We assume q_s>q_d>0, so that agents have stronger connections with predecessors from their own groups.

Consider the two-groups network. Suppose q_s>q_d>0 and agents play binary actions. Then there is a positive probability that all odd-numbered agents choose action 0 while all even-numbered agents choose action 1.

Persistent disagreement is sustained even though agent n has approximately nq_d/2 weighted links to agents from the other group (when n is large) taking opposite actions. Our result extends to two groups of unequal sizes as long as, for all later agents, the total number of weighted links to their own group is larger than the total number of weighted links to the other group.

We also get the same result on a random-network analog of the two-groups model, where edges are unweighted and q_s is the probability of link formation within groups while q_d is the probability of link formation between groups. Agents observe only the actions of the predecessors to whom they are linked and wrongly believe all observed actions derive from private signals.[Details of the statement and a proof are available in a previous draft at <https://arxiv.org/pdf/1703.02105v5.pdf>.] By contrast, with rational agents, Theorem 2 of <cit.> implies the groups almost surely agree asymptotically on this random network.

This result adds a new mechanism to the literature on disagreement in connected societies <cit.>. <cit.> also study disagreement in a binary sequential-learning setting with behavioral agents, but their results concern disagreement on a complete network among agents with different types of behavioral biases. By contrast, our Proposition <ref> says that when all agents use the same naive heuristic, they can still disagree by virtue of belonging to two different homophilic social groups, even when there are many connections between those groups.

§ CONCLUSION

In this paper, we have explored the influence of network structures on learning outcomes when agents move sequentially and use a log-linear learning rule due to inferential naiveté. We have compared long-run welfare across networks by deriving the exact probabilities of mislearning on arbitrary networks.

We have studied the simplest possible social-learning environment to focus on the effect of network structure, but several extensions are straightforward. Analogs of our general results hold for finite state spaces with more than two elements, where we can define a log-likelihood ratio for each pair of states.
We can also make the order of moves random and unknown, in which case naive behavior conditional on a given turn order is the same as when that order is certain.

We prove our comparative statics results for weighted networks as they are analytically more tractable than random graphs. For each weighted network with weights in [0,1], there corresponds an analogous (unweighted) random-graph model where the i→j link exists with probability M_i,j. In numerical simulations, all comparative statics results proved for weighted networks in Section <ref> continue to hold in the analogous random-network models. The major obstacle to extending our proofs is that because our networks are directed and acyclic, the relevant adjacency matrices have no nonzero eigenvalues. As a consequence, most techniques from spectral random graph theory do not apply (but perhaps other methods would).

Appendix

§ PROOFS

§.§ Proof of Lemma <ref>

The log-likelihood ratio of state ω=1 and state ω=0 conditional on the signal realizations of i's linked predecessors is

ln( ℙ[ω=1|s_i,(s_j)_j:M_i,j=1] / ℙ[ω=0|s_i,(s_j)_j:M_i,j=1] )
= ln( ℙ[s_i,(s_j)_j:M_i,j=1|ω=1] / ℙ[s_i,(s_j)_j:M_i,j=1|ω=0] )   (two states equally likely)
= ln( (ℙ[s_i|ω=1]/ℙ[s_i|ω=0])·∏_j:M_i,j=1 ℙ[s_j|ω=1]/ℙ[s_j|ω=0] )   (by independence)
= ln( ℙ[ω=1|s_i]/ℙ[ω=0|s_i] ) + ∑_j:M_i,j=1 ln( ℙ[ω=1|s_j]/ℙ[ω=0|s_j] )
= s̃_i + ∑_j:M_i,j=1 s̃_j
= s̃_i + ∑_j<i M_i,j s̃_j.

Due to Assumption <ref>, i thinks each predecessor j must have received a signal s_j such that s̃_j = ã_j. When i observes only the play of linked predecessors, her log-likelihood ratio of state ω=1 and state ω=0 given her social observations and private signal is therefore s̃_i + ∑_j<i M_i,j ã_j. She maximizes her expected payoff by choosing the action a_i corresponding to her belief in state ω=1, which implies that ã_i is equal to this log-likelihood ratio.

§.§ Proof of Lemma <ref>

We first establish an auxiliary lemma: s̃_i = 2s_i/σ^2. Indeed, the log-likelihood ratio is

ln( ℙ[ω=1|s_i]/ℙ[ω=0|s_i] ) = ln( ℙ[s_i|ω=1]/ℙ[s_i|ω=0] ) = ln( exp(-(s_i-1)^2/(2σ^2)) / exp(-(s_i+1)^2/(2σ^2)) ) = [-(s_i^2-2s_i+1)+(s_i^2+2s_i+1)]/(2σ^2) = 2s_i/σ^2.

We now turn to the proof of Lemma <ref>. Due to Assumption <ref>, i thinks that j will choose a_j such that ã_j = 2s_j/σ^2 by the result just established, since j thinks the conditional variance of her signal is σ^2. But since i believes j's signal has conditional variance σ^2/M_i,j by Assumption <ref>, in i's view

ln( ℙ[ω=1|s_j]/ℙ[ω=0|s_j] ) = 2s_j/(σ^2/M_i,j) = M_i,j·(2s_j/σ^2) = M_i,j ã_j,

again applying the auxiliary lemma. Omitting algebraic arguments analogous to those in the proof of Lemma <ref>,

ln( ℙ[ω=1|s_i,(s_j)_j<i] / ℙ[ω=0|s_i,(s_j)_j<i] ) = ln( ℙ[ω=1|s_i]/ℙ[ω=0|s_i] ) + ∑_j<i ln( ℙ[ω=1|s_j]/ℙ[ω=0|s_j] ) = s̃_i + ∑_j<i M_i,j ã_j.

So s̃_i + ∑_j<i M_i,j ã_j is i's log-likelihood ratio of state ω=1 and state ω=0 given her social observations and private signal.

§.§ Proof of Proposition <ref>

By weighted log-linear inference, for each i we have ã_i = s̃_i + ∑_j<i M_i,j ã_j. In vector notation, we therefore have

(ã_1, ⋯, ã_n)^⊤ = (s̃_1, ⋯, s̃_n)^⊤ + M[n]·(ã_1, ⋯, ã_n)^⊤.

Algebra then yields the desired expression. Note that (I-M[n]) is invertible because M[n] is lower triangular with all diagonal entries equal to 0. To see the path-counting interpretation, write (I-M[n])^-1 = ∑_k=0^∞ M[n]^k. Here, (M[n]^k)_i,j counts the number of weighted paths of length k from i to j.

§.§ Proof of Proposition <ref>

Without loss of generality, assume ω=1. (The case of ω=0 is exactly analogous and is omitted.)
Note that a_n converges in probability to 1 if and only if ã_n converges in probability to ∞.

First suppose that lim_n→∞ 𝕀(j→n) ≠ 0 for some j. Then there exists ϵ>0 such that 𝕀(j→n)>ϵ for infinitely many n. For each such n, the probability that agent n chooses an action with ã_n<0 is equal to the probability that ∑_i=1^n 𝕀(i→n)s̃_i is negative. Because s is Gaussian, s̃ has finite variance, so we can find positive constants C and δ independent of n such that ∑_i≠j 𝕀(i→n)s̃_i < C with probability at least δ (for example, by applying Markov's inequality to |s̃_i|). Then agent n will be wrong if s̃_j < -C/ϵ, which is a positive-probability event since s̃ is unbounded. So the probability that an agent n with 𝕀(j→n)>ϵ chooses ã_n<0 is bounded from below by a positive constant.

For the converse, suppose that lim_n→∞ 𝕀(i→n)=0 for all i. By the independence of the log signals s̃_i, the log action ã_n = ∑_i=1^n b_n,i s̃_i is Gaussian with mean (2/σ^2)‖b⃗_n‖_1 and standard deviation (2/σ)‖b⃗_n‖_2 when ω=1. We now use the connectedness condition to show that ‖b⃗_n‖_1/‖b⃗_n‖_2 → ∞.

Find N and C≤1 as in the connectedness condition. For each ϵ>0, we can choose M_ϵ such that 𝕀(i→n)<ϵ whenever i<N and n>M_ϵ, by the hypothesis lim_n→∞ 𝕀(i→n)=0 applied to the finitely many agents i<N. Now for any j≥N and any n>max(j,M_ϵ), concatenating a path from n to j with a path from j to i gives a path from n to i whose weight is the product of the weights of the two subpaths. This shows b_n,i ≥ b_j,i·b_n,j, which implies 𝕀(i→n) ≥ 𝕀(j→n)·b_j,i. We thus have 𝕀(j→n) ≤ min_i<N 𝕀(i→n)/b_j,i, where b_j,i ≥ C for at least one i<N by the connectedness condition. This shows that for any j∈ℕ and n>M_ϵ, we get 𝕀(j→n) ≤ ϵ/C.

We then have, for all n>M_ϵ,

‖b⃗_n‖_2/‖b⃗_n‖_1 ≤ √(max_j<n b_n,j·‖b⃗_n‖_1)/‖b⃗_n‖_1 = max_j<n √(𝕀(j→n)) < √(ϵ/C).

Because ϵ>0 is arbitrary, ‖b⃗_n‖_1/‖b⃗_n‖_2 converges to infinity.

Let some K>0 be given. We now show that ℙ[ã_n<K|ω=1]→0, hence proving that ã_n converges to ∞ in probability. We compute

z_n := (𝔼[ã_n|ω=1]-K)/Std[ã_n|ω=1] = (‖b⃗_n‖_1·(2/σ^2))/(‖b⃗_n‖_2·(2/σ)) - K/(‖b⃗_n‖_2·(2/σ)).

Since ‖b⃗_n‖_1/‖b⃗_n‖_2 → ∞, the first term converges to infinity. By the connectedness condition, ‖b⃗_n‖_2 ≥ C for all large enough n, so the second term is bounded. This implies z_n→∞. By Chebyshev's inequality, ℙ[ã_n<K|ω=1] ≤ z_n^-2. This shows ℙ[ã_n<K|ω=1]→0.

We note that this argument shows convergence in probability but does not characterize the joint distribution of actions, so these methods do not guarantee almost sure convergence (without further structure on the networks, as in Section <ref>).

§.§ Proof of Proposition <ref>

By the hypothesis of the proposition, there exist ϵ>0 and N∈ℕ so that for all i>N, ∑_j<i M_i,j ≥ 1+ϵ. Modify the network to set all links originating from any of the first N agents to weight 0, that is, M_i,j=0 for all j<i≤N.

We prove by induction that ∑_j≤N b_i,j ≥ 1+ϵ for all i≥N+1 on the modified network. Consider agent N+1. Since ∑_j<N+1 M_N+1,j ≥ 1+ϵ and all of (N+1)'s out-degree comes from links to agents in position N or earlier, ∑_j≤N b_N+1,j ≥ 1+ϵ. By induction, suppose ∑_j≤N b_N+k,j ≥ 1+ϵ holds for all 1≤k≤K. A lower bound on ∑_j≤N b_N+K+1,j is

∑_j≤N M_N+K+1,j + ∑_N+1≤i≤N+K M_N+K+1,i·(∑_j'≤N b_i,j') ≥ ∑_j≤N M_N+K+1,j + ∑_N+1≤i≤N+K M_N+K+1,i·(1+ϵ) ≥ ∑_j≤N+K M_N+K+1,j ≥ 1+ϵ,

where in the first inequality we used the inductive hypothesis and in the last inequality we used the fact that N+K+1 has an out-degree of at least 1+ϵ.
This establishes ∑_j≤N b_N+K+1,j ≥ 1+ϵ, and so by induction ∑_j≤N b_N+k,j ≥ 1+ϵ for all k≥1.

This result holds a fortiori on the original network with higher link weights. By the pigeonhole principle, the original network satisfies the connectedness condition with C=(1+ϵ)/N.

Now return to the modified network (so M refers to the possibly modified network weights). We develop some notation for the rest of the proof and establish an intermediary lemma. For j<i, let Ψ[i→j] be the set of all paths from i to j. Let Ψ̂[i→[N]] be the set of paths from i to some agent k≤N such that the path contains no links between two different agents among the first N. Let Ψ̂[i→[N]|j] be the subset of such paths that pass through j. For a path ψ passing through agents i_1,i_2,...,i_L, let W(ψ) := ∏_ℓ=1^L-1 M_i_ℓ,i_ℓ+1 denote its weight and let D(ψ) := ∏_ℓ=1^L-1 (∑_j<i_ℓ M_i_ℓ,j) denote the product of the out-degrees of all agents on the path except the last one.

For n>N, ∑_ψ∈Ψ̂[n→[N]] W(ψ)/D(ψ) = 1.

We prove this by induction on n. For n=N+1, the set Ψ̂[n→[N]] is the set of N paths, each consisting of a single link from N+1 to some agent j≤N. Each ψ∈Ψ̂[n→[N]] therefore has D(ψ)=∑_j<N+1 M_N+1,j, and the path terminating at j has W(ψ)=M_N+1,j. So the claim holds for n=N+1. By induction, suppose it holds for all n≤N+K for some K≥1. For n=N+K+1, partition Ψ̂[n→[N]] into K+1 groups. For 1≤k≤K, each path ψ∈Ψ(k) in the kth group consists of the link n→(N+k) concatenated in front of a path ψ'∈Ψ̂[(N+k)→[N]], so ψ=((n,N+k),ψ'). The final, (K+1)th group consists of paths where n links directly to an agent among the first N. We have

∑_ψ∈Ψ̂[n→[N]] W(ψ)/D(ψ)
= ∑_k=1^K ( ∑_ψ'∈Ψ̂[(N+k)→[N]] W((n,N+k),ψ')/D((n,N+k),ψ') ) + ∑_j=1^N M_n,j/∑_h<n M_n,h
= ∑_k=1^K ( ∑_ψ'∈Ψ̂[(N+k)→[N]] (M_n,N+k·W(ψ')) / ((∑_h<n M_n,h)·D(ψ')) ) + ∑_j=1^N M_n,j/∑_h<n M_n,h
= ∑_k=1^K ( M_n,N+k/∑_h<n M_n,h · 1 ) + ∑_j=1^N M_n,j/∑_h<n M_n,h   (by the inductive hypothesis)
= (∑_h<n M_n,h)/(∑_h<n M_n,h) = 1.

So by induction, this claim holds for all n>N.

We now return to the proof of Proposition <ref>. For N<i<n,

b_n,i = ∑_ψ∈Ψ[n→i] W(ψ) = ∑_ψ∈Ψ[n→i] [ W(ψ)·( ∑_ψ̂∈Ψ̂[i→[N]] W(ψ̂)/D(ψ̂) ) ],

where the second equality follows because Lemma <ref> implies the term in the inner parentheses is 1. For a path ψ passing through i, let ψ[i] denote the subpath starting with i. So the above says

b_n,i = ∑_ψ∈Ψ̂[n→[N]|i] W(ψ)/D(ψ[i]).

Summing across i, we may re-index the sum by paths in Ψ̂[n→[N]]. To be more precise, for ψ∈Ψ̂[n→[N]], write A(ψ)⊆{N+1,...,n-1} for the set of agents that ψ passes through. For each j∈A(ψ), we have ψ∈Ψ̂[n→[N]|j], so it contributes W(ψ)/D(ψ[j]) to the overall sum:

∑_i=N+1^n-1 b_n,i = ∑_i=N+1^n-1 ∑_ψ∈Ψ̂[n→[N]|i] W(ψ)/D(ψ[i]) = ∑_ψ∈Ψ̂[n→[N]] ∑_j∈A(ψ) W(ψ)/D(ψ[j]) ≤ ∑_ψ∈Ψ̂[n→[N]] W(ψ)·∑_j∈A(ψ) 1/(1+ϵ)^|ψ[j]|-1 ≤ ∑_ψ∈Ψ̂[n→[N]] W(ψ)·(1/ϵ),

where |ψ[j]| denotes the number of agents in the subpath ψ[j]. In the third step we used the fact that all agents on ψ except the last one must have out-degree at least 1+ϵ, so D(ψ[j]) ≥ (1+ϵ)^|ψ[j]|-1. The bound ∑_i=N+1^n-1 b_n,i ≤ ∑_ψ∈Ψ̂[n→[N]] W(ψ)·(1/ϵ) also holds for the original network, since we have not modified the subnetwork among agents N+1,...,n. We also have ∑_i=1^N b_n,i = ∑_ψ∈Ψ̂[n→[N]] W(ψ).
On the original network, we have higher link weights among the first N agents, so in fact ∑_i=1^N b_n,i ≥ ∑_ψ∈Ψ̂[n→[N]] W(ψ). So, on the original network,

∑_i=1^N 𝕀(i→n) ≥ 1/(1+1/ϵ).

This inequality holds for every n, so it cannot be the case that lim_n→∞ 𝕀(i→n)=0 for all 1≤i≤N.

§.§ Proof of Example <ref>

The coefficients b_n,i satisfy the recurrence relation b_n,i = 2δ b_n-1,i whenever n-i>1. When δ=1/2, the recurrence implies that all predecessors' signals are given equal weight, so by the law of large numbers, actions converge to ω almost surely. When δ>1/2, ∑_k=1^∞ δ^k > 1, so the out-degree ∑_j<i M_i,j exceeds 1+ϵ for some ϵ>0 once i is large enough. So by Proposition <ref>, society does not learn correctly.

The final case is δ<1/2. We show that ℙ[a_n≤1/2|ω=1] is bounded away from 0 for all n≥1, so (a_n) cannot converge in probability to ω. Without loss of generality, normalize σ=1, and rescale all log-likelihood ratios by the factor 1/2 (a positive scaling that preserves all signs), so that s̃_i|(ω=1)∼𝒩(1,1). From the recurrence relation for the coefficients b_n,i, it is easy to check that ã_n = 2δã_n-1 + s̃_n - δs̃_n-1 for each n. Evidently b_i+1,i=δ, so by recursion b_n,i = (2δ)^n-i-1·δ for i≤n-1, and b_n,n=1. So ã_n = s̃_n + δ∑_j=0^n-2 (2δ)^j·s̃_n-1-j, meaning

ã_n|(ω=1) ∼ 𝒩( 1+δ·(1-(2δ)^n-1)/(1-2δ),  1+δ^2·(1-(4δ^2)^n-1)/(1-4δ^2) ).

Since the conditional standard deviation is at least 1 while the conditional mean is at most (1-δ)/(1-2δ), we get ℙ[ã_n≤0|ω=1] ≥ Φ( -(1-δ)/(1-2δ) ) for all n, which implies ℙ[a_n≤1/2|ω=1] ≥ Φ( -(1-δ)/(1-2δ) ) for all n.

§.§ Proof of Proposition <ref>

We first state and prove a lemma that gives the ex ante distribution of agent n's log action: when ω=1, the log action of agent n on any weighted network is distributed as ã_n ∼ 𝒩( (2/σ^2)‖b⃗_n‖_1, (4/σ^2)‖b⃗_n‖_2^2 ).

By Proposition <ref>, ã_n = ∑_i=1^n b_n,i s̃_i. This is equal to ∑_i=1^n 2b_n,i s_i/σ^2 according to Lemma <ref>. Conditional on ω=1, (s_i) are i.i.d. 𝒩(1,σ^2) random variables, so

∑_i=1^n (2b_n,i/σ^2)·s_i ∼ 𝒩( (2/σ^2)∑_i=1^n b_n,i, (4/σ^2)∑_i=1^n b_n,i^2 ) = 𝒩( (2/σ^2)‖b⃗_n‖_1, (4/σ^2)‖b⃗_n‖_2^2 ).

Now we give the proof of Proposition <ref>. By the lemma, ã_n|(ω=1) ∼ 𝒩( (2/σ^2)‖b⃗_n‖_1, (4/σ^2)‖b⃗_n‖_2^2 ). So using properties of the Gaussian distribution,

ℙ[ã_n>0|ω=1] = Φ( ((2/σ^2)‖b⃗_n‖_1) / ((2/σ)‖b⃗_n‖_2) ) = Φ( (1/σ)·‖b⃗_n‖_1/‖b⃗_n‖_2 ).

§.§ Proof of Proposition <ref>

The numbers of paths from various agents to agent i satisfy the recurrence relation b_n,i = (1+q)b_n-1,i when n-i>1. By a simple computation, we find that ã_n = ∑_i=1^n-1 q(1+q)^n-i-1 s̃_i + s̃_n. Since the s̃_i are independent Gaussian random variables, our argument uses the fact that for n large, ã_n has the same sign as another Gaussian random variable, whose mean and variance we can compute.

We first show that ã_n converges to -∞ or ∞ almost surely. Consider the random variable

X_n(s⃗) := (1/2)∑_i=1^n-1 (1+q)^-i s̃_i,

where s⃗ := (s_i)_i=1^∞ is the profile of private signal realizations. By a standard result, X_n(s⃗) converges almost surely to a random variable Y(s⃗) such that the conditional distribution of Y in each state of the world is Gaussian. For each n, ã_n(s⃗) = 2q(1+q)^n-1·X_n(s⃗) + s̃_n. Since ∑_n=1^∞ ℙ[s̃_n>n] < ∞, by the Borel–Cantelli lemma, ℙ[s̃_n>n infinitely often] = 0. So almost surely,

lim_n→∞ ã_n(s⃗) = lim_n→∞ 2q(1+q)^n-1·Y(s⃗) + s̃_n ∈ {-∞,∞}.

This in turn shows that a_n converges to 0 or 1 almost surely.

Now we show ℙ[a_n→0|ω=1] = Φ( -σ^-1√((q+2)/q) ), which is the same as the probability ℙ[ã_n→-∞|ω=1]. The random variable Y(s⃗) to which X_n(s⃗) converges a.s. has the distribution 𝒩( 1/(σ^2 q), 1/(σ^2 q(q+2)) ) when ω=1, and ã_n has the same sign as X_n(s⃗) with probability converging to 1 for n large. The distribution 𝒩( 1/(σ^2 q), 1/(σ^2 q(q+2)) ) assigns probability Φ( -σ^-1√((q+2)/q) ) to the negative region. The symmetric argument holds for ω=0.
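The closed form just derived can be checked by Monte Carlo. The sketch below is our own (it assumes scipy; the parameters are arbitrary) and uses the coefficients q(1+q)^n-i-1 of the predecessors' log signals from the proof above.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
q, sigma, n, trials = 0.5, 2.0, 400, 10000

# Coefficients of s_tilde_1, ..., s_tilde_n in a_tilde_n:
# q(1+q)^{n-i-1} for each predecessor i < n, and 1 for n's own signal.
i = np.arange(1, n)
w = np.append(q * (1.0 + q) ** (n - i - 1.0), 1.0)

s_tilde = 2.0 * rng.normal(1.0, sigma, size=(trials, n)) / sigma**2  # omega = 1
mislearn_mc = np.mean(s_tilde @ w < 0)     # late agent convinced of the wrong state
mislearn_theory = norm.cdf(-np.sqrt((q + 2.0) / q) / sigma)
print(mislearn_mc, mislearn_theory)        # both approximately 0.13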
§.§ Proof of Proposition <ref>

First, we derive a closed-form expression for the probability that the n-th agent thinks the correct state is more likely in the uniform weights network, conditional on ω=1. In the q-uniform weights network, ℙ[ã_n>0|ω=1]=Φ(1/σ·(1+q)^n-1√(2+q)/√(2+q(1+q)^2n-2)). This probability is strictly increasing in n when 0<q≤1. From the proof of Proposition <ref>, we have ã_n=∑_i=1^n-1 q(1+q)^n-i-1s̃_i+s̃_n, where the different s̃_i's are conditionally independent given ω=1, with s̃_i|(ω=1)∼𝒩(2/σ^2, 4/σ^2) from Lemma <ref>. Thus, the sum ã_n is conditionally Gaussian with a mean of 2/σ^2·[1+∑_i=1^n-1 q(1+q)^n-i-1] =2/σ^2·[1+q·((1+q)^n-1-1)/((1+q)-1)] =2/σ^2·(1+q)^n-1 and a variance of 4/σ^2·[1+∑_i=1^n-1 q^2(1+q)^2n-2i-2] =4/σ^2·[1+q^2·((1+q)^2n-2-1)/((1+q)^2-1)] =4/σ^2·(2+q(1+q)^2n-2)/(2+q). Thus, 0 is (2/σ^2·(1+q)^n-1)/√(4/σ^2·(2+q(1+q)^2n-2)/(2+q))=1/σ·(1+q)^n-1√(2+q)/√(2+q(1+q)^2n-2) standard deviations below the mean in the distribution of ã_n|(ω=1), so ℙ[ã_n>0|ω=1]=Φ(1/σ·(1+q)^n-1√(2+q)/√(2+q(1+q)^2n-2)). To see that this expression is strictly increasing, let n≥1. Then, ℙ[ã_n+1>0|ω=1] =Φ(1/σ·(1+q)·(1+q)^n-1√(2+q)/√(2+(1+q)^2· q(1+q)^2n-2)) >Φ(1/σ·(1+q)·(1+q)^n-1√(2+q)/√((1+q)^2·2+(1+q)^2· q(1+q)^2n-2)) =Φ(1/σ·(1+q)^n-1√(2+q)/√(2+q(1+q)^2n-2)) =ℙ[ã_n>0|ω=1], as desired. Now we give the proof of Proposition <ref>. Let ℙ[ã_2>0|ω=1] on the q^*-uniform weights network be denoted by p. Lemma <ref> implies ℙ[ã_n>0|ω=1] is strictly increasing in n on the q^*-uniform weights network, so p>Φ(1/σ) and, furthermore, ℙ[ã_n>0|ω=1]≥ p for all n≥2 on the same network. The function q↦Φ(1/σ·(1+q)^N-1√(2+q)/√(2+q(1+q)^2N-2)) is continuous and equals Φ(1/σ) when q=0. So we may find a small enough q̅∈(0,q^*) so that whenever 0<q<q̅, Φ(1/σ·(1+q)^N-1√(2+q)/√(2+q(1+q)^2N-2))<p. From the monotonicity result of Lemma <ref>, this also implies Φ(1/σ·(1+q)^n-1√(2+q)/√(2+q(1+q)^2n-2))<p for all 2≤ n≤ N.

§.§ Proof of Proposition <ref>

Suppose we have two groups, and agents observe predecessors in the same group with weight q_s and predecessors in the other group with weight q_d. Then the coefficients b_n,i satisfy the recurrence relation b_n,i=q_d b_n-1,i+(1+q_s)b_n-2,i when n-i>2. Since the network is translation invariant, b_n,i only depends on n-i. By a standard algebraic fact, there exist constants c_+,c_-,ζ_+,ζ_- (independent of n and i) so that b_n,i=c_+ζ_+^n-i+c_-ζ_-^n-i, where ζ_± are the solutions to the polynomial x^2-q_dx-(1+q_s)=0 and c_+,c_- are constants that we can determine from b_2,1 and b_3,1. We compute ζ_±=(q_d±√(q_d^2+4q_s+4))/2, where ζ_+>1 and ζ_-<0. By arguments analogous to those in the proof of Proposition <ref>, we may again establish that a_n converges to 0 or 1 almost surely. We now analyze the probability of mislearning. Since ζ_+>|ζ_-|, the exponential term with base ζ_+ dominates as n grows large. This shows c_+>0, since b_n,i counts the number of weighted paths in a network, so it must be a positive number. This also shows that ℙ[ã_n<0|ω=1]→ℙ[∑_i=1^∞(ζ_+)^-is̃_i<0|ω=1] as n→∞. Conditional on ω=1, the sum ∑_i=1^∞(ζ_+)^-is̃_i has the distribution 𝒩(2/(σ^2(ζ_+-1)), 4/(σ^2(ζ_+-1)(ζ_++1))), so it is easy to show that the probability assigned to the negative region is increasing in ζ_+. Having shown that the probability of mislearning is monotonically increasing in ζ_+, we can take comparative statics: ∂ζ_+/∂ q_d=q_d/(2√(q_d^2+4q_s+4))+1/2 and ∂ζ_+/∂ q_s=1/√(q_d^2+4q_s+4). It is easy to see that ∂ζ_+/∂ q_d>∂ζ_+/∂ q_s>0 for all q_s≥0 and q_d>0.
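The closed form in the lemma above is easy to probe numerically. Below is a minimal sketch (Python; the values of σ and q are illustrative) that evaluates ℙ[ã_n>0|ω=1] on the q-uniform weights network and checks both the monotonicity-in-n claim and the q→0 limit Φ(1/σ):

```python
import numpy as np
from scipy.stats import norm

def p_correct(n, q, sigma=1.0):
    # Lemma: P[a~_n > 0 | w = 1] on the q-uniform weights network.
    num = (1 + q) ** (n - 1) * np.sqrt(2 + q)
    den = np.sqrt(2 + q * (1 + q) ** (2 * n - 2))
    return norm.cdf(num / (sigma * den))

probs = [p_correct(n, q=0.5) for n in range(1, 15)]
assert all(a < b for a, b in zip(probs, probs[1:]))      # strictly increasing in n
assert np.isclose(p_correct(10, q=1e-9), norm.cdf(1.0))  # tends to Phi(1/sigma) as q -> 0
```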
§.§ Proof of Proposition <ref>

Define κ_q to be a naive agent i's log-likelihood ratio of state ω=1 versus state ω=0 upon observing one neighbor j who picks action 1 with weight q. Then we have: κ_q:=ln(ℙ[ω=1|s_j≥0]/ℙ[ω=0|s_j≥0])>0, where ℙ is taken under i's beliefs about the conditional distributions of s_j under Assumption <ref>, that is, s_j|(ω=1)∼𝒩(1,σ^2/q) and s_j|(ω=0)∼𝒩(-1,σ^2/q). In particular, this log-likelihood κ_q is increasing in q, and so κ_q_s-κ_q_d>0 for q_s>q_d. By symmetry of the Gaussian distribution, the log-likelihood ratio after observing one neighbor who chooses action 0 with weight q is -κ_q. Suppose after 2n agents have moved, the actions taken so far involve every odd-numbered agent playing 1 and every even-numbered agent playing 0. Then agent 2n+1 has a log-likelihood ratio of n(κ_q_s-κ_q_d) from her social observations. The probability that private signal s_2n+1 is so strongly in favor of ω=0 as to make 2n+1 play 0 is ϵ_n:=ℙ[s_i∈ℝ:ln(ℙ[ω=1|s_i]/ℙ[ω=0|s_i])<-n(κ_q_s-κ_q_d)|ω=1]. For the Gaussian distribution, ln(ℙ[ω=1|s_i]/ℙ[ω=0|s_i])=2s_i/σ^2, so ∑_n=1^∞ϵ_n=∑_n=1^∞Φ(-(σ^2/2)n(κ_q_s-κ_q_d);1,σ^2)<∞, because the Gaussian distribution function tends to 0 faster than geometrically. This shows that there is a positive probability that every odd-numbered agent plays 1. By an analogous argument, there is also a positive probability that every even-numbered agent plays 0. In that argument, we would use the fact that ∑_n=1^∞[1-Φ((σ^2/2)n(κ_q_s-κ_q_d);1,σ^2)]<∞.
http://arxiv.org/abs/1703.02105v7
{ "authors": [ "Krishna Dasaratha", "Kevin He" ], "categories": [ "q-fin.EC", "cs.SI", "econ.TH" ], "primary_category": "q-fin.EC", "published": "20170225064904", "title": "Network Structure and Naive Sequential Learning" }
MmWave Vehicle-to-Infrastructure Communication: Analysis of Urban Microcellular Networks
December 30, 2023
===================================================================================

Vehicle-to-infrastructure (V2I) communication may provide high data rates to vehicles via millimeter-wave (mmWave) microcellular networks. This paper uses stochastic geometry to analyze the coverage of urban mmWave microcellular networks. Prior work used a pathloss model with a line-of-sight probability function based on randomly oriented buildings to determine whether a link was line-of-sight or non-line-of-sight. In this paper, we use a pathloss model inspired by measurements, which uses a Manhattan distance pathloss model and accounts for differences in pathloss exponents and losses when turning corners. In our model, streets are randomly located as a Manhattan Poisson line process (MPLP) and the base stations (BSs) are distributed according to a Poisson point process. Our model is well suited for urban microcellular networks where the BSs are deployed at street level. Based on this new approach, we derive the coverage probability under certain BS association rules to obtain closed-form solutions without much complexity. In addition, we draw two main conclusions from our work. First, non-line-of-sight BSs are neither a major benefit for association nor a major source of interference most of the time. Second, there is an ultra-dense regime where deploying additional active BSs does not enhance coverage.

§ INTRODUCTION

Vehicle-to-infrastructure (V2I) communication offers the potential to enhance safety and efficiency in urban vehicular networks <cit.>. Combined with millimeter wave (mmWave) <cit.>, V2I has the potential to offer high data rates and low latency <cit.>, enabling massive data sharing among a great number and diversity of mobile devices in vehicular networks <cit.>. MmWave communication not only has access to larger bandwidths, it also allows large yet very compact antenna arrays to be deployed at both the transmitter and receiver, providing high directional beamforming gains and low interference. Compared to channels at microwave frequencies (<6 GHz), however, mmWave channels are more sensitive to blockage losses, especially in urban streets where signals are blocked by high buildings, vehicles or pedestrians <cit.>, <cit.>, and sharp transitions from line-of-sight (LOS) to non-line-of-sight (NLOS) links are more common. [In the current paper, LOS is defined as "optical" line of sight between the transmitter and receiver locations, as in the papers cited here. NLOS occurs when the link between the transmitter and the receiver is blocked by obstructions, specifically, in our paper, by urban buildings.] This motivates the study of mmWave microcellular network performance in the context of vehicular urban areas.

§.§ Related Work

Urban street model: Stochastic geometry has been used extensively to analyze performance in mmWave cellular networks <cit.>. BS and cellular user locations are modeled as Poisson point processes on a two-dimensional plane, based on which the coverage probability of a typical cellular user is derived. Building blockages are considered as the main source differentiating LOS and NLOS links, and a few papers have analyzed different building blockage models.
Unfortunately, prior work analyzing mmWave cellular networks in <cit.> employed a pathloss model with a LOS probability function based on Euclidean distance <cit.> to determine whether a link was LOS or NLOS. This works well for randomly oriented buildings <cit.>, but does not properly model V2I networks where strong LOS interference may result from infrastructure co-located on the same street. Recent work has considered alternative topologies that may better model urban areas. In <cit.>, an approach to determine LOS and NLOS BSs by approximating a LOS ball was proposed. The model was shown to be able to better approximate the LOS area than <cit.>. In <cit.>, three-dimensional Poisson buildings were modeled using Poisson processes to characterize the correlated shadowing effects in urban buildings. The idea was to add one more dimension to the Manhattan Poisson line process (MPLP), by modeling the floor locations as a Poisson process. This allowed an exact characterization of the coverage of indoor urban cellular networks. In <cit.>, a stochastic geometry model in a Manhattan type network was analyzed, since it is a tractable yet realistic model for Manhattan type urban streets. The urban streets were modeled as a one-dimensional MPLP and the coverage probability was derived considering the penetration of the signal through buildings. Unfortunately, the results in <cit.> used a pathloss model mainly considering the penetration effects of signals through urban buildings, with a fixed loss for each penetration. This is not applicable for mmWave systems where the penetration loss is high. In this paper, we also use the MPLP for modeling the urban street distribution, but combined with a mmWave-specific channel model.

Urban mmWave channel model: There is a vast body of literature concerning mmWave channel modeling in urban areas, see, e.g., <cit.> and references therein. One of the key characteristics of the urban environment is the high density of streets and high-rise buildings. Since mmWave signals are sensitive to blockage, which induces significant signal attenuation, LOS and NLOS links can have sharply different pathloss exponents, as was also shown in numerous measurements <cit.>, <cit.>, <cit.>, and is reflected in the standardized channel models <cit.>. Investigations in a variety of environments showed that, in general, penetration loss increases with carrier frequency. For modern buildings with steel concrete and energy saving windows, in particular, penetration through just one wall can incur losses in the order of 30 dB; therefore, propagation through buildings is not a relevant effect in mmWave Manhattan type urban environments <cit.>. In <cit.>, a spatially consistent pathloss model was proposed for urban mmWave channels in microcells. Based on ray tracing, it was shown that the pathloss exponents differ from street to street and should be modeled as a function of both the street orientation and the absolute locations of the BS and user equipment (UE)[Henceforth we assume a downlink, so that receiver and UE can be used interchangeably.]. Hence, the signal is seen as propagating along different streets, with diffraction effects happening at the corners, instead of penetrating through the urban buildings. The net pathloss is obtained by summing the individual pathloss over the different segments of the propagation path, incorporating an additional loss at each corner.
This shows that the Euclidean distance might not be a good measure to characterize the pathloss effects in urban microcell networks at mmWave. In this paper, we adopt a modified pathloss model similar to <cit.> based on the Manhattan distance, which enables tractable analysis while still retaining the key features of the mmWave microcellular channel.

§.§ Contributions

In this paper, we develop a tractable framework to characterize the downlink coverage performance of urban mmWave vehicular networks. Specifically, we consider snapshots of the urban microcellular network, without modeling vehicle mobility. This reduces the network to an urban mmWave microcellular network. We model the locations of urban streets by an MPLP. The width of the street is neglected, and the blockage effects of vehicles are not considered in the analysis. We extend our previous paper <cit.> to account for large antenna arrays and directional beamforming at mmWave. We use a modification of the sectorized antenna model for tractable analysis <cit.>, <cit.> and apply the new pathloss model from <cit.>. The pathloss model is characterized by the Manhattan distance of the propagation link, which, with MPLP street modeling, yields tractable results for coverage analysis. Based on our model, we analyze the coverage of randomly located UEs on the roads formed by the lines, which is different from the conventional approach where coverage is analyzed conditioned on the links being outdoors <cit.>. We adopt a new procedure in the calculation of the coverage probability, compared to the previous work <cit.>. We analyze the coverage probability by first computing the cumulative distribution function (CDF) of the associated BS link gain and then the coverage probability conditioned on the associated link gain. By averaging over the conditioned received signal power, we obtain a simple but accurate expression for the coverage probability. Compared to <cit.>, this paper also includes the following contributions. Based on the coverage probability, we obtain insights concerning the scaling laws of the coverage probability with the street and BS intensities, the sensitivity of coverage to the channel conditions, and the effects of LOS/NLOS interference. Also, we derive closed-form expressions for the LOS BS association probability under different channel conditions. We then use the map data of the streets in Chicago from OpenStreetMap <cit.> and extract it using the Geographical Information System (GIS) application QGIS <cit.>. This is used to compare the ergodic achievable rate of realistic streets, the MPLP street model, and fixed grid models. The comparison shows that the MPLP based analysis is valid for outdoor microcell urban networks at mmWave.

§ SYSTEM MODEL

In this section, we explain the key assumptions and models adopted in this paper. First, we explain the street model in urban vehicular networks. Then, we present a tractable form of the pathloss model of mmWave microcells based on the Manhattan distance from <cit.>. We introduce a modified mmWave sectorized antenna pattern that is used for our analysis. Lastly, we formulate the signal-to-interference-plus-noise ratio (SINR) of the receiver and demonstrate the rule of the strongest propagation path.

§.§ Network model-MPLP

We show in Fig. <ref> an illustrative snapshot of the Manhattan network in a Cartesian coordinate system. Without loss of generality, a typical receiver is placed at the origin O, and the streets are assumed to be either perfectly horizontal or vertical in the coordinate system.
We call the street where the receiver is located the typical street, i.e., the x-axis. We refer to the other horizontal and vertical streets as parallel streets and cross streets, respectively. These streets are generated from two independent one-dimensional homogeneous Poisson point processes (PPPs) Ψ_x and Ψ_y, with identical street intensity λ_S. Under the current coordinate system, we define the set of parallel streets as ⋃_y_i∈Ψ_y L_y(y_i), where ⋃ denotes the union of sets, and L_y(y_i) denotes the parallel street with intercept (location) y_i. The set of the cross streets is defined as ⋃_x_j∈Ψ_x L_x(x_j), with L_x(x_j) similarly defined as the cross street having intercept x_j on the x-axis. By Slivnyak's theorem <cit.>, the typical street y=0 is added to the process. BSs are deployed at the street level, and are distributed on each cross, parallel, and typical street as independent one-dimensional homogeneous PPPs. Similar to the naming convention for the streets, we name the BSs on the typical street typical BSs, and those on the cross and parallel streets cross BSs and parallel BSs, respectively.

§.§ Pathloss model

We adopt a pathloss model that is based on the Manhattan distance instead of the Euclidean distance. The model is similar to <cit.>, but uses several modifications to provide tractability. Ray tracing shows that in an urban mmWave microcell, Euclidean distance might not be a dominant parameter in pathloss modeling. Since the penetration through urban building walls is negligible at mmWave, the signal detours along the streets in urban canyons and changes its direction by diffraction on the buildings at intersections. Therefore, instead of the direct Euclidean distance between the BS and the receiver, the street orientation relative to the BS location and the absolute positions of the BS and receiver are the key parameters that determine the pathloss. It is shown by the ray tracing results that a way to model the net pathloss of a propagation link in urban mmWave microcells is to add up the pathloss on the different segments of the propagation path, with an additional loss whenever the waves couple into a new street canyon. The propagation path may be thought of as segmented, with the signal changing directions to find LOS paths, circumventing building blockage. We assume that there are a total of M segments along the propagation path, i.e., M-1 corners where the signal changes direction. Note that the value of M depends on the actual positions of the BS and the receiver. The length of the i-th segment is denoted as d_i, the pathloss exponent on the i-th segment is α_i, and the corner loss at the corner between the i-th and (i+1)-th segments is Δ (in decibel scale), where we assume the corner losses at different corners are identical. We define the LOS segment as the first segment of the propagation path from the BS and the NLOS segments as the remaining segments on the propagation path. It should be noted that the LOS and NLOS labels for the segments only indicate the position of the segments along the propagation path, which is different from the definition of LOS/NLOS paths in traditional representations. We assume that LOS segments on different streets share the same pathloss exponent α_L, while the pathloss exponent for NLOS segments is α_N.
Notice that the equation is not "symmetric", i.e., the street segment that has LOS to the BS has a pathloss coefficient that is different from the one that has LOS to the UE; such a situation might occur due to the different heights of the UE and BS. To clarify, this pathloss model does not hold for vehicle-to-vehicle channel modeling, since the model is asymmetric. To conclude, the pathloss in the decibel scale is defined as follows: PL_dB = 10(α_L log_10 d_1 + α_N∑_i=2^M log_10 d_i) + (M-1)Δ. With this Manhattan distance based pathloss model, we can classify the BSs into three categories, as illustrated in Fig. <ref>: i) BSs on the typical street (typical BSs) that have one direct propagation path to the typical receiver; ii) NLOS BSs on the cross streets (cross BSs) that have a propagation path consisting of a LOS segment (green path d_1) and an NLOS segment (green path d_2) to the typical receiver; and iii) NLOS BSs on the parallel streets (parallel BSs) that have a propagation path consisting of a LOS segment (red path d_1) and two NLOS segments (red paths d_2, d_3). The analysis of the strongest path of the different BSs will be provided in Section <ref>. This pathloss model also bears a strong relationship to <cit.>, which considered the pathloss model in urban microcells where waves are coupled at the street corners with different angles.

§.§ Sectorized antenna model

To leverage array gain, directional beamforming by multiple antennas is performed at the mmWave BSs. For simplicity, we assume the receiver has an omni-directional antenna, and the BSs are equipped with N_t transmit antennas. We adopt a sectorized antenna model for the BS <cit.>, <cit.>, with the main-lobe gain denoted as G and the side-lobe gain as g. The beamwidth of the main lobe is θ, shown as the red fan in Fig. <ref>, and all other directions outside the main lobe are assumed to be in the side lobe (shown as the blue circle). For a uniform planar antenna array, the main-lobe gain can be approximated by G = N_t, which is the maximum power gain that can be supported with an N_t-element antenna array. The side-lobe gain is evaluated by g = (√(N_t) - (√(3)/2π)N_t sin(√(3)/(2√(N_t)))) / (√(N_t) - (√(3)/2π)sin(√(3)/(2√(N_t)))), which is calculated to satisfy the following antenna equation for constant total radiated power <cit.>, <cit.>: ∫_-π^π∫_-π/2^π/2 G(ϕ,ψ)cos(ψ) dψ dϕ = 4π, and the beamwidth is θ = √(3)/√(N_t). The beamforming gain 𝒢 is therefore formulated as 𝒢 = G if the receiver lies inside the main lobe, and 𝒢 = g if the receiver lies inside the side lobe. Since the beamforming direction of the BS is assumed to be uniformly distributed in (0, 2π), the beamforming gain 𝒢 of one typical BS with LOS visibility to the typical receiver is a Bernoulli random variable, so that 𝒢 = AG + (1-A)g, where A = 𝕀(p), p = θ/2π.
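As a quick illustration of these antenna-model equations, the following sketch (Python/NumPy; N_t = 64 is an arbitrary example) computes G, g and θ, and samples the Bernoulli gain 𝒢. The grouping of the side-lobe-gain formula follows the reconstruction given above, so it should be read as an assumption rather than a definitive statement of the model:

```python
import numpy as np

def sectorized_gains(Nt):
    """Main-lobe gain G = Nt, beamwidth theta = sqrt(3)/sqrt(Nt), side-lobe gain g."""
    G = float(Nt)
    theta = np.sqrt(3) / np.sqrt(Nt)
    s = np.sqrt(3) / (2 * np.pi) * np.sin(np.sqrt(3) / (2 * np.sqrt(Nt)))
    g = (np.sqrt(Nt) - s * Nt) / (np.sqrt(Nt) - s)   # reconstructed grouping
    return G, g, theta

G, g, theta = sectorized_gains(64)
p = theta / (2 * np.pi)              # P[receiver falls inside a BS's main lobe]
rng = np.random.default_rng(1)
A = rng.random(5) < p                # Bernoulli indicator A = I(p)
gain_samples = np.where(A, G, g)     # samples of the gain  G_cal = A*G + (1-A)*g
```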
§.§ Signal-to-interference-plus-noise ratio (SINR)

SINR coverage analysis is important to determine outage holes and the ergodic throughput of wireless networks. While these metrics in the context of mmWave-based vehicular networks depend on both mobility and the blockage effects due to the vehicles, in this paper we simply consider snapshots of the urban microcellular network and look at the distribution of the instantaneous SINR. This approach is taken to confirm the analytic tractability of the pathloss model described in Section <ref>, which captures the blockage and shadowing effects due to buildings and accounts for the geometry of streets in an urban environment. In this section, we explain the key assumptions on BS association and the definition of the interference.

§.§.§ BS association

In our model, as mentioned in Section <ref>, we assume the BSs deploy directional beamforming to exploit the antenna gain, while at the receiver side the antenna is omni-directional. During the cell discovery and BS association process, we assume all BSs perform an exhaustive beam search over the entire beam space by beam sweeping, each beam in an individual time slot. Based on the reference signal received power (RSRP) of each beam, the receiver can determine the serving BS and the associated beam by selecting the strongest RSRP. After exhaustive beam sweeping, the receiver is always aligned with the main lobe; therefore, the antenna gain of the associated BS is always G. Consequently, the receiver is simply associated with the BS with the smallest pathloss defined in (<ref>), without including extra antenna gain.

§.§.§ Interference

From the BS association rule, the receiver is associated with the BS with the smallest pathloss, i.e., the largest path gain, which we denote as u. Therefore, interference arises from other BSs whose path gains are smaller than u, with an extra beamforming gain 𝒢 added on. Given the orientation of the main beam (towards the desired user), other BSs could point either the main lobe or the side lobe towards the referenced (typical) receiver, based on the sectorized antenna model. Therefore, the beamforming gain 𝒢 of the interference is random, and is represented as 𝒢 = AG + (1-A)g, where A = 𝕀(p), p is the probability that the interference link from the BS has a beamforming gain of G, and 𝕀(·) is the Bernoulli function.

§.§.§ Formulation of SINR

Based on the pathloss model in Section <ref>, there are three types of BSs to analyze: typical/cross/parallel BSs. To formulate the SINR, we first make the following assumption on the BS association rule. We assume perfect beam sweeping for the BS association in (<ref>). We use Φ_T to denote the set of LOS link distances x_T from the typical BSs to the receiver. The set of lengths of the horizontal and vertical links, x_C (d_1 in green, Fig. 1) and y_C (d_2 in green, Fig. 1), constituting the propagation path from the cross BSs is denoted as Φ_C. Similarly, Φ_P is used to denote the set of distances (x_P, y_P, z_P) (d_3, d_2, d_1 in red) corresponding to the propagation path from the parallel BSs (see Fig. <ref>). To simplify the exposition, we define the path gains of the LOS and NLOS segments respectively as ℓ_L(x) = x^-α_L and ℓ_N(x) = cx^-α_N, where x is the length of the propagation segment. The corner loss term c = 10^-Δ/10 in the total path gain expression is captured along with the propagation loss associated with each NLOS segment in (<ref>), with α_N denoting the NLOS pathloss exponent. We denote h_o as the small-scale fading of the typical receiver o from the associated BS and h_i as the small-scale fading of the i-th BS in the Poisson point processes. 𝒢_i is the beamforming gain associated with each interfering BS, defined in (<ref>), and N_0 is defined as the noise variance. Φ_T', Φ_C' and Φ_P' are the sets of segment lengths of the interfering BSs.
Conditioning on the associated BS link gain u (which includes both the path gain PL in (<ref>) and the antenna beamforming gain, which is always G), the SINR can be formulated as follows, in terms of the interference components, respectively from the typical BSs I_ϕ_T, cross BSs I_ϕ_C and parallel BSs I_ϕ_P: SINR = h_o u/(N_0 + I_ϕ_T(o) + I_ϕ_C(o) + I_ϕ_P(o)), with I_ϕ_T(o) = ∑_x^i_T∈Φ_T'𝒢_i h_i ℓ_L(x^i_T), I_ϕ_C(o) = ∑_(x^i_C,y^i_C)∈Φ_C'𝒢_i h_i ℓ_N(x^i_C)ℓ_L(y^i_C), and I_ϕ_P(o) = ∑_(x^i_P,y^i_P,z^i_P)∈Φ_P'𝒢_i h_i ℓ_N(x^i_P)ℓ_N(y^i_P)ℓ_L(z^i_P). Based on the assumption in Section <ref>, and conditioning on the associated BS path gain u, we have the following constraints for the sets of interfering BSs' segment lengths Φ_T', Φ_C' and Φ_P' in (<ref>) – (<ref>): Φ_T' = {x_T∈Φ_T | ℓ_L(x_T)<u}, Φ_C' = {(x_C,y_C)∈Φ_C | ℓ_N(x_C)ℓ_L(y_C)<u}, and Φ_P' = {(x_P,y_P,z_P)∈Φ_P | ℓ_N(x_P)ℓ_N(y_P)ℓ_L(z_P)<u}. The above constraints are based on the assumption that perfect beam sweeping is done for each surrounding BS in the initial access, which leads to (<ref>) – (<ref>).

§.§ Analysis of strongest path

Given a BS at a fixed location (either a typical, cross or parallel BS), the received power from the BS is still not fully determined, even though we have already defined our pathloss model in (<ref>). This is because, first, the Manhattan pathloss model differs substantially from the Euclidean distance based pathloss model; second, given a BS location, there could be multiple paths for the signal to reach the receiver within the grid-type Manhattan city. Since we assume the antenna pattern at the BS is sectorized, power is radiated in all directions, with different antenna gains. With different paths routed for a signal radiated in all directions, the received power comes from different paths. To make this tractable, we make the following assumption for the analysis herein. For the analysis, we only consider the path from the BS to the typical receiver with the largest received power. To be the strongest path, the path should have i) shorter individual path segment lengths; ii) fewer individual segments, hence fewer corners and smaller corner loss, since the path gain is calculated by multiplying the individual segment path gains and one extra multiplication might reduce the path gain by orders of magnitude; and iii) a larger beamforming gain (this last criterion applies only to the analysis of the interfering links, because in the BS association case the receiver is always associated with the main lobe, with identical beamforming gain G). The strongest path for the BS association is simply the path with the smallest pathloss, since the receiver is always associated with the main lobe of the beam, with an identical beamforming gain G. The strongest path for the interfering link analysis, however, is not necessarily the path with the smallest pathloss, due to the existence of the beamforming gain. Next, we demonstrate some different cases of the relative location of the receiver to the BS, in terms of the strongest receiver path. Fig. <ref> and Fig. <ref> illustrate the potential strongest paths of a typical and a cross BS. In each of the cases, there is one direct path which has fewer corners and one detoured path which detours before reaching the receiver. For typical BSs, the detoured path has four more corners than the direct path; for the cross BSs, there are two more corners. Each corner introduces approximately an extra 20 dB loss, which is much more significant than the effects compensated by the beamforming gain difference.
Therefore, even if the departure direction of the direct path lies inside the side lobe, the strongest path should still be the direct path. For the parallel BSs, both the detoured and direct paths have two corners, which makes it harder to identify the strongest path (see Fig. <ref>). In addition, the parallel BSs fall into two types: a BS can either be in the same block as the receiver (e.g., BS 1) or in a different block (e.g., BS 2), as shown in Fig. <ref>. For the same-block BS, the strongest path could be either the green dashed line or the green solid line. For the different-block BS, however, the strongest path could traverse any of the cross streets and could point either left or right. To make the analysis tractable, we make the following assumption. For the strongest path of the parallel BSs, the signal travels along the LOS segment (the first segment of the path) in the direction towards the receiver, rather than away from it. With this assumption, to find the strongest path for a parallel BS in a different block from the receiver, we provide the following proposition. The strongest propagation path from a parallel BS is via either the cross street Θ_R closest to the receiver or the cross street Θ_B closest to the BS. Conditioning on the location of the parallel BS, the segment y_P and the corner loss 2Δ are the same for all propagation paths; hence, the pathloss on the vertical link and the two corner losses can be taken out while formulating the following optimization problem. For the interfering BS, since 𝒢 is a random variable taking values G or g, we have 𝒢≤ G. Hence, the maximum path gain of the parallel BS G_P can be upper bounded by G_P ≤ G - 2Δ - 10α_N log_10 y_P + 10G_M ≤ G - 2Δ - 10α_N log_10 y_P + 10 max{G_M}, where G_M = -α_N log_10 x_P - α_L log_10 z_P. We then formulate the optimization problem for G_M as: maximize over x_P, z_P∈(0,W) the objective -α_N log_10 x_P - α_L log_10 z_P, subject to x_P + z_P = W. The objective function can be expressed as P(x) = -α_N log x - α_L log(W-x), x∈(0,W), whose second-order derivative is P''(x) = α_N/x^2 + α_L/(W-x)^2. The second-order derivative of P(x) is positive for all α_L, α_N, and W, which means P(x) is convex. Denoting the distance from Θ_R to the receiver as x_1 and the distance from Θ_B to the receiver as x_2, and using the convexity of P(x), we have P(λ x_1 + (1-λ)x_2) < λ P(x_1) + (1-λ)P(x_2) < max{P(x_1), P(x_2)} for all λ∈(0,1) and x_1, x_2∈(0,W). In (<ref>), P(λ x_1 + (1-λ)x_2) parameterizes the path gains of all the propagation paths via any cross street lying between Θ_R and Θ_B, with different values of λ selected. From the second inequality in (<ref>), all these propagation paths have smaller path gain than that going through one of the streets specified in this proposition, which concludes the proof. Since the pathloss exponent of the segment z_P is α_L and that of the segment x_P is α_N, with α_L<α_N, it is intuitive that the strongest path is more likely to be via the street closest to the receiver, i.e., Θ_R.
To conclude the discussion on the uniqueness of the propagation path in the system model considered in this paper, we demonstrated that for both the typical and cross BSs, the propagation path is unique and also easy to identify based on the strongest path analysis above. For a parallel BS, irrespective of whether the BS is located in the same block as the receiver, there are only two potential paths that can be the strongest, and for the analysis, we choose the path which traverses the cross street that is closest to the receiver.

§ COVERAGE ANALYSIS

The coverage probability serves as an important metric in evaluating system performance, since it is closely related to the ergodic rate and throughput outage. In this section, we compute the coverage probability of a typical receiver in the MPLP microcellular network. First, we explain the independent thinning of the BSs considering the sectorized beam pattern of the mmWave BSs. Then, we analyze the CDF of the associated BS link gain based on the assumption that the receiver is associated with the closest BS (with the smallest pathloss). In addition, we derive an accurate and concise expression for the coverage probability. Finally, we examine the effects of the various components that contribute to interference in mmWave microcellular networks.

§.§ Independent thinning of BSs

Based on the sectorized antenna model in Section <ref> and the properties of the PPP, the BSs are independently thinned to generate two independent PPPs of BSs with antenna gains G and g <cit.>. We define p_T as the thinning probability, and λ_B as the density of all active BSs deployed on the road side. After independent thinning, the densities of the thinned BSs with antenna gains G and g are λ_B p_T and λ_B(1-p_T), respectively. For the typical BSs, the thinning probability is p_T, which equals the probability that the receiver lies inside the main lobe, as defined in (<ref>). For the cross BSs and the parallel BSs, we assume that only the BSs pointing towards the corner where the diffraction happens have beamforming gain G. Hence, cross and parallel BSs have the same thinning probability as the typical BSs. For the parallel BSs, even though the uniqueness analysis of the strongest path in Section <ref> is not as straightforward as for the typical and cross street BSs, the thinning probability remains the same: Section <ref> assumes that the strongest path travels the LOS segment towards the receiver, and the thinning rule follows that of the typical and cross BSs, since only when the main beam points towards the corner in the direction of the receiver can the beamforming gain be G. To conclude, the thinning probabilities for the three types of BSs (typical, cross and parallel) are identical, and equal the probability that the interfering BS has a beamforming gain of G, i.e., p_T = p, where p is given in (<ref>). The value of p is hard to evaluate from a physical point of view, because propagation is dominantly down a street canyon, and it depends on the distribution of the interfering BS beam directions and on multiple reflections along the street canyon. Any value of p that occurs in practice can be used in the analysis. To make the exposition more clear, we pick the value p_T = p = θ/2π, under the assumption that the main lobe of an interfering BS is uniformly distributed in the angular domain (0, 2π), where θ is the beamwidth.
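To tie the pieces together, here is a minimal end-to-end simulation sketch of the network model and the thinning step (Python/NumPy). The window half-width, intensities and pathloss parameters are illustrative choices, not values prescribed by the analysis, and the helper names are ours:

```python
import numpy as np

rng = np.random.default_rng(0)

def ppp_1d(lam, half_width):
    """Points of a homogeneous 1-D PPP of intensity lam on [-half_width, half_width]."""
    return rng.uniform(-half_width, half_width, rng.poisson(2 * lam * half_width))

lam_S, lam_B, R = 0.01, 0.01, 2000.0          # street / BS intensities (1/m), window half-width
alpha_L, alpha_N, Delta_dB = 2.5, 7.0, 20.0   # pathloss exponents and corner loss
c = 10 ** (-Delta_dB / 10)                    # corner loss in linear scale

ell_L = lambda x: x ** (-alpha_L)             # LOS-segment path gain
ell_N = lambda x: c * x ** (-alpha_N)         # NLOS-segment path gain (corner loss included)

# Streets: cross-street intercepts on the x-axis and parallel-street intercepts on the
# y-axis; the typical street y = 0 carries the typical BSs (Slivnyak's theorem).
cross_x, parallel_y = ppp_1d(lam_S, R), ppp_1d(lam_S, R)
typical_bs = np.abs(ppp_1d(lam_B, R))         # distances x_T of typical BSs to the origin
cross_bs = [(abs(x), np.abs(ppp_1d(lam_B, R))) for x in cross_x]   # (x_C, y_C values)

# Strongest-path gains seen at the typical receiver at the origin:
u_T = ell_L(typical_bs.min())                                       # best typical BS
u_C = max(ell_N(x) * ell_L(y.min()) for x, y in cross_bs if y.size) # best cross BS

# Independent thinning with p_T = theta/(2*pi); theta = sqrt(3)/sqrt(N_t), N_t = 64 here.
p_T = np.sqrt(3) / np.sqrt(64) / (2 * np.pi)
is_main_lobe = rng.random(typical_bs.size) < p_T   # True -> interferer gain G, else g
```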
§.§ Distribution of associated BS link gain

To simplify the SINR coverage analysis, we assume all links (association/interfering) experience independent and identically distributed (i.i.d.) Rayleigh fading with mean 1, h∼exp(1). We denote the normalized transmit power P_B = 1 and represent the noise variance by N_0. Since the SINR expression in (<ref>) is conditioned on the associated BS link gain u, we first analyze the distribution of u. Based on the strongest BS association rule, the receiver can be associated with either a typical, cross or parallel BS. The following lemma provides the cumulative distribution function (CDF) of the associated BS link gain of the typical/cross/parallel BSs, respectively. The CDFs of the associated BS link gains of the typical BSs u_T = max_x_T∈Φ_T{ℓ_L(x_T)}, cross BSs u_C = max_(x_C,y_C)∈Φ_C{ℓ_N(x_C)ℓ_L(y_C)} and parallel BSs u_P = max_(x_P,y_P,z_P)∈Φ_P{ℓ_N(x_P)ℓ_N(y_P)ℓ_L(z_P)} are approximated as F_u_T(u) = exp(-γ_Tλ_B u^-1/α_L), F_u_C(u) = exp(-γ_Cλ_B^α_L/α_N u^-1/α_N), and F_u_P(u) ≈ 2√(2γ_Pλ_Sλ_B^α_L/α_N u^-1/α_N) K_1(2√(2γ_Pλ_Sλ_B^α_L/α_N u^-1/α_N)), where γ_T = 2G^1/α_L, γ_C = 2^1+α_L/α_Nλ_S(cG)^1/α_NΓ(1-α_L/α_N), γ_P = γ_C c^1/α_N, and K_1(·) is the first-order modified Bessel function of the second kind <cit.>. See Appendix <ref>. Based on the properties of the modified Bessel function, when the argument μ of K_1(μ) becomes small, it can be approximated as <cit.> K_1(μ)∼μ^-1. The argument of the modified Bessel function in (<ref>) scales with λ_S^2λ_B^α_L/α_N, and the corner loss term c further reduces its value, so the approximation in (<ref>) applies. Consequently, we can approximate (<ref>) as F_u_P(u) ≈ 2√(2γ_Pλ_Sλ_B^α_L/α_N u^-1/α_N)(2√(2γ_Pλ_Sλ_B^α_L/α_N u^-1/α_N))^-1 = 1, which implies that, generally, the largest gain from a parallel BS is small, i.e., the probability of being associated with a parallel BS is negligible. Using Lemma <ref>, the CDF of the associated BS link gain U = max{u_T, u_C, u_P} can be evaluated as F_U(u) = ℙ(max{u_T, u_C, u_P}<u) =^(a)ℙ(u_T<u)ℙ(u_C<u)ℙ(u_P<u) ≈^(b)exp(-γ_Tλ_B u^-1/α_L)exp(-γ_Cλ_B^α_L/α_N u^-1/α_N), where (a) is based on the fact that the locations of the typical/cross/parallel BSs are mutually independent, and (b) follows from the results of Lemma <ref> and the observation that the association with parallel BSs is negligible. Fig. <ref> compares the numerically evaluated CDF of the associated BS link gain for the following cases: i) association only with typical BSs, ii) with typical/cross BSs, and iii) considering all association cases, against the theoretical result given in (<ref>). The simulation parameters we use are summarized in Table <ref>. These parameters apply to all of the following simulation results, unless stated otherwise. It is seen that the analytic result matches well with the numerical result. It can also be seen that the empirical CDF curves obtained with and without the association with the parallel BSs coincide. This verifies the analysis in Lemma <ref> and the subsequent approximation for the largest gain seen from parallel BSs. Also, the curves show that the cross BS association is small compared to the typical BS association, with the given simulation parameters. The simulation shows that LOS association with the typical BSs is dominant in urban mmWave microcellular networks.
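For concreteness, the lemma's CDFs are straightforward to evaluate numerically. A minimal sketch follows (Python/SciPy; it reuses the illustrative parameters alpha_L, alpha_N, c, lam_S, lam_B from the simulation sketch above and G from the antenna sketch, which is an assumption of this snippet) and checks that the parallel-BS factor 2√(μ)K_1(2√(μ)) is indeed ≈ 1 for small μ:

```python
import numpy as np
from scipy.special import gamma as Gamma, k1

r = alpha_L / alpha_N
gamma_T = 2 * G ** (1 / alpha_L)
gamma_C = 2 ** (1 + r) * lam_S * (c * G) ** (1 / alpha_N) * Gamma(1 - r)
gamma_P = gamma_C * c ** (1 / alpha_N)

def F_U(u):
    # Approximate CDF of the associated link gain (typical x cross factors).
    return np.exp(-gamma_T * lam_B * u ** (-1 / alpha_L)
                  - gamma_C * lam_B ** r * u ** (-1 / alpha_N))

# Parallel-BS factor: 2*sqrt(mu)*K_1(2*sqrt(mu)), with K_1(z) ~ 1/z for small z,
# so the factor is ~ 1, i.e., association with parallel BSs is negligible.
u = 1e-6
mu = 2 * gamma_P * lam_S * lam_B ** r * u ** (-1 / alpha_N)
print(2 * np.sqrt(mu) * k1(2 * np.sqrt(mu)))   # ~ 0.999...
```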
§.§ Coverage probability

In this section, we derive a closed-form expression for the coverage probability p_c(u,T) conditioned on the associated BS link gain u. The conditional coverage probability is defined as p_c(u,T) = ℙ(SINR>T | u). Using (<ref>) – (<ref>), (<ref>) can be expanded in terms of the Laplace transforms of the interference conditioned on u and the noise as follows: p_c(u,T) = ℙ(h>Tu^-1(N_0+I_ϕ_T(o)+I_ϕ_C(o)+I_ϕ_P(o))) =^(a)exp(-Tu^-1N_0)ℒ_I_ϕ_T(Tu^-1)ℒ_I_ϕ_C+I_ϕ_P(Tu^-1), where (a) is based on the assumption of i.i.d. Rayleigh fading channels, and ℒ_(·) is the Laplace transform (LT) of the random variable (·). Note that we cannot completely decouple the interference terms, since the propagation links from the cross and parallel BSs could potentially share the same path segments, thus making their individual interference terms dependent. To analyze the problem, we start by examining the parallel BS interference. The LT of the interference from the parallel BSs I_ϕ_P is lower bounded by ℒ_I_ϕ_P(T,u) ⪆ 2√(2γ_Pλ_Sλ_B^α_L/α_Nϱ(T)^α_L/α_N u^-1/α_N) K_1(2√(2γ_Pλ_Sλ_B^α_L/α_Nϱ(T)^α_L/α_N u^-1/α_N)) ≈ 1, where γ_P is defined in (<ref>), and ϱ(t) = ∫_1^∞ 1/(1+t^-1μ^α_L) dμ. The proof follows from the proof of Lemma <ref> given in Appendix <ref>, and is provided in Appendix <ref>. The lower bound of the LT of the parallel interference evaluates to 1, which indicates that the interference from the parallel BSs is small enough to be neglected, i.e., I_ϕ_P≈ 0. Hence, the correlation of the cross and parallel interference can be neglected and the coverage probability in (<ref>) can be reformulated as p_c(u,T) ≈ exp(-Tu^-1N_0)ℒ_I_ϕ_T(Tu^-1)ℒ_I_ϕ_C(Tu^-1), which is derived in the following theorem. The coverage probability conditioned on the channel gain u of the associated link is p_c(u,T) ≈ exp(-β_1u^-1)exp(-β_2λ_B u^-1/α_L)exp(-β_3λ_B^α_L/α_N u^-1/α_N), where β_1 = TN_0, β_2 = γ_T(p_Tϱ(T)+(1-p_T)ϱ(Tg/G)), β_3 = γ_C(p_T^α_L/α_Nϱ(T)^α_L/α_N+(1-p_T)^α_L/α_Nϱ(Tg/G)^α_L/α_N), and ϱ(·) is defined in (<ref>). See Appendix <ref>. Using Theorem <ref> and the distribution of the associated BS link gain in (<ref>), the SINR coverage probability can be evaluated as P_c(T) = ∫_0^∞ p_c(u,T)f_U(u)du, where p_c(u,T) is provided in (<ref>), and the probability density function (PDF) f_U(u) can be obtained as the derivative of the CDF derived in (<ref>). Though the coverage probability serves as an important metric in evaluating performance, it is not sufficient to characterize how much data rate the system can support. We therefore use a simplified definition of the throughput 𝒯(η) to quantify the data rate, 𝒯(η) = log_2(1+η)P_c(η), where η is the SINR threshold and P_c(η) is the coverage probability given in (<ref>).
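Putting the theorem together with the CDF of u, P_c(T) reduces to a one-dimensional integral that is easy to evaluate numerically. The sketch below (Python/SciPy) is one way to do it; it reuses the illustrative parameters and the gains G, g, p_T, gamma_T, gamma_C, r, lam_B from the earlier snippets, and the noise power N_0 is an assumed example value:

```python
import numpy as np
from scipy.integrate import quad

N0 = 1e-9   # noise variance (illustrative)

def rho(t):
    # rho(t) = int_1^inf dmu / (1 + t^{-1} mu^{alpha_L})
    return quad(lambda mu: 1.0 / (1.0 + mu ** alpha_L / t), 1.0, np.inf)[0]

def P_c(T):
    rT, rg = rho(T), rho(T * g / G)
    beta1 = T * N0
    beta2 = gamma_T * (p_T * rT + (1 - p_T) * rg)
    beta3 = gamma_C * (p_T ** r * rT ** r + (1 - p_T) ** r * rg ** r)
    a, b = gamma_T * lam_B, gamma_C * lam_B ** r

    def integrand(u):
        p_cond = np.exp(-beta1 / u - beta2 * lam_B * u ** (-1 / alpha_L)
                        - beta3 * lam_B ** r * u ** (-1 / alpha_N))
        # f_U(u) = d/du exp(-a*u^{-1/alpha_L} - b*u^{-1/alpha_N})
        F = np.exp(-a * u ** (-1 / alpha_L) - b * u ** (-1 / alpha_N))
        f = F * (a / alpha_L * u ** (-1 / alpha_L - 1) + b / alpha_N * u ** (-1 / alpha_N - 1))
        return p_cond * f

    # Small positive lower cutoff avoids floating underflow; the integrand vanishes at 0.
    return quad(integrand, 1e-12, np.inf, limit=200)[0]

T = 10 ** (0 / 10)                        # 0 dB SINR threshold
throughput = np.log2(1 + T) * P_c(T)      # T(eta) = log2(1 + eta) * P_c(eta)
```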
§.§ The effect of LOS and NLOS interferers

In Proposition <ref>, we showed that the parallel BS interference can be neglected in the analysis. In this section, we further compare the effects of the typical interference I_ϕ_T and the cross interference I_ϕ_C. For tractable analysis, we assume the receiver is associated with a typical BS, so that we have a simpler distribution of the associated BS link gain. The analysis is based on the application of Jensen's inequality to the individual LTs of I_ϕ_T and I_ϕ_C. From Theorem <ref>, by unconditioning on the associated BS link gain u, the LT of the interference of the BSs on the typical street is ℒ_I_ϕ_T(T) = 𝔼_u[exp(-β_2λ_B u^-1/α_L)] and the LT of the interference due to the NLOS BSs on the cross streets is ℒ_I_ϕ_C(T) = 𝔼_u[exp(-β_3(λ_B u^-1/α_L)^α_L/α_N)]. Define two convex functions φ_1(u) = exp(-u) and φ_2(u) = exp(-u^α_L/α_N). Since we assume the receiver is associated with a typical BS in this case, the CDF of the associated BS link gain u becomes F(u) = exp(-γ_Tλ_B u^-1/α_L). Based on the CDF of u given above in (<ref>), we can derive the expectation of u^-1/α_L as 𝔼_u[u^-1/α_L] = 1/(γ_Tλ_B); hence, by Jensen's inequality, the lower bound of ℒ_I_ϕ_T(T) becomes ℒ_I_ϕ_T(T) ≥ ℒ^LB_I_ϕ_T(T) = exp(-β_2/γ_T) = exp(-p_Tϱ(T)-(1-p_T)ϱ(Tg/G)). Similarly, we derive the expectation of (λ_B u^-1/α_L)^α_L/α_N = λ_B^α_L/α_N u^-1/α_N as 𝔼_u[(λ_B u^-1/α_L)^α_L/α_N] = (1/γ_T)^α_L/α_NΓ(1+α_L/α_N), with the lower bound of ℒ_I_ϕ_C(T) evaluated as ℒ^LB_I_ϕ_C(T) = exp(-(1/γ_T)^α_L/α_Nβ_3Γ(1+α_L/α_N)) = exp(-2λ_S c^1/α_NΓ(1-α_L/α_N)Γ(1+α_L/α_N)ε), where we denote ε = p_T^α_L/α_Nϱ(T)^α_L/α_N+(1-p_T)^α_L/α_Nϱ(Tg/G)^α_L/α_N. The argument inside the exponential function in (<ref>) scales with λ_S and c^1/α_N, where λ_S≪1 and c≪1; the argument is therefore effectively small. Another factor that might influence the cross street interference is the ratio between the pathloss exponents of the LOS/NLOS segments, r = α_L/α_N. Generally, when α_N is much larger than α_L, Γ(1-α_L/α_N)Γ(1+α_L/α_N) is not that big, which leads to ℒ^LB_I_ϕ_T(T)≪ℒ^LB_I_ϕ_C(T)≈1. The lower bound is fairly close to 1 but much larger than the lower bound of the LT of the typical interference. This indicates that in this case the cross interference is much smaller than the typical interference, and is also negligible. When α_N is very close to α_L, however, Γ(1-α_L/α_N)Γ(1+α_L/α_N) can be very large, which averages out the effects of the small λ_S and c^1/α_N. In this case, the cross interference can also contribute significantly in certain urban canyons, where α_N→α_L. This scaling law also leads to the intuitive insight that when the street intensity increases, the effect of cross BS interference grows larger: from (<ref>), it can be seen that the lower bound of the LT ℒ^LB_I_ϕ_C(T) scales exponentially with -λ_S c^1/α_NΓ(1-α_L/α_N)Γ(1+α_L/α_N). Fig. <ref> gives a comparison between the analytic and simulation results of the coverage probability with different selections of α_N, for the cases when considering no interference (noise only), interference from only the typical BSs, from both typical and cross BSs, and from all interferers. When α_N = 7, it is shown by the first five curves in the legend that the coverage probability curves with different interference components almost overlap. This verifies the corresponding proof in Proposition <ref> that the parallel interference can be neglected, and the validity of the Jensen's inequality lower bound analysis in (<ref>) and (<ref>). For Fig. <ref> and Fig. <ref>, we set the corner loss as Δ = 20 dB. It will also be shown in Section <ref> that with the corner loss ranging from 30 dB to 0 dB (the no corner loss case), the coverage probability does not vary significantly. For the black curve pair, where α_N = 2.51→α_L = 2.5, and the green curve pair, where α_N = 2.52, there do exist certain differences between the coverage probability considering only the typical interference and that also considering the cross interference, even though for α_N = 2.52 the difference is already small. We then choose the value α_N = 3, and it is shown that the coverage probability curves almost coincide with those for α_N = 7.
Hence, we can conclude that under the Manhattan distance based pathloss model, α_N does influence the contribution of the cross street BS interference to the coverage probability. In most cases, the NLOS interference (from both cross and parallel BSs) is negligible; only when α_N→α_L does the cross interference become significant enough to have an impact on the coverage probability. The effect of different selections of α_N can also be observed from the parameter γ_C in (<ref>), which scales with Γ(1-α_L/α_N), too. In the case when α_N→α_L, the absolute value of the argument inside the exponential function representing the CDF of the maximum cross BS power becomes large, which makes the CDF smaller, hence making it easier to be associated with a cross BS. For the following analysis, for ease of explanation, we adopt the pathloss exponent α_N = 7, which is the value recorded in the measurement results <cit.>, <cit.>.

§ SCALING LAWS WITH NETWORK DENSITIES

In this section, we analyze the scaling laws of the coverage probability and the association probability with the network densities, i.e., the street intensity λ_S and the BS intensity λ_B. We apply tight approximations to the coverage probability and reveal an interesting interplay between the performance and the network deployment.

§.§ Scaling laws for coverage probability

In this section, we focus on answering the following questions: i) how densely should BSs be deployed in urban streets to maximize coverage at a minimum cost? ii) how does the coverage change with different densities in different cities?

§.§.§ Scaling law with BS intensity

The interference limited scenario targets an asymptotic case, where the noise can be neglected and we can thus focus fully on the interplay between the network intensities. This scenario can be achieved either by a high BS intensity (per street) or by dense street deployment. Based on the coverage probability given in (<ref>) and (<ref>), after neglecting the noise term and changing variables x = λ_B u^-1/α_L, the expression for the coverage probability becomes P_c(T) = ∫_0^∞exp(-(β_2+γ_T)x)exp(-(β_3+γ_C)x^α_L/α_N)(γ_T+γ_C(α_L/α_N)x^α_L/α_N-1)dx, where the parameters β_2, β_3, γ_T and γ_C are provided in Section <ref>. Under the Poisson models for BSs and the Manhattan distance pathloss model, one observation from (<ref>) is that the coverage probability is independent of the BS intensity. On the one hand, when both the street and BS intensities grow large, it is intuitive that with ultra-dense deployment of BSs, i.e., λ_B→∞, both the associated link gain and the interference become large, and therefore their effects on the coverage probability cancel out, which leads to an asymptotic value of the coverage probability. On the other hand, when only the street intensity itself grows large, the scenario also becomes interference limited. In this case, the coverage probability is still a constant, however densely the BSs are deployed. This reveals an important insight: when the street intensity grows large, the increase in coverage probability from deploying denser BSs is less significant. We plot Fig. <ref> to demonstrate the above two observations in an ultra-dense network where the intensity of BSs grows large. First, it is shown that from approximately λ_B = 0.05 (average BS spacing of 20 m) onwards, for different street intensities, the coverage probability starts to converge to the asymptotic value. Second, with a denser street distribution (e.g., λ_S = 0.1, red curve), the increase of the coverage probability is less prominent.
Also, a denser street distribution leads to a lower asymptotic coverage probability.

§.§.§ Scaling law with street intensity

In the last section, we demonstrated the impact of different city streets (with different intensities) on the coverage probability enhancement. Next, we reveal the scaling laws between the coverage probability and the urban street intensity. One important thing to note is that the street intensity λ_S is not arbitrarily large: the densest street deployments might have roughly 20 m average spacing between streets, i.e., λ_S = 0.05. We provide the following proposition to quantify how the coverage probability changes under different street intensities, and prove it herein. 1) When the BS intensity λ_B is large, the coverage probability decreases linearly with the street intensity λ_S. 2) When λ_B is small, the coverage probability increases linearly with λ_S. In terms of the linear scaling law and its dependence on the BS intensity, we provide the following steps of the proof.

Linear scaling law: First, from (<ref>) – (<ref>), the coverage probability can be rewritten as P_c(T) = P_1+P_2, where P_1 = ∫_0^∞exp(-β_1u^-1)exp(-(β_2+γ_T)λ_B u^-1/α_L)exp(-(β_3+γ_C)λ_B^α_L/α_N u^-1/α_N)(λ_Bγ_T/α_L)u^-1/α_L-1 du, and P_2 = ∫_0^∞exp(-β_1u^-1)exp(-(β_2+γ_T)λ_B u^-1/α_L)exp(-(β_3+γ_C)λ_B^α_L/α_N u^-1/α_N)(γ_C/α_N)λ_B^α_L/α_N u^-1/α_N-1 du. We then rewrite the second part in (<ref>), using integration by parts, as P_2 = (γ_C/(γ_C+β_3))∫_0^∞exp(-β_1u^-1)exp(-(β_2+γ_T)λ_B u^-1/α_L)∂[exp(-(β_3+γ_C)λ_B^α_L/α_N u^-1/α_N)]/∂u du = γ_C/(γ_C+β_3) - (γ_C/(γ_C+β_3))∫_0^∞exp(-(β_3+γ_C)λ_B^α_L/α_N u^-1/α_N)∂[exp(-β_1u^-1)exp(-(β_2+γ_T)λ_B u^-1/α_L)]/∂u du. In both (<ref>) and (<ref>), only β_3 = ζ_1λ_S and γ_C = ζ_2λ_S depend on λ_S. Further, β_3 scales linearly with γ_C, which itself is small due to the terms λ_S and c^1/α_N. Then, by applying the first-order Taylor approximation exp(-x)≈1-x to exp(-(β_3+γ_C)λ_B^α_L/α_N u^-1/α_N) ≈ 1-λ_S(ζ_1+ζ_2)λ_B^α_L/α_N u^-1/α_N in (<ref>) and (<ref>), we can see that P_1 and P_2 scale linearly with λ_S, hence proving the linear scaling law of the coverage probability with λ_S. Fig. <ref> compares the exact coverage probability in (<ref>) and that with the Taylor approximation. It is shown that under the different street intensities λ_S = 0.001, 0.01, 0.02, the exact results match well with the Taylor approximations. This verifies the accuracy of using the Taylor approximation to prove the linear scaling law. Another observation here is that when the street density is relatively small, e.g., λ_S = 0.001, the coverage probability is insensitive to the NLOS pathloss exponent α_N, since the coverage remains almost constant with α_N ranging from 3 to 10. When the streets become dense, the coverage probability decreases faster with growing α_N. This is consistent with the fact that α_N only affects the pathloss of the NLOS links, and NLOS BSs are negligible in both association and interference.

Dependence on BS intensity: To demonstrate the different scaling laws of the coverage probability with the BS intensity, we segregate the components in (<ref>) which depend on λ_S in the integral, and define Υ(λ_S) = exp(-λ_S(ζ_1+ζ_2)λ_B^α_L/α_N u^-1/α_N)((λ_Bγ_T/α_L)u^-1/α_L-1+(λ_Sζ_2λ_B^α_L/α_N/α_N)u^-1/α_N-1), the derivative of which is Υ'(λ_S) = (λ_B^α_L/α_N/α_N)u^-1/α_N-1exp(-λ_S(ζ_1+ζ_2)λ_B^α_L/α_N u^-1/α_N)(ζ_2-(ζ_1+ζ_2)α_N[(γ_Tλ_B/α_L)u^-1/α_L+(λ_Sζ_2λ_B^α_L/α_N/α_N)u^-1/α_N]).
Since the exponential part in (<ref>) is always positive, and ζ_1 and ζ_2 are independent of λ_B, there exists a threshold λ_B^* which satisfies (γ_Tλ_B^*/α_L)u^-1/α_L + (λ_Sζ_2(λ_B^*)^α_L/α_N/α_N)u^-1/α_N = ζ_2/((ζ_1+ζ_2)α_N). Hence, when λ_B>λ_B^*, Υ'(λ_S)<0, which indicates that when the BS intensity is larger, the coverage probability decreases with λ_S. Further, when λ_B<λ_B^*, denser streets lead to a higher coverage probability. Fig. <ref> illustrates the linear scaling of the coverage probability with the intensity of streets λ_S. It can first be observed that the coverage probability scales linearly with the intensity of streets, and that the coverage probability increases with λ_S while it decreases with the corner loss Δ when the BS intensity is relatively small, λ_B = 0.005. Also, the coverage probability decreases with λ_S for the large BS intensity λ_B = 0.01, while it increases with the corner loss in the meantime. This implies that when the BS deployment is dense, interference becomes dominant and a larger corner loss reduces the interference; when the BSs are relatively sparse, a small corner loss strengthens the signal from the cross BSs, thus making the associated link gain stronger and enhancing the coverage probability. Also, it can be observed that when the corner loss becomes small (e.g., the no corner loss case Δ = 0 dB), the coverage probability becomes more sensitive to the change of street intensities, which is shown by the larger slope of the coverage probability curve. This is because the smaller corner loss makes the cross BS interference more prominent, thus increasing the sensitivity of the coverage probability to the street intensity. From the above analysis, the microcellular network does not work efficiently in a scenario where both the BS and street intensities are large. When the BSs are sparsely deployed in an urban landscape with increasing street intensities (i.e., where blocks are small), a typical UE is more likely to be associated with a BS on the cross streets, and also can have a larger associated BS link gain. When λ_B grows large, however, the system becomes interference-limited, and dense BS deployments in dense streets only contribute more interference and lower the coverage probability. This sheds light on an important conclusion: ultra-dense BS deployment should be avoided in urban canyons with dense streets.

§.§ LOS probability

In this section, we analyze the probability that a link is LOS under the MPLP model. This result will also be applied to the calculation of the pathloss model of <cit.> in Section <ref>, for the comparison of the new Manhattan distance based model and the previous Euclidean distance based pathloss models. We compare the closed-form LOS probability to that of the 3GPP microcellular LOS model, which shows a good match between the MPLP LOS probability and the realistic microcellular scenario in 3GPP. This justifies the accuracy of the MPLP in urban street modeling. In the MPLP, the LOS probability of a propagation link from a BS at Euclidean distance d is p_LOS^MPLP(d) = (1-exp(-4dλ_S))/(4dλ_S). We illustrate the LOS probability analysis in Fig. <ref>. According to Slivnyak's theorem <cit.>, <cit.>, <cit.>, we first add a horizontal street S_o crossing the receiver at o.
Conditioning on the distance between the BS and the receiver being d, there are a total of N_h+2 potential locations of BSs on the horizontal streets and a total of N_v potential BS locations on the cross streets at Euclidean distance d (these are the intersections of the streets with the circle of radius d centered at the receiver O). Because the streets are modeled as a Poisson line process in the MPLP model, N_h and N_v are Poisson random variables with N_h/2∼Poisson(2λ_Sd) and N_v/2∼Poisson(2λ_Sd). Hence, the probability of a LOS link is p_LOS^MPLP(d) = 𝔼[2/(2+N_h+N_v)], which simplifies to p_LOS^MPLP(d) =^(a)∑_k=0^∞ (4λ_Sd)^k/((1+k)k!)·exp(-4λ_Sd) = (exp(-4λ_Sd)/(4λ_Sd))∑_k=1^∞ (4λ_Sd)^k/k! =^(b)(1-exp(-4dλ_S))/(4dλ_S), where (a) is derived based on the distribution of the Poisson process and (b) utilizes the power series of the exponential function: exp(x) = ∑_k=0^∞ x^k/k!. The LOS probability given in the 3GPP microcellular model is <cit.> p_LOS^3GPP(d) = min{1, 18/d}(1-exp(-d/36)) + exp(-d/36). First, it should be noted that the expression for the LOS probability under the MPLP model has a form similar to that of the 3GPP microcellular model. By fitting the result to the 3GPP microcellular LOS probability in (<ref>) with a minimum mean squared error regression, the street intensity is λ_S = 0.0092 (we use λ_S = 0.01 in simulations). The comparison in Fig. <ref> also shows that the LOS probability obtained in (<ref>) closely matches that of the 3GPP model. It should be noted that even though our pathloss model is based on the Manhattan distance, the Euclidean distance is still a widely adopted metric in understanding urban cellular systems, since it is generally easier to measure and manipulate. We use the Euclidean distance based LOS probability to evaluate the Euclidean distance based pathloss model in <cit.>, and also to verify the MPLP street model by comparing to the 3GPP Euclidean distance based LOS model.
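The two LOS probability curves are easy to reproduce. A minimal sketch follows (Python/NumPy; λ_S = 0.0092 is the fitted MMSE value quoted above):

```python
import numpy as np

def p_los_mplp(d, lam_S):
    # Lemma: p_LOS(d) = (1 - exp(-4*d*lam_S)) / (4*d*lam_S)
    x = 4.0 * d * lam_S
    return (1.0 - np.exp(-x)) / x

def p_los_3gpp(d):
    # 3GPP UMi LOS probability
    return np.minimum(1.0, 18.0 / d) * (1.0 - np.exp(-d / 36.0)) + np.exp(-d / 36.0)

d = np.linspace(1.0, 500.0, 500)
gap = np.max(np.abs(p_los_mplp(d, 0.0092) - p_los_3gpp(d)))  # small maximal deviation
```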
There exists a linear scaling law for the association probability with the street intensity in scenarios with significant shadowing loss at the corner, which is shown in Fig. <ref>. Also, unlike α_N, which only affects the NLOS BS pathloss, the LOS pathloss exponent α_L is involved in the pathloss calculation for both typical and cross BSs. The decrease of the typical association probability with larger α_L implies that the LOS link pathloss is more sensitive to changes in the pathloss exponents. Also, it is intuitive that the increase of α_N enhances the association probability since it further attenuates the transmit signal from cross street BSs. It should be noted that it is meaningful to examine the interplay between the coverage probability and these exponent values, since the pathloss exponent in reality is not fixed (we extract two reasonable parameters for the ease of analysis in this paper), but is a random variable varying from street to street <cit.>. The interplay of pathloss exponents and typical BS association probability provides insight into BS association behaviors under various channel conditions of different urban canyons. In addition, from (<ref>) there is a linear scaling law of the typical BS association probability with the intensity of cross streets in Fig. <ref>. Also, it should be noted that with the corner shadowing loss, even in an extremely dense street network, e.g., λ_S= 0.1, the association probability with typical BSs χ_T is still greater than 0.7. Only in the case with no street corner loss does the association probability χ_T decrease significantly with the street intensity λ_S. The above association probability analysis illuminates another important observation: with shadowing loss at a reasonable value, cross BSs play a minor role in BS association under the Manhattan distance based microcellular pathloss model. Similar effects on coverage probability have been demonstrated in Section <ref>. Hence, we can make the following conclusions about the BS association. First, the BS association probability with the typical BSs is independent of the BS intensities. Second, the association probability decreases linearly with the intensity of the cross streets. Third, typical BS association is less likely when the LOS pathloss exponent α_L increases. § COMPARISON OF DIFFERENT STREET MODELS In this section, we first fit the pathloss data of the Manhattan distance based pathloss model to different Euclidean distance based models and obtain the parameters for the Euclidean distance pathloss models; we then compare the coverage probability and the ergodic rate of the Manhattan distance based pathloss model adopted in this paper with those of the Euclidean distance pathloss models. Second, we compare the ergodic rate ℛ = 𝔼{log_2(1+SINR)} under different street models, namely the MPLP model in this paper, the fixed grid model and realistic street data of Chicago. §.§ Pathloss models comparison We compare our model with two different Euclidean distance based pathloss models. The first model computes the pathloss directly from the Euclidean distance d, where PL_dB(d) = 10α̃log_10d + Δ_1, α̃ is the pathloss exponent, and Δ_1 is the offset for the straight-line linear regression of the Euclidean pathloss model. In <cit.>, a key parameter for characterizing coverage in mmWave wireless networks is the distance dependent blockage probability, which has been analyzed in Section <ref>.
In the urban microcell downlink scenario, we define it as p_B(d) = 1 - p_LOS(d) and apply different pathloss exponents α̃_L and α̃_N for unblocked (LOS) and blocked (NLOS) links. The pathloss is calculated by PL_dB(d) = (1-𝕀(p_B(d))) (10α̃_Llog_10d +Δ_2^L)+ 𝕀(p_B(d))(10α̃_Nlog_10d + Δ_2^N), where 𝕀(x) is the Bernoulli function with parameter x, and Δ_2^L and Δ_2^N are respectively the offsets for the LOS and NLOS pathloss formulas. The fitting parameters are given in Table <ref>. Based on the linear regression results in Table <ref>, we compare the coverage probability and ergodic rate of these three models in Fig. <ref> and Fig. <ref>. The three models show significant differences in coverage probability and ergodic rate. This motivates us to do further theoretical analysis of our proposed model. §.§ Street models comparison In this section, we compare the ergodic rate under three different urban street models, the MPLP street model in this paper, the fixed grid model (fixed spacing between streets) and realistic street deployments in Chicago, using the Manhattan distance pathloss model. The ergodic rate is defined as ℛ = 𝔼[log_2(1 + SINR)]. The raw street data is obtained from OpenStreetMap powered by open source software <cit.>, <cit.>. We extract the map data by using the GIS tool QGIS <cit.>. The simulated area is a region in Chicago given in Fig. <ref>, and the extracted map which includes street and node information is plotted in Fig. <ref>. The parameters of the simulation scenario under the three street models are obtained based on the map we extracted from Chicago city. We assume all the street models have the same size (1.659× 2.002 km^2). It can be counted from Fig. <ref> that the numbers of vertical and horizontal streets are respectively 15 and 8 (we only count main streets which are shown explicitly in the map). Also, we assume the three models have the same (mean) street numbers. This leads to the derivation of the horizontal street density as λ_sh≈ 4.8 /km, and the vertical street density as λ_sv≈ 7.5 /km. The densities are then applied to generate two independent PPPs for horizontal and vertical streets in the MPLP model, and to set the spacing between two adjacent streets respectively as S_h = 133.5 m and S_v = 207.4 m. The comparison of the ergodic rate under the three models is given in Fig. <ref>. From this figure, the ergodic rates under these different street models are close and nearly coincide. The major reason for this observation, following from the analysis, is the negligible contribution of NLOS interference to the performance of Manhattan type mmWave microcellular networks. The result, however, not only substantiates the negligibility of NLOS interference in MPLP networks, but also shows that the conclusion is applicable to fixed grid and realistic urban canyons. Therefore, MPLP is an appropriate street model for understanding Manhattan type networks, which can yield simple yet accurate results and also provide interesting insights on the scaling of performance metrics. § CONCLUSION In this paper, we proposed a mathematical framework to model a Manhattan-type microcellular network under the urban mmWave communication system by stochastic geometry. We first analyzed the distribution of the path gain to the BS. We then derived an exact yet concise expression of the coverage probability.
The LOS interference from the BSs on the same street as the serving BS is the dominating factor in determining the coverage probability, while BSs on cross and parallel streets have insignificant effects in most of the cases. We showed that in the ultra-dense network where the intensity of BSs grows large, the network is interference-limited and the coverage probability approaches an asymptotic value. Also, the coverage probability scales linearly with the intensity of streets, and displays an interesting interplay with the BS intensity: i) when BS deployment is dense, coverage probability decreases with street intensity; ii) when BS intensity is small, the coverage probability increases with street intensity. This implies that the system does not work efficiently when both BS and street intensities are large. Therefore, there is no need to deploy many BSs in an already dense urban street environment. In addition, we showed that in most of the cases, the LOS BSs still dominate, from the perspective of both BS association and coverage, except in the case when α_N→α_L. Finally, we numerically compared the ergodic rates under MPLP, fixed spacing and a realistic street deployment in Chicago city. The ergodic rates under these street models match well, reinforcing the validity of MPLP as a simple yet accurate urban street model in mmWave microcellular analysis. § PROOF OF LEMMA <REF> Since the receiver is always associated with the main lobe of the BS, which provides the smallest pathloss, the beamforming gain is always G. Hence, the CDF of the largest received power from the typical BSs is F_u_T(u)= ℙ(max_x∈Φ_T Gx^-α_L<u) = ℙ(min_x∈Φ_T x>G^1/α_Lu^-1/α_L)=^(a)exp(-2λ_BG^1/α_Lu^-1/α_L) = exp(-γ_Tλ_B u^-1/α_L), where (a) is based on the distribution of the closest distance x to a fixed point of a one-dimensional PPP with intensity λ (min{x} follows an exponential distribution with parameter 2λ, i.e. min{x}∼Exp(2λ)), and on the independent thinning of the BSs on the typical street to those with main lobe pointing to the receiver. Similarly, the CDF of the largest received power from the BSs on the cross streets can be derived as follows F_u_C(u)= 𝔼_Φ_C[∏^(x_C, y_C)∈Φ_Cℙ(x_C^-α_Ny_C^-α_LcG<u)]=𝔼_x_C[𝔼_y_C[∏^(x_C, y_C)∈Φ_Cℙ(x_C^-α_Nmin(y_C)^-α_LcG<u) | x_C]]=^(a)𝔼_x_C[∏^x_Cexp(-2λ_B x_C^-α_N/α_L(cG)^1/α_Lu^-1/α_L)]=^(b)exp(-2λ_S∫_0^∞ [1- exp(-2λ_B x_C^-α_N/α_L(cG)^1/α_Lu^-1/α_L)] dx) = exp(-2λ_S(2λ_B)^α_L/α_N(cG)^1/α_NΓ(1-α_L/α_N)u^-1/α_N) = exp(-γ_Cλ_B^α_L/α_Nu^-1/α_N), where (a) is derived by first conditioning on x_C, and (b) is based on the probability generating functional (PGFL) of the PPP. Here, we provide an approximation of the CDF of the associated BS link gain used in Section <ref>, based on the assumption that the strongest path is always via the cross street closest to the receiver.
The CDF can be derived as F_u_P(u)≈ℙ(⋂^(x_P,y_P, z_P )∈Φ_P x_P^-α_Ny_P^-α_Nz_P^-α_Lc^2G<u) =^(a)𝔼_x_P, y_P[∏^x_P∏^y_Pexp(-2λ_B G^1/α_L u^-1/α_Lc^2/α_Lx_P^-α_N/α_Ly_P^-α_N/α_L)]=^(b)𝔼_x_P[∏^x_P𝔼_y_P[∏^y_Pexp(-2λ_B( Gc^2/(ux_P^α_Ny_P^α_N))^1/α_L)| x_P]]=^(c)𝔼_x_P[exp(-2λ_S(Gc^2(2λ_B)^α_L/u)^1/α_NΓ(1-α_L/α_N)/x_P)] = ∫_0^∞ 2λ_Sexp(-γ_Pλ_B^α_L/α_Nu^-1/α_N x^-1 - 2λ_Sx)dx=^(d) 2√(2γ_Pλ_Sλ_B^α_L/α_N u^-1/α_N)K_1(2√(2γ_Pλ_Sλ_B^α_L/α_Nu^-1/α_N)), where ⋂ denotes the intersection of all of the events defined in the set (x_P,y_P, z_P )∈Φ_P, (a) is derived conditioned on x_P, y_P, (b) is derived conditioned on x_P, (c) is based on the PGFL, and (d) follows the equation <cit.>, ∫_0^∞exp(-β/(4x) - γ x)dx = √(β/γ)K_1(√(βγ)). By simple calculations, we can conclude the proof. § PROOF OF PROPOSITION <REF> The derivation of the LT of the interference coming from the BSs on the parallel streets is similar. Based on the proof in Proposition <ref>, the LT of the interference can be lower bounded by the LT assuming all the interfering beamforming gains are G, which is formulated as ℒ_I_ϕ_P (Tu^-1)⪆𝔼_ϕ_P[exp(-∑_(x_P,y_P, z_P )∈Φ_P Tu^-1hx_P^-α_Ny_P^-α_Nz_P^-α_Lc^2G)] =^(a)𝔼_x_P{∏^y_Pexp(-2λ_BG^1/α_Lϱ(T)u^-1/α_Lc^2/α_L(xy)^-α_N/α_L)}=^(b)𝔼_x_P{exp(-γ_Pϱ(T)^α_L/α_Nλ_B^α_L/α_Nu^-1/α_N x_P^-1)} = ∫_0^∞ 2λ_Sexp(-γ_Pϱ(T)^α_L/α_Nλ_B^α_L/α_Nu^-1/α_N x^-1 - 2λ_S x)dx =2√(2γ_Pλ_Sλ_B^α_L/α_N(ϱ(T))^α_L/α_N u^-1/α_N)× K_1(2√(2γ_Pλ_Sλ_B^α_L/α_N(ϱ(T))^α_L/α_Nu^-1/α_N)), where (a) and (b) follow the standard procedures in the analysis of stochastic geometry and are similar to the proofs of the Laplace transforms of I_ϕ_T and I_ϕ_C in Appendix B. § PROOF OF THEOREM <REF> We respectively give the LTs of the three kinds of interferers ϕ_T, ϕ_C and ϕ_P. The LT of the typical BS interference ℒ^G_I_ϕ_T(s) with beamforming gain G can be given by ℒ^G_I_ϕ_T(s) = 𝔼[exp(-sG∑_x_T∈Φ_T hx_T^-α_L)] =exp(-2λ_Bp_T∫_(u/G)^-1/α_L^∞𝔼(1-exp(-sGhx_T^-α_L))) = exp(-2λ_Bp_T∫_(u/G)^-1/α_L^∞1/1+s^-1G^-1x_T^α_Ldx_T). For the interference with beamforming gain g, the LT ℒ^g_I_ϕ_T(s) can be derived as ℒ^g_I_ϕ_T(s) =𝔼[exp(-sg∑_x_T∈Φ_T hx_T^-α_L)] = exp(-2λ_B(1-p_T) ∫_(u/G)^-1/α_L^∞𝔼(1-exp(-sghx_T^-α_L))) = exp(∫_(u/G)^-1/α_L^∞-2λ_B(1-p_T)/1+(Tg/G)^-1uG^-1x_T^α_Ldx_T). By applying a change of variables to (<ref>) and (<ref>), and combining the results above, the LT of the interference on the typical street can be formulated as ℒ_I_ϕ_T(s) = ℒ^G_I_ϕ_T(s) ℒ^g_I_ϕ_T(s)=exp(-γ_Tλ_Bu^-1/α_L∫_1^∞1/1+T^-1μ^α_Ldμ) = exp(-β_1 λ_Bu^-1/α_L). Similarly, the LT of the cross interference with beamforming gain G follows the proof in Appendix A and the proof of ℒ^G_I_ϕ_T(s), and can be given by ℒ^G_I_ϕ_C(s) =𝔼[exp(-∑_(x_C, y_C)∈Φ_Csh x_C^-α_N y_C^-α_L c G )]=𝔼[∏_x_C𝔼[∏_y_cexp(-shx_C^-α_Ny_C^-α_LcG)|x_C]] = 𝔼[∏_x_Cexp(-2λ_Bp_T (cG)^1/α_Lx^-α_N/α_Lϱ(T))] = exp(-2λ_S (2λ_B p_T)^α_L/α_N(cGϱ(T)^α_L/u)^1/α_NΓ(1-α_L/α_N)). Combining with the LT of the cross interference with beamforming gain g, the LT of the cross interference ℒ_I_ϕ_C(u) is derived accordingly: ℒ_I_ϕ_C(u) = exp(-2^1+α_L/α_Nλ_S (cG)^1/α_NΓ(1-α_L/α_N)ελ_B^α_L/α_Nu^-1/α_N) = exp(-γ_Cελ_B^α_L/α_Nu^-1/α_N) = exp(-β_2 λ_B^α_L/α_Nu^-1/α_N), where ε was defined in (<ref>).
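As a sanity check on the closed forms above, the integral identity from <cit.> used in step (d), ∫_0^∞exp(-β/(4x) - γ x)dx = √(β/γ)K_1(√(βγ)), can be verified numerically; the following sketch (with arbitrary test values of β and γ) compares both sides with scipy.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import k1  # modified Bessel function of the second kind, order 1

beta, gamma = 3.0, 0.5  # arbitrary positive test values

lhs, _ = quad(lambda x: np.exp(-beta / (4.0 * x) - gamma * x), 0.0, np.inf)
rhs = np.sqrt(beta / gamma) * k1(np.sqrt(beta * gamma))

print(lhs, rhs)  # the two values agree to within the quadrature tolerance
```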
arXiv:1702.08122v3 [cs.NI], 27 February 2017. Yuyang Wang, Kiran Venugopal, Andreas F. Molisch and Robert W. Heath Jr, "MmWave vehicle-to-infrastructure communication: Analysis of urban microcellular networks".
Social media and data mining are increasingly being used to analyse political and societal issues. Here we undertake the classification of social media users as supporting or opposing ongoing independence movements in their territories. Independence movements occur in territories whose citizens have conflicting national identities; users with opposing national identities will then support or oppose the sense of being part of an independent nation that differs from the officially recognised country. We describe a methodology that relies on users' self-reported location to build large-scale datasets for three territories – Catalonia, the Basque Country and Scotland. An analysis of these datasets shows that homophily plays an important role in determining who people connect with, as users predominantly choose to follow and interact with others from the same national identity. We show that a classifier relying on users' follow networks can achieve accurate, language-independent classification performances ranging from 85% to 97% for the three territories. Keywords: social media, national identity, socio-demographics, classification. Political Homophily in Independence Movements: Analysing and Classifying Social Media Users by National Identity Arkaitz Zubiaga, Bo Wang, Maria Liakata, and Rob Procter A. Zubiaga, B. Wang, M. Liakata and R. Procter are with the Department of Computer Science, University of Warwick, Gibbet Hill Road, Coventry CV4 7AL, United Kingdom. B. Wang, M. Liakata and R. Procter are also with the Alan Turing Institute, 96 Euston Rd, Kings Cross, London NW1 2DB, United Kingdom. E-mail: see http://www.zubiaga.org/ Received date / Accepted date ============================================================ § INTRODUCTION Social media are an increasingly important source for data mining applications, among others for exploratory research utilised as a means to analyse political and societal issues. One problem with social media is the limited availability of users' socio-demographic details that would enable analysis of the many different realities in society. Attempting to mitigate this issue, a growing body of research deals with the automated inference of socio-demographic characteristics such as age and gender <cit.>, country of origin <cit.> or political orientation <cit.>. Following this line of research, we describe and assess a data collection methodology that enables identifying two groups of social media users in territories with active independence movements: those who support the independence (pro-independence), and those who oppose it (anti-independence). Independence movements are motivated by conflicting national identities, where different parts of a population identify themselves as citizens of one nation or another, such as the Scots feeling Scottish (pro-independence) or British (anti-independence).
These situations lead to people with conflicting national identities living together in the same territory, where national identity can be defined as "a body of people who feel that they are a nation" <cit.>. Our study makes the following novel contributions: (1) we describe a methodology that relies on Twitter users' self-reported location for collecting users with conflicting national identities, as opposed to the largely studied partisanship or voting intention of users, (2) we perform a quantitative analysis focusing on the network and interactions within and across national identities, and (3) we study language-independent classification approaches using four different types of features. Our semi-automated data collection and annotation methodology enables us to collect datasets for three territories – Catalonia, the Basque Country and Scotland – with over 36,000 users. Our experiments show that the users' network can achieve highly accurate classification, outperforming the use of tweet content. An analysis of the user groups highlights the influence of political homophily in independence movements, where users predominantly form ties on the basis of their ideology, following and interacting with others that think alike. § RELATED WORK Computational approaches to the study of independence movements are scarce. The work most relevant to that which we report here is by Fang et al. <cit.>, attempting to classify users' voting intention in the 2014 Scottish independence referendum. However, their work focused on determining voting intention during a particular referendum rather than determining the users' national identity and, being limited to a single territory – Scotland – they introduced a language-dependent approach that identifies topics discussed during the referendum campaign for determining users' stance. While classification of users by political orientation is also related to our work, such as Republicans or Democrats in the US <cit.>, or Conservatives or Labour supporters in the UK <cit.>, national identities reflect independent dimensions that are not necessarily linked to partisanship. Citizens with common national identities can also vote for parties with different political ideologies, and their national identities can be instead motivated by cultural and linguistic backgrounds <cit.>. Similarly, there has been research in predicting the outcome of political elections <cit.>, but this line of research again looks at the voting intention of users rather than their national identity. Previous research has suggested that political homophily is also reflected in social media <cit.>, that is, supporters of one political party are more likely to follow one another than to follow supporters of other parties. Whether this generalises to users with different national identities has not been explored before. § DATA COLLECTION Our data collection methodology relies on users' self-reported location as a proxy for identifying the territory that users claim to be citizens of, which is directly indicative of their stance towards the ongoing independence movement in their territory. For each territory, we identify distinctive location names with which either pro-independence or anti-independence people associate themselves, which gives us ground truth labels: Catalonia. Citizens of Catalonia can feel either Catalan (pro-independence) or Spanish (anti-independence).
For the generation of the dataset distinguishing these two national identities, we rely on the fact that Catalans whose profile location contains Països Catalans or its acronym PPCC (i.e. Catalan Countries) are overtly claiming to be citizens of an independent Catalonia. The term Països Catalans unambiguously refers to an independent Catalonia, which would instead be Catalunya or Cataluña if not explicitly referring to an independent country. Alternatively, we identify users whose location contains the name of a Catalan city (e.g. Barcelona or Girona) or Catalunya/Cataluña along with Espanya or España as claiming to be Spanish citizens. Using a dataset of 12 months' worth of tweets collected from the Twitter streaming API between March 2015 and February 2016, we sampled users that satisfied the above characteristics.Basque Country. Citizens of the Basque Country can feel either Basque (pro-independence) or Spanish (anti-independence). To generate the dataset, we look for users whose profile location contains Euskal Herria or its acronym EH (i.e. Greater Basque Country). The term Euskal Herria unambiguously refers to an independent Basque Country, unlike Euskadi which refers to a region of Spain. On the other hand, we look for users whose location field contains the name of a Basque city (e.g. Bilbao or Donostia/San Sebastián) or Euskadi along with Espainia or España, which identifies users located in the Basque Country who claim to be citizens of Spain. We use the same 12 month dataset to look for users that satisfy these characteristics.Scotland. Officially part of the UK, Scotland also has an ongoing independence movement. The dataset generation process for Scotland needs to be slightly different from the two above, as the Scots do not use a different name to refer to an independent Scotland. To overcome this, we first use a Twitter dataset pertaining to the 2014 Scottish independence referendum, collected between 1st August and 30th September, 2014 using a list of keywords including `#IndyRef', `vote' and `referendum'. In this dataset, we look for supporters who tweeted one of #YesBecause, #YesScotland, #YesScot, #VoteYes and opposers who tweeted one of #NoBecause, #BetterTogether, #VoteNo, #NoThanks, as suggested by <cit.>. To make sure that we identify the users' stance towards Scotland's independence, avoiding noise from tweets that are not necessarily endorsements of the hashtag being used, we collected the profile metadata of all sampled users. To generate the final dataset, we used again the same 12 month dataset, from which we retained the profiles of all IndyRef supporters whose profile location contained Scotland but not UK, United Kingdom, GB or Great Britain, as well as all opposers whose profile location contained the name of a Scottish city (e.g. Glasgow or Edinburgh) or Scotland, along with UK, United Kingdom, GB or Great Britain.The location strings for the resulting user profiles were manually verified. The methodology was largely accurate, with 96.0%, 95.9% and 98.9% correct instances for Catalonia, the Basque Country and Scotland, respectively. Those users that did not meet our expected locations were manually removed from the datasets. The resulting datasets consist of 36,609 users (see Table <ref>). §.§ User Data CollectionFor each user in our dataset, we collect three different types of data: (1) the user's 500 most recent tweets, (2) the 500 most recent tweets favourited by the user, and (3) the list of users that the user follows and is followed by. 
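As an illustration of the location-string matching used in the Data Collection section above, a minimal sketch of the Catalonia filter might look as follows; the user records, field names and the (abbreviated) city list are hypothetical, and the real pipeline also included the manual verification step described above.

```python
import re

# Hypothetical user records; only the self-reported 'location' field is used.
users = [
    {"screen_name": "user_a", "location": "Barcelona, Països Catalans"},
    {"screen_name": "user_b", "location": "Girona, Catalunya, España"},
    {"screen_name": "user_c", "location": "Madrid"},
]

CATALAN_CITIES = ["barcelona", "girona", "lleida", "tarragona"]  # abbreviated, assumed list

def catalonia_label(location):
    """Return 'PI', 'AI' or None, following the Catalonia rules described above."""
    loc = location.lower()
    if "països catalans" in loc or re.search(r"\bppcc\b", loc):
        return "PI"  # overtly claims citizenship of an independent Catalonia
    in_catalonia = any(c in loc for c in CATALAN_CITIES) or "catalunya" in loc or "cataluña" in loc
    claims_spain = "espanya" in loc or "españa" in loc
    if in_catalonia and claims_spain:
        return "AI"  # located in Catalonia but claims to be a Spanish citizen
    return None  # unmatched users are not included in the dataset

for u in users:
    print(u["screen_name"], catalonia_label(u["location"]))
```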
The final collection comprises 27.4 million tweets including timelines and favourites, as well as 19.1 million different users occurring in follow networks. § ANALYSIS OF NATIONAL IDENTITY GROUPS To begin the analysis of the different national identities, in Figure <ref> we look at the interaction and network features by visualising connections within and across national identities. A look at the interactions shows a confusing picture where users of different national identities seem to occasionally interact with each other. However, when we look at the network visualisations, we see a totally different picture where users are mainly connected to others of the same national identity, with a clear separation between national identities, especially for Catalonia and the Basque Country. To quantify this, we compute the assortativity of these six networks, which is in turn indicative of the existence (or not) of political homophily <cit.>, i.e. the users' preference to connect to and interact with those of the same ideology. Table <ref> shows assortativity values for these six networks, along with the analysis of their statistical significance using Mann–Whitney U tests <cit.>. All six networks achieve positive assortativity scores indicating statistically significant and positive correlation. These scores are however lower for Scotland, especially in terms of interactions; this suggests that users in Scotland are more likely to follow and interact with each other than in the other two territories; however, they still show a preference to follow those who think like them. Separation between communities is much more prominent in the Basque Country and Catalonia, where connections between users who think alike are much more prevalent, with assortativity scores above 0.6. To understand the behavioural patterns that characterise the different national identity groups, we perform pairwise comparisons using Welch's t-test <cit.>. Having two different user groups in each case (pro-independence and anti-independence), Welch's t-test enables us to determine which of the groups is more prominent for a certain feature as well as the statistical significance of that prominence. The results of this analysis are shown in Table <ref>, with a set of 30 features grouped into 5 types. Regarding the tweeting activity of users, we observe that there is no consistent pattern as to who tweets more, has older accounts or gets retweeted more (#1 to #6), with pro-independence users being more active in Catalonia, anti-independence users being more active in Scotland and both groups being more active in terms of different aspects in the Basque Country. What is interesting is to look at the URLs that these users post in their tweets (#7 and #8). We see that pro-independence users tend to post more URLs whose domain belongs to their nation (i.e., .cat for Catalonia, .eus for the Basque Country or .scot for Scotland), whereas anti-independence users tend to post more URLs whose domain belongs to the officially recognised country (.es for Spain and .uk for the UK). This finding is statistically significant for Catalonia and the Basque Country, but not for Scotland. Looking at the user profiles, we see that pro-independence users tend to have more followers while anti-independence users tend to follow more people in the Basque Country and Scotland; however, it is the pro-independence users who both have more followers and follow more people in the case of Catalonia (#9 and #10).
There is no significant difference when we look at whether users from both groups are verified accounts or not (#11). The users who have the geolocation feature enabled in their accounts tend to be pro-independence in Catalonia and the Basque Country, and anti-independence in Scotland (#12). Initially we hypothesised that pro-independence users would be less likely to activate the geolocation feature, given that in that case Twitter would tag their geolocated tweets as coming from Spain or the UK, which they might dislike. However, this only holds true for Scotland and hence the users might not be concerned and/or aware of this. We also look at the URL specified in the user profiles as being one that belongs to the independent TLD (#13, .cat/.eus/.scot) or the officially recognised country's TLD (#14, .es/.uk). We observe significant differences here, for all three territories, showing that pro-independence users tend to use the independent TLD more, with the anti-independence users using the official country's TLD more. Finally, we look at the extent to which the users configure their accounts in the language of the independent nation (#15) or the official country's language (#16). There is a significant difference in both Catalonia and the Basque Country, with pro-independence users being more likely to set up their accounts in Catalan and Basque, respectively. This feature is not as indicative for Scotland, as Twitter does not offer the option to use the service in Scottish Gaelic or Scots. Instead, our analysis looked at the use of "en-gb" as the country's official setting and "en" as the alternative. An analysis with one of the Scottish local languages, were they available in the platform, might lead to different results. Interaction features (#17-#24) and network features (#25-#27) show a similar tendency; in Catalonia, it is the pro-independence users who are more likely to follow and interact with both groups than the anti-independence users, whereas in the Basque Country and Scotland the pro-independence users make more connections within their group and the anti-independence users connect more with the opposing group than the pro-independence users do. Linguistic features (#28-#30) are processed using the Polyglot Python package for language identification and sentiment analysis <cit.>. As expected, pro-independence users are more likely to use the language of their territory (Basque, Catalan, Scottish Gaelic or Scots) than the anti-independence users, which shows their passion for their cultural background. Looking at the sentiment features (#29-#30), however, we do not observe a clear pattern across territories. More interestingly, a comparison of the sentiment in the interactions within and across groups shows that users tweet positively 67.7% more often within groups than across groups (MWW = 528932618.0, p < 0.01). § STANCE CLASSIFICATION §.§ Task Definition We formulate the problem of determining the stance of users towards the independence movement in their territory as a binary, supervised classification task. Stance classification of users differs from the increasingly popular stance classification of texts <cit.> in that for the latter the stance is explicitly expressed in each text, while for users one needs to put together behavioural patterns extracted from the historical features of their account. The input to the classifier is a set of users from a specific territory. To build the classification model, a training set of users labelled with one of Y = {PI, AI} is used (PI = pro-independence, AI = anti-independence).
For a test set including a set of new, unseen users, the classifier will have to determine if each of the users is a supporter or opposer of independence, Ŷ = {PI, AI}. §.§ Classification Settings We perform the classification experiments in a stratified, 10-fold cross-validation setting separately for each territory. We micro-average the scores to aggregate the performance across different folds and report the final accuracy scores. We use four different classifiers: Naive Bayes, Support Vector Machines, Random Forests and Maximum Entropy. We use four different types of features, all of which are independent of the location string we used for determining the ground truth: * Timeline: We use Word2Vec embeddings <cit.> to represent the content of a user's timeline of most recent tweets. The model we use for the embeddings was trained for each territory using the entire collection of tweets. We represent each tweet as the average of the embeddings of its words, and finally take the average of all tweets. * Interactions: We consider that a user is interacting with another when they are retweeting or replying to them. We create a weighted list of all the users that are the target of the interactions in each of our datasets. Given the length of this list, we reduce its size by restricting to the 99th percentile of most common interactions. Each of the remaining users corresponds to a feature in the resulting vectors. For each user, we represent each of the features in the vectors as the count of interactions the user has had with the user represented by that feature. * Favourites: To represent the content of the tweets favourited by a user, we use the same approach based on word embeddings as for the timeline above, in this case applied to the favourited tweets instead. * Network: Similar to the approach used for interactions, we aggregate the list of users that appear in the networks (followees or followers) in each of our datasets. We restrict this list to the 99th percentile formed by the most frequent users in each dataset. For each user, we then create a vector with binary values representing whether each of the users is in the network of the current user. § RESULTS Table <ref> shows the classification results. Among the four feature types under study, a user's network is the most indicative feature for determining their stance. This suggests that users belonging to different identity groups tend to be connected to different users on Twitter. The rest of the features are significantly behind the performance of the network features, suggesting that the content users engage with and the people they interact with are not as indicative. Among the classifiers under study, we find that the Maximum Entropy classifier performs better than the rest when network features are used. This is consistent for all three territories, achieving 0.972, 0.903 and 0.849 for Catalonia, the Basque Country and Scotland, respectively. § DISCUSSION The methodology described here enabled us to gather large datasets to analyse independence movements through social media, developing a classifier that can determine the users' national identity. Our methodology and classifier have been tested in three territories with ongoing independence movements: Scotland, Catalonia and the Basque Country. Our classification experiments show encouraging results with high performance scores that range from 85% to 97% in accuracy with the use of a Maximum Entropy classifier that exploits each user's social network.
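As an illustration of that best-performing configuration, here is a minimal sketch (with synthetic data standing in for the real follow networks) of a Maximum Entropy classifier, i.e. logistic regression, trained on binary network features under stratified 10-fold cross-validation; scikit-learn is assumed.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)

n_users, n_feats = 1000, 300  # feature columns = most frequent accounts (99th percentile)
y = np.array(["PI"] * 500 + ["AI"] * 500)

# Synthetic binary follow-network matrix, homophilous by construction:
# each identity group preferentially connects to its own half of the accounts.
X = np.zeros((n_users, n_feats))
for i in range(n_users):
    own = slice(0, 150) if y[i] == "PI" else slice(150, 300)
    X[i, own] = rng.random(150) < 0.15       # within-group connections
    X[i] += rng.random(n_feats) < 0.02       # a few cross-group connections
X = (X > 0).astype(int)

clf = LogisticRegression(max_iter=1000)      # Maximum Entropy classifier
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
print(cross_val_score(clf, X, y, cv=cv, scoring="accuracy").mean())
```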
Moreover, an analysis of the social networks of users reveals the existence of political homophily, where users tend to connect with others from the same group or national identity. Further to this experimentation, and in a realistic scenario, the classifier trained from users whose self-reported location field reveals their national identity can then be applied to other users in that particular territory. Classification of users by national identity can then be exploited for further analysis of societal and political issues, as well as to target segments of users according to one's interests. Our plans for future work include further applying our data collection and annotation approach to other territories such as Palestine or Kurdistan. § ACKNOWLEDGMENTS This work has been supported by the PHEME FP7 project (grant No. 611233) and The Alan Turing Institute under the EPSRC grant EP/N510129/1. Dr. Arkaitz Zubiaga is an assistant professor at the University of Warwick. His research interests revolve around social media mining, natural language processing, computational social science and human-computer interaction. Bo Wang is a PhD student at the University of Warwick, under the supervision of Dr. Maria Liakata and Prof. Rob Procter. His research interests lie in social media mining, natural language processing, sentiment analysis and automatic text summarisation. Dr. Maria Liakata is an associate professor at the University of Warwick and a Turing Fellow at the Alan Turing Institute. Her research interests lie in text mining, natural language processing, biomedical text mining and sentiment analysis. Prof. Rob Procter is a Professor at the University of Warwick and a Turing Fellow at the Alan Turing Institute. His research interests lie in social media analysis, computational social science, natural language processing and mixed methods.
arXiv:1702.08388v3 [cs.CL, cs.SI], 27 February 2017. Arkaitz Zubiaga, Bo Wang, Maria Liakata and Rob Procter, "Political Homophily in Independence Movements: Analysing and Classifying Social Media Users by National Identity".
Analysis of Low Excitation HDO Transitions Toward the High-Mass Star-forming Regions G34.26+0.15, W51e_1/e_2, and W49N Magda  Kulczak-Jastrzȩbska Received: date / Accepted: date ==================================================================================================================================================A number of recent estimates of the total luminosities of galaxies in the SDSS are significantly larger than those reported by the SDSS pipeline.This is because of a combination of three effects:one is simply a matter of defining the scale out to which one integrates the fit when defining the total luminosity, and amounts on average to ≤ 0.1 mags even for the most luminous galaxies.The other two are less trivial and tend to be larger; they are due to differences in how the background sky is estimated and what model is fit to the surface brightness profile. We show that PyMorph sky estimates are fainter than those of the SDSS DR7 or DR9 pipelines, but are in excellent agreement with the estimates of Blanton et al. (2011).Using the SDSS sky biases luminosities by more than a few tenths of a magnitude for objects with half-light radii ≥ 7 arcseconds.In the SDSS main galaxy sample these are typically luminous galaxies, so they are not necessarily nearby.This bias becomes worse when allowing the model more freedom to fit the surface brightness profile. When PyMorph sky values are used, then two component Sersic-Exponential fits to E+S0s return more light than single component deVaucouleurs fits (up to ∼ 0.2 mag), but less light than single Sersic fits (0.1 mag). Finally, we show that PyMorph fits of Meert et al. (2015) to DR7 data remain valid for DR9 images.Our findings show that, especially at large luminosities, these PyMorph estimates should be preferred to the SDSS pipeline values. galaxies: fundamental parameters – galaxies: photometry – galaxies: structure § INTRODUCTIONThere is substantial interest in quantifying the luminosity and stellar mass functions in the local universe (Bernardi et al. 2017a and references therein).The Sloan Digital Sky Survey (hereafter SDSS), which surveyed about a quarter of the sky to a median redshift of about z∼ 0.1, is the benchmark database for such studies.Recently Meert et al. (2015, 2016) have made available a re-analysis of the galaxies in the SDSS DR7 release (Abazajian et al. 2009).Their analysis determines photometric parameters, such as luminosity, half-light radius, a measure of the steepness or central concentration of the profile, etc., by fitting a number of different models to the surface brightness profile:a single component deVaucouleurs profile, a single component Sersic profile, and a two component Sersic bulge plus exponential disk profile (hereafter deV, Ser and SerExp).The fitting algorithm is called PyMorph (Vikram et al. 2010; Meert et al. 2013, 2015, 2016; Bernardi et al. 2014).The PyMorph catalog yields substantially more light at high luminosities (Bernardi et al. 2013, 2016a,b, and Figure <ref> below) than previous work based on SDSS pipeline photometry.The differences impact Halo Model (Cooray & Sheth 2002) based interpretations of the relationship between galaxies and dark matter halos at z∼ 0.1 (e.g. Shankar et al. 2014).Pinning down this relationship locally is crucial for studies of how this relationship evolves. In addition, as first identified by Bernardi et al. 
(2011), at the high-mass (luminosity) end there is a special mass (luminosity) scale: 2× 10^11M_⊙ (which corresponds to an r-band luminosity scale of ∼ -22.5 mag). Various scaling relations change slope at this scale, and this is thought to be related to a change in the assembly histories – e.g. minor versus major dry mergers. It is also the mass (or luminosity) scale where the stellar mass (or luminosity) function starts to drop exponentially. For all these reasons, identifying and accounting for all possible biases so as to have reliable photometric estimates at these luminosity and mass scales is important. Here we address the reasons for the differences between PyMorph and the SDSS, and show that PyMorph should be used, especially at large luminosities. There are expected to be three main culprits. An important step in the determination of the amount of light we receive from an object is the estimation of the amount of light which is contributed by the background sky. Over-estimating the contribution from the sky will lead to an underestimate of the size and total light, and perhaps a decrease in the estimate of how centrally concentrated the object is. Bernardi et al. (2007) (see also, e.g., SDSS DR7 documentation) noted that the SDSS pipeline reductions overestimated the sky, especially in crowded fields. In the years since, the SDSS has revised its pipelines (see the DR9, Ahn et al. 2012, and subsequent data releases). In addition, a number of other analyses have also provided improved estimates (Simard et al. 2011, Blanton et al. 2011, Meert et al. 2015, 2016). One of the main goals of the present work is to compare different estimates of the sky in the SDSS footprint, and to quantify the impact this has on the estimated sizes, shapes and luminosities of galaxies. Blanton et al. (2011) argue that the SDSS values can be biased by as much as a magnitude for nearby objects with large angular size (half-light radius ≥ 40 arcseconds). However, because the bias is really associated with having a large angular size, the bias can still be significant (a few tenths of a magnitude) for large objects (half-light radius ≥ 7 arcseconds) whether or not they are nearby. There is a tight correlation between luminosity and physical size, so even though the majority of luminous galaxies in the SDSS main galaxy sample tend to be more distant (z ∼ 0.2), they still have relatively large angular sizes (≥ 7 arcseconds). In addition to the sky, two other effects contribute to differences between SDSS pipeline and more recent estimates of galaxy luminosities and sizes. One is trivial: when reporting the total light in an image, the SDSS only integrates the surface brightness profile out to about ∼ 7× the half-light radius. Others, such as PyMorph (Meert et al. 2013), do not truncate. This amounts to a small systematic difference of order 0.05 mags for deV profiles, but can be larger for Ser profiles (e.g. Kelvin et al. 2012). The second effect is more interesting: it is the fact that the luminosity and size estimates depend on the model which is fitted to the image. E.g., it is not obvious if models which have more freedom to better fit the image will end up predicting more light or less. There is another potential observational systematic: the deblending of overlapping galaxies. However, this is resolved in Meert et al.
(2015), who discuss how PyMorph handles nearby neighbours, as well as polluted fits (those that could not be deblended). Their rate of occurrence is sub-percent, and PyMorph provides a flag identifying them, so it was simple to exclude them from the analysis which follows. The present study is timely because the Meert et al. analyses are based on SDSS DR7 images. However, significant changes to the SDSS imaging pipeline were implemented in DR9, and remain in place in subsequent data releases. These are described on the SDSS website: www.sdss.org. Therefore, after defining the sample we work with in Section <ref>, our first step is to compare PyMorph analyses of the DR7 and DR9 images. This is the subject of Section <ref>. Section <ref> and Section <ref> quantify the effects of truncation. Section <ref> also highlights the fact that, because the most massive objects may be a different population having different profile shapes, it is important to specify the choice of regression, i.e. the average magnitude difference may depend on the luminosity being used as x-axis. Section <ref> compares sky estimates from the SDSS DR7 and DR9 pipelines with determinations from Blanton et al. (2011), Simard et al. (2011), and Meert et al. (2015, 2016) (hereafter B11, Simard11 and PyMorph DR7, respectively). Section <ref> shows how the choice of model to fit affects the estimated total light. A final section summarizes. When necessary, we assume a spatially flat background cosmology with parameters (Ω_m,Ω_Λ)=(0.3,0.7), and a Hubble constant at the present time of H_0=70 km s^-1Mpc^-1. § COMPARISON OF SDSS AND PYMORPH The analysis which follows is based on the SDSS DR7 and DR9 Main Galaxy samples. For these galaxies, the SDSS provides a number of photometric parameters on its website: www.sdss.org. We are most interested in the total magnitudes and half-light radii, the best SDSS pipeline estimates of which are based on fitting exponential or deVaucouleurs profiles to the sky subtracted image. Model magnitudes simply choose the better of the two fits, whereas cModel magnitudes use a linear combination of the two best fits (a χ^2-like goodness of fit metric is minimized to set the relative amplitudes of the components). Thus, although they are the result of fitting two profile shapes, cModel magnitudes are not really two-component fits. In contrast to the SDSS cModel photometry, the best PyMorph SerExp photometry is based on true two-component fits – a Sersic bulge with an exponential second component – in which the sky, assumed to be constant across the image, is also fit simultaneously (e.g. Meert et al. 2015). These fits were made using the DR7 release. §.§ Motivation We begin with a comparison of what are considered to be the best SDSS and PyMorph photometry: cModel and SerExp magnitudes. Figure <ref> shows that the two are in good agreement, except at the bright end, where PyMorph is substantially brighter. The bottom panel shows the result of replacing cModel with Model magnitudes. Except for an offset at low and intermediate luminosities, both panels show similar trends. The similarity observed at the bright end is expected because the vast majority of the most luminous galaxies are E+S0s, so Model = deV and cModel ≈ deV. The main goal of the present study is to determine which of the three culprits mentioned in the Introduction are responsible for the offsets in Figure <ref>. In particular, it may be that the agreement between cModel and SerExp at faint and intermediate luminosities is fortuitous.
Figure <ref> was made using DR7 galaxies. However, between DR7 and DR9, a number of parts of the SDSS pipeline were changed. The most important change is the SDSS sky estimate, but how flux calibration is done, and so on, also changed (see Aihara et al. 2011 and Ahn et al. 2012 for details). Therefore, our first step is to determine if the changes from DR7 to DR9 matter. Figure <ref> shows a similar comparison as in Figure <ref>, but now using DR9 values. To make this figure we ran PyMorph on a subset of 10^4 DR9 galaxies. The chosen objects are the same as those used by Meert et al. (2013) when developing and testing PyMorph. The distribution of the measured parameters of this subset reproduces the distribution of all the observed galaxies in the SDSS DR7 main galaxy sample (see their Figure 1). Comparison of Figure <ref> with Figure <ref> shows little difference: the discrepancy between SDSS and PyMorph which was known to exist in DR7 persists in DR9. §.§ Comparison of SDSS DR7 and DR9 We now consider if the best fitting PyMorph parameters have changed between DR7 and DR9. Since PyMorph fits for the sky itself – it does not use the SDSS value – we expect the change to the SDSS sky estimate to have little impact on the PyMorph fits. Figure <ref> shows that this is indeed the case: the apparent magnitudes, sizes, and Sersic indices for PyMorph Ser fits are essentially unchanged. Figure <ref> shows that this is also true for PyMorph SerExp fits; because these are two-component fits, the bottom panel shows bulge/total ratios rather than Sersic indices. Both figures show that, although there is scatter between the DR7 and DR9 values, it is similar to the statistical uncertainty on the parameters (Meert et al. 2013). It should be noted that there is larger scatter for SerExp than for Ser, because there are more free parameters and hence more potential degeneracies. Therefore, Figures <ref> and <ref> indicate that the PyMorph parameters of Meert et al. can be used essentially without modification even for DR9. (There are, of course, other studies for which the difference between DR7 and DR9 or DR13 recalibrations does matter.) §.§ Effect of truncation In what follows, we would like to compare the luminosity estimates of PyMorph and the SDSS. Both report values based on fitted models; however, whereas PyMorph integrates the fitted profile to infinity, the SDSS does not. If a two-dimensional Sersic profile with semi-major axis a and axis ratio b/a is truncated along a line of constant surface brightness, then L_ trunc = L_∞ γ(2n, b_n ρ_ trunc^1/n)/Γ(2n), where ρ_ trunc≡θ_ trunc/√(ab), γ(m,x) is the incomplete gamma function, γ(m,∞) = Γ(m), and b_n is defined by requiring γ(2n,b_n)=Γ(2n)/2. E.g., b_n≈ 7.669 when n=4. The ratio L_ trunc/L_∞ clearly depends on n. Notice that if θ_ trunc is a multiple of √(ab), then, at fixed n, the correction is the same for all axis-ratios. For example, in their work with the GAMA survey, Kelvin et al. (2012) set θ_ trunc = 10√(ab). For reasons which will become clear shortly, the grey solid curve in Figure <ref> shows θ_ trunc=7.5√(ab). This shows that, when n=4, the correction is 0.07 mags.
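This correction is easy to evaluate numerically. The sketch below, assuming scipy (whose gammainc is the regularized incomplete gamma P(a,x) = γ(a,x)/Γ(a), so that L_ trunc/L_∞ = P(2n, b_n ρ_ trunc^1/n)), solves for b_n and returns 2.5log_10(L_ trunc/L_∞); for n=4 and ρ_ trunc = 7.5 it reproduces the 0.07 mag correction just quoted.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import gammainc  # regularized lower incomplete gamma P(a, x)

def b_n(n):
    # b_n is defined by gamma(2n, b_n) = Gamma(2n)/2, i.e. P(2n, b_n) = 1/2.
    return brentq(lambda b: gammainc(2.0 * n, b) - 0.5, 1e-3, 100.0 * n)

def truncation_correction(n, rho_trunc):
    # 2.5 log10(L_trunc/L_inf), with L_trunc/L_inf = P(2n, b_n * rho^(1/n)).
    return 2.5 * np.log10(gammainc(2.0 * n, b_n(n) * rho_trunc ** (1.0 / n)))

print(b_n(4))                         # ~7.669
print(truncation_correction(4, 7.5))  # ~ -0.07 mag (deVaucouleurs profile, b/a = 1)
print(truncation_correction(8, 7.5))  # ~ -0.16 mag: the correction grows with n
```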
Unfortunately, the SDSS truncation is more complicated: the SDSS website says that it truncates with a function which drops from unity to zero between 7 and 8× the half-light radius. However, in the database, the quantity which is called r_e is the semi-major axis a, rather than √(ab). In addition, the actual form of this truncation has never been published. As we show below, we are able to reproduce the SDSS values if we use a sharp truncation radius of 7.5a making ρ_ trunc^ SDSS≈ 7.5a/√(ab) = 7.5 √(a/b). (In particular, 7.5a works substantially better than 7.5√(ab).) Hence, at fixed n, L_ trunc/L_∞ is a monotonic function of b/a: since 0≤ b/a ≤ 1, the correction is maximal when b/a=1 and L_ trunc→ L_∞ as b/a→ 0. Thus, at fixed n, there is a range of corrections which depends on the distribution of b/a. Since our goal is to compare with the SDSS, the black solid line in Figure <ref> shows the median of 2.5log_10(L_ trunc/L_∞) as a function of n, and the scatter around this median (black dashed lines), for the PyMorph Sersic reductions of SDSS E+S0 galaxies when θ_ trunc=7.5a. This shows that, when n=4, the correction is ∼ 0.05 mags, but when n=8, then the median correction is ∼ 0.16 mags. (For later type galaxies n is smaller so the correction is smaller; the blue dot shows the correction if n=1 and one truncates at 3.5× the half light radius.) In what follows, we will be careful to indicate if the reported magnitudes were based on truncation or not. However, the half-light radii we report are always those which include L_∞/2; we never use the scale associated with L_ trunc/2. §.§ Choice of regression and truncation We remarked in the Introduction that the most massive galaxies appear to be a structurally different population. So it should not be surprising if their surface brightness profiles are also different in some way. If these are objects for which SDSS and PyMorph photometry is particularly different, then plots versus PyMorph may look rather different from plots versus Model, for the same reason that, in a Gaussian mixture model, plots of y versus x can look very different from plots of x vs y. Figure <ref> shows that something like this happens in the SDSS data: the differences between Ser and Model magnitudes increase at the bright end, but they look much larger when shown as a function of Ser rather than Model magnitudes. Figure <ref> shows that the same is true of SerExp magnitudes. The differences are reduced slightly if one uses truncated Ser or SerExp magnitudes, since this reduces the systematic offset between the two magnitude estimates (the analogue of the mean shift between components in a Gaussian mixture), but it does not change the fact that the choice of x-axis matters. While truncation matters, the net effect of truncation is about half of what one would naively have expected from Figure <ref>. This is because the correction depends on n, but the n-L correlation is weak. Although objects with large L have larger n, so truncation matters more for large L, there is substantial scatter around the mean n which reduces the net effect. This is also why, in practice, it matters little (≤ 0.01 mags) whether one truncates using 7.5a or 7.5√(ab). Of course, truncation matters even less for the SerExp fits.
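The regression-direction effect invoked above is easy to demonstrate with a toy two-population model (not data from the paper): when a minority population is offset in y, the median of y-x at fixed x differs from that at fixed y, because selecting on y over-represents the offset population. A synthetic numpy sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
x = rng.normal(0.0, 1.0, size=n)                     # stand-in for, e.g., Model magnitude
offset = rng.random(n) < 0.3                         # 30% belong to an offset population
y = x - 0.5 * offset + rng.normal(0.0, 0.1, size=n)  # offset objects are 0.5 mag brighter in y

# Median difference at fixed x versus fixed y: cutting on y over-represents
# the offset population, so the same data yield very different trends.
print(np.median((y - x)[x < -2.0]))  # small: offset objects are a minority at fixed x
print(np.median((y - x)[y < -2.0]))  # much more negative: the offset population dominates
```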
§ COMPARISON OF SKY ESTIMATES Our goal is to compare PyMorph and SDSS sky estimates. However, when fitting a model to the observed galaxy image, PyMorph fits for the sky – assumed to have constant surface brightness across the image – simultaneously. Therefore, it is possible that the fitted sky varies when the model which is fitted to the galaxy surface brightness profile changes. This would make comparisons with the SDSS sky estimate depend on the fitted model. Fortunately, Figure <ref> shows that the estimated sky is essentially the same whatever the fitted model. (We have plotted versus truncated magnitudes. The y-axis label does not specify truncated because the sky estimates are the same whether or not we truncate.) This has two consequences. First, when comparing the PyMorph sky with other estimates, we do not need to specify if it is the deV sky, the Ser sky, or the SerExp sky, since, for the present purposes, they are all the same. We exploit this fact in Section <ref>. Second, the similarity in sky values indicates that differences between PyMorph models are not driven by the sky. We use this fact in Section <ref>. We are now ready to compare background sky estimates with those from PyMorph, for which we use the SerExp sky value. Figure <ref> compares background sky estimates from SDSS DR7, Simard11, and PyMorph DR7 SerExp. The PyMorph sky is faintest and SDSS brightest, with the Simard11 sky lying closer to the SDSS at the faint and intermediate luminosities and in between at the bright end. The differences from PyMorph are particularly large for objects with large angular sizes or luminosities. While it is tempting to conclude that Simard11 is the most prudent choice because it lies between the other two, Figure <ref> shows that the PyMorph sky estimate is in excellent agreement with that of B11. In contrast to the previous figure, this one uses DR9 images, for which Simard11 values are not available. The PyMorph and B11 sky values were determined in very different ways. Those of B11 are based on fitting the masked background sky for each SDSS scan with a smooth continuous function across the sky. (In Figure <ref>, we used the B11 sky value measured at the center of the galaxy image since the variation of the sky value on the scale of a galaxy is very small.) In contrast, the PyMorph sky is determined on an object-by-object basis. Therefore, the agreement between the two is nontrivial, and strongly suggests that these two estimates are to be preferred over the others. Note also that the scatter around the median is symmetric, whereas in the comparison with SDSS it is not. B11 argue that their sky estimates represent a substantial improvement over the standard SDSS catalog results and should form the basis of any analysis of nearby galaxies using the SDSS imaging data. Figure <ref> shows that, in fact, this is not restricted to nearby galaxies: e.g., for all galaxies with apparent sizes ≥ 7 arcseconds, the SDSS sky is biased (left panel) (we quantify its effect on photometric parameters in Section <ref>). In the SDSS main galaxy sample these tend to be galaxies with large luminosities (right panel) which are typically in crowded fields. The agreement between B11 and PyMorph in both panels suggests that, in contrast to the SDSS, PyMorph is unbiased for large luminous galaxies.
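A rough toy estimate (not part of the fitting machinery discussed above) shows why the sky matters most for objects with large angular sizes: a constant sky error of ε flux units per square arcsecond absorbed within an aperture of radius R changes the measured flux by επR², so the magnitude bias grows roughly quadratically with apparent size at fixed galaxy flux. In the sketch below, the galaxy magnitude, sky-error level and aperture are all purely illustrative assumptions.

```python
import numpy as np

def sky_bias_mag(m_gal, r_half_arcsec, sky_err_mag_arcsec2, aperture_in_re=7.5):
    """Toy magnitude bias if a constant sky error is absorbed within the aperture."""
    f_gal = 10 ** (-0.4 * m_gal)              # galaxy flux (arbitrary zeropoint)
    eps = 10 ** (-0.4 * sky_err_mag_arcsec2)  # sky-error flux per arcsec^2
    area = np.pi * (aperture_in_re * r_half_arcsec) ** 2
    return -2.5 * np.log10(1.0 - eps * area / f_gal)

# Illustrative numbers only: an m = 16 galaxy and a 28.5 mag/arcsec^2 sky error.
for r in (2.0, 7.0, 15.0):
    print(r, round(sky_bias_mag(16.0, r, 28.5), 3))  # the bias grows quickly with size
```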
§.§ Sky-related biases when fitting deVaucouleurs profiles to E+S0sHaving determined that the PyMorph/B11 sky is to be preferred, we now consider how the choice of sky biases the inferred parameters.We begin with a study of the only case in which a direct comparison (i.e. same model fit) with SDSS is possible:fitting a deVaucouleurs profile to images of E+S0 galaxies.For morphological type, we use the Bayesian automated classifications of Huertas-Company et al. (2011):each galaxy is assigned weights which represent the probabilities that it is Elliptical, S0, Sab or Scd. We restrict to E+S0s since a deVaucouleurs profile is known to not fit other morphological types well, and we do not wish to confound the question of sky-related biases with biases arising from fitting a bad model. The top panel of Figure <ref> shows the difference between the SDSS DR9 and PyMorph estimates of the total (truncated) magnitude. The black curve shows results for the DR9 E+S0 subset while the gray curve shows the results for the larger (∼ 60×) PyMorph DR7 E+S0 sample. These curves show that SDSS is fainter, and this difference increases for the largest (left) and most luminous (right) galaxies.This is a consequence of three effects:(i) the SDSS sky is brighter, so galaxies with large angular radii tend to have their sizes reduced by a bigger factor, as a result of which less light is assigned to the galaxy; (ii) the total magnitude is computed by integrating the surface brightness profile, and our model of how the SDSS truncates this integral (equation <ref> and related discussion) may not be accurate; (iii) the SDSS and PyMorph fitting routines are systematically different.To remove the latter two effects, the yellow curve shows the difference between forcing PyMorph to use the SDSS sky values when fitting and the original PyMorph value.Since both estimates are from PyMorph DR9, effects (ii) and (iii) have been removed, so the yellow curves differ from zero entirely because of the differences in sky values (the SDSS sky is brighter).Moreover, the fact that these yellow curves agree with the previous black ones to better than 0.01 mags strongly suggests that we have modelled the SDSS truncation algorithm correctly: PyMorph_ SDSSsky,deV,trunc is a good proxy for SDSS_ deV.(As an aside, this means that the good agreement at magnitudes fainter than ∼ -22 mag in the top panels of Figures <ref> and <ref> is fortuitous, at least where the contribution from E+S0s is significant.)Figure <ref> shows that while the SDSS sky is brighter than PyMorph, the B11 sky is in excellent agreement across the entire population.The bottom panels of Figure <ref> show that if PyMorph is forced to use the B11 sky estimate rather than its own (in practice, this means PyMorph is made to fit the B11 sky-subtracted image provided on the SDSS website, while forcing its own additional sky estimate to be zero across the image), then the median difference in magnitude is negligible.Notice that the scatter around the median is less than 0.03 mags; this level of agreement is remarkable. Comparison of the top and bottom panels shows that the sky can introduce biases of order 0.1 mags or more for the most luminous objects when fitting deVaucouleurs profiles.§.§ Sky-related biases in Ser and SerExp fitsWe now consider sky-related biases when fitting other models. 
§.§ Sky-related biases in Ser and SerExp fits

We now consider sky-related biases when fitting other models. Figure <ref> shows results for PyMorph SerExp fits to all galaxies, as the restriction to E+S0s is no longer necessary. The yellow curves in the different panels show that the brighter SDSS sky biases the estimated SerExp magnitude fainter, and this bias is most severe for the largest (top left) and/or most luminous (top right) galaxies; it also biases the half-light radii and B/T values to smaller values (bottom left and right, respectively). For PyMorph SerExp fits, the biases arising from the SDSS sky are significantly larger than when fitting deVaucouleurs profiles. However, there are no such biases associated with the B11 sky values (red curves). On the other hand, although the scatter is similar to that in the bottom left panel of Figure <ref>, degeneracies between the fitted SerExp parameters and the fitted sky contribute to increased scatter at high luminosities.

Figure <ref> shows a similar analysis of Sersic rather than SerExp fits. In this case, there is only one component, so the bottom right panel shows the Sersic index n rather than B/T. Again, the SDSS sky biases the estimated Ser magnitude fainter, and this bias is most severe for the most luminous (top left) and/or largest (top right) galaxies; note that now the bias can be as large as 0.4 mags – substantially larger than when fitting deVaucouleurs profiles. The SDSS sky also biases the half-light radii and Sersic indices to smaller values (bottom left and right, respectively). While there are no such biases associated with the B11 sky values, there are hints of a small bias at the largest angular sizes and luminosities.

Since there are fewer free parameters compared to SerExp, and therefore fewer degeneracies, we would expect the scatter around the zero median to be smaller. This is indeed the case for intermediate and low luminosity galaxies, which usually have a Sersic index n < 4. A Sersic fit with higher n is more sensitive to differences in the background sky (Meert et al. 2013). Thus, the larger scatter observed at large sizes and/or luminosities is due to the fact that the most luminous galaxies usually have n ≥ 4.

The results of this subsection have an interesting connection to recent work. D'Souza et al. (2015) state that image stacking is essential for recovering unbiased estimates of the total light. Their stacks were of DR9 sky-subtracted images, meaning that they assumed the B11 sky estimate was correct. The results in the bottom halves of each panel in Figures <ref>–<ref> were based on analyses of individual images. Since no stacking was performed when fitting, the lack of bias between the full PyMorph values and those obtained when the sky is fixed to that of B11 shows that stacking is not a prerequisite for obtaining unbiased results.

In this context, it is interesting to compare the difference between PyMorph SerExp and SDSS Model magnitudes. Bernardi et al. (2017a) have already shown that the median difference is the same as what D'Souza et al.
find from their stacking analyses. But they left open the question of the scatter around the median. Since their work used SerExp magnitudes in which PyMorph also fit for the sky, it is possible that some of the scatter would be reduced if PyMorph were forced to use the B11 sky. The top panel of Figure <ref> shows that this is not the case: the differences between PyMorph SerExp and SDSS Model magnitudes when PyMorph fits its own sky and when the sky is fixed to that of B11 are very similar, not just in the median but also in the scatter around it. (Because we are comparing truncated PyMorph magnitudes to SDSS quantities, the offset from zero is due to differences in sky and fitted model only.) This strongly suggests that the scatter reflects true differences between the SerExp and Model (i.e. deV) models; it is not dominated by degeneracies arising from fitting the sky simultaneously. To remove trends which arise from morphology, the bottom panel shows a similar analysis for the subset of galaxies classified as E+S0s. While the trends differ, especially at low luminosities – where non-E+S0s begin to dominate in the top panel – it is still true that changing from PyMorph to B11 sky values makes little difference.

§ DEPENDENCE ON FITTED MODEL

Having shown the large biases associated with the SDSS sky, we now turn exclusively to PyMorph values. Recall that the PyMorph sky values are essentially the same for all fitted models (Figure <ref>), so comparisons of different PyMorph fits show how the luminosity and size depend on the functional form assumed for the surface brightness profile. Also, when comparing results from deVaucouleurs profiles we show results for E+S0s only, to avoid the issue of biases which arise from using a functional form which is known to provide a poor fit.

Figure <ref> shows that SerExp fits to E+S0s return more light than deV fits, especially at large luminosities (up to ∼ 0.2 mag); when shown as a function of SerExp luminosity, the difference is largest for the most luminous galaxies. (This analysis was done using the DR7 E+S0s, for which PyMorph reductions are available, since that sample is much larger (∼ 60×) than the subset of DR9 galaxies on which PyMorph was rerun. The result from the DR9 subset is noisier, but otherwise very similar, so we have not included a separate figure showing it.) The difference due to fitting different models is similar in amplitude to that in Figure <ref>, which was due to differences in the estimated sky. However, the dependence on the choice of regression is more dramatic here than in Figures <ref> and <ref>, because there the effects of the sky somewhat compensated for the difference in profiles. By using only PyMorph quantities here, the sky effects have been removed.

Figure <ref> shows a similar comparison, but now between SerExp and Ser fits to DR7 E+S0s.
Clearly, SerExp is about 0.1 mags fainter and 10% smaller across the E+S0 population. Finally, Figure <ref> compares SerExp and Ser fits to the full DR7 population. At high luminosities, this figure is very similar to the previous one, because most high luminosity galaxies are E+S0s. However, there are small differences at low luminosities. These indicate that the Sersic luminosities and sizes of non-E+S0s must be fainter and smaller than the corresponding SerExp values.

The differences between the cyan and magenta curves in Figures <ref> and <ref> at the high luminosity end strongly suggest that the most luminous galaxies have different surface brightness profiles from the bulk of the population. This can be understood as follows. Suppose we have two populations, both of which span the same range of deV. Assume that, for one, Ser = deV, but that Ser = deV - m for the other (i.e. its Ser magnitude is m mags brighter). Let f denote the fraction of objects in this second population. Then a plot of Δ M ≡ deV - Ser as a function of deV will look like two horizontal lines, one lying m mags above the other. The average of Δ M as a function of deV will equal fm. However, when shown as a function of Ser, the second population will be displaced brightwards along the x-axis. As a result, where the two populations overlap, the mean Δ M will still be fm, but at the brightest Ser the mean will be m. If the second population only spans a limited range of deV, then the average as a function of deV will show curvature, and may not even be monotonic, whereas the average as a function of Ser may still be monotonic. Alternatively, suppose that, when plotted as a function of Ser, Δ M consists of two overlapping populations; when plotted as a function of deV, the population with larger Δ M is displaced faintwards, so the relation acquires a tilt. Again, the plot versus Ser will be monotonic, whereas that versus deV need not be.

In practice, this mix of populations means that if one wishes to use the mean of Δ M as a measure of the difference between deV and Ser magnitudes, then one must specify which variable was being held fixed (we made a similar point in the context of Figures <ref> and <ref>). The toy simulation below makes this asymmetry explicit.
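The following tiny simulation of the two-population toy model shows the effect directly; the population fraction, offset, and magnitude range are arbitrary choices of ours for illustration.

import numpy as np

rng = np.random.default_rng(0)
N, f, m = 100000, 0.2, 0.3          # sample size, fraction in pop. 2, offset (mag)

deV = rng.uniform(-24.0, -20.0, N)  # both populations span the same deV range
pop2 = rng.random(N) < f
Ser = deV - np.where(pop2, m, 0.0)  # pop. 2 is m mags brighter in Ser
dM = deV - Ser

for x, name in [(deV, "deV"), (Ser, "Ser")]:
    edges = np.linspace(x.min(), x.max(), 9)
    idx = np.digitize(x, edges)
    means = [dM[idx == b].mean() for b in range(1, len(edges)) if (idx == b).any()]
    print(name, np.round(means, 3))

The mean of Δ M is flat (≈ fm) when binned in deV, but rises towards m in the brightest Ser bins, which is exactly the curvature discussed above.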
§ CONCLUSIONS

In both SDSS DR7 and DR9, PyMorph returns brighter estimates of the total light of a galaxy than either SDSS Model or cModel magnitudes (Figures <ref> and <ref>). While the SDSS values have changed slightly between DR7 and DR9, the PyMorph fits to the DR7 release provided by Meert et al. (2015, 2016) remain accurate for DR9 as well (Figures <ref> and <ref>).

Some of the difference with respect to the SDSS arises from the fact that the SDSS value for the total brightness comes from truncating the integral over the surface brightness profile (Figures <ref>, <ref>, and <ref>). We believe we understand the truncation algorithm (Figure <ref> and related discussion), and so in all our subsequent comparisons with the SDSS we have truncated the PyMorph values using a similar algorithm (equation <ref>), so that truncation plays no further role in the PyMorph-SDSS differences.

The sky estimated by PyMorph is almost completely independent of the model used to fit the galaxy (Figure <ref>). The PyMorph sky estimates are fainter than those of the SDSS DR7 or DR9 pipelines (Figure <ref>), but are in excellent agreement with the estimates of B11 (Figure <ref>). The difference in sky accounts for about half of the discrepancy shown in Figures <ref> and <ref>. In addition, there is an overall offset of about 0.07 mags which comes from the fact that the SDSS value for the total brightness comes from truncating the integral over the surface brightness profile (Figure <ref>). The remainder arises from fitting different models.

Use of the SDSS sky biases luminosities and half-light radii to lower values; in the main SDSS galaxy sample these biases are significant (a few tenths of a magnitude) at large luminosities: they matter not just for nearby galaxies. The biases become even worse when the model is allowed more freedom to fit the surface brightness profile (Figures <ref>–<ref>). When PyMorph sky values are used, the SerExp fits to E+S0s return more light than deV fits, especially at large luminosities (up to ∼ 0.2 mag), but less light than Ser fits (Figure <ref>). For non-E+S0s, which dominate towards lower luminosities, Sersic luminosities and sizes are slightly fainter and smaller than SerExp (Figure <ref>).

Our findings show that, especially at large luminosities, SDSS pipeline values should not be used: PyMorph estimates are much more reliable. Of these, Meert et al. (2013) and Bernardi et al. (2014) have already shown that the SerExp values are to be preferred. The PyMorph SerExp values are also consistent with results obtained via the stacking analysis of D'Souza et al. (2015) (Figure <ref>; see also Figure 2 in Bernardi et al. 2017a). This is reassuring because the two analyses are very different. However, this does raise the question of why SerExp is better than SDSS pipeline photometry. For example, since the largest discrepancies occur at high luminosities, and the most luminous galaxies are preferentially found in clusters, is it possible that the SerExp fits are different because the second component is actually fitting intracluster light? Bernardi et al. (2017b) show that, for the vast majority of massive galaxies, this is almost certainly not the main reason for the difference.

The assembly history of a galaxy is expected to leave an imprint on its surface brightness profile. Indeed, we find significant evidence that the surface brightness profiles of the most luminous galaxies mark them out as a distinct population (Figures <ref>, <ref>, <ref> and <ref>). Therefore, we hope our results will inform studies of the assembly histories of the most massive galaxies.

§.§ Acknowledgements

We thank R.K. Sheth and V. Vikram for helpful discussions, and the referee for a helpful and competent report.

Aihara et al., 2011, ApJS, 193, 29
Abazajian et al., 2009, ApJS, 182, 543
Ahn et al., 2012, ApJS, 203, 21
Bernardi M., Hyde J. B., Sheth R. K., Miller C. J., Nichol R. C., 2007, AJ, 133, 1741
Bernardi M., Roche N., Shankar F., Sheth R. K., 2011, MNRAS, 412, L6
Bernardi M., Meert A., Sheth R. K., Vikram V., Huertas-Company M., Mei S., Shankar F., 2013, MNRAS, 436, 697
Bernardi M., Meert A., Vikram V., Huertas-Company M., Mei S., Shankar F., Sheth R. K., 2014, MNRAS, 443, 874
Bernardi M., Meert A., Sheth R. K., Huertas-Company M., Maraston C., Shankar F., Vikram V., 2016a, MNRAS, 455, 4122
Bernardi M., Meert A., Sheth R. K., Fischer J.-L., Huertas-Company M., Maraston C., Shankar F., Vikram V., 2017a, MNRAS, in press (arXiv:1604.01036)
Bernardi M., Fischer J.-L., Sheth R. K., Meert A., Huertas-Company M., Shankar F., 2017b, MNRAS, submitted
Blanton M. R., Kazin E., Muna D., Weaver B. A., Price-Whelan A., 2011, AJ, 142, 31 (B11)
Cooray A., Sheth R. K., 2002, Phys. Rep., 372, 1
D'Souza R., Vegetti S., Kauffmann G. A. M., 2015, MNRAS, 454, 4027
Huertas-Company M., Aguerri J. A. L., Bernardi M., Mei S., Sánchez Almeida J., 2011, A&A, 525, 157
Kelvin L. S., et al., 2012, MNRAS, 421, 1007
Meert A., Vikram V., Bernardi M., 2013, MNRAS, 433, 1344
Meert A., Vikram V., Bernardi M., 2015, MNRAS, 446, 3943
Meert A., Vikram V., Bernardi M., 2016, MNRAS, 455, 2440
Shankar F., et al., 2014, ApJL, 797, L27
Simard L., Mendel J. T., Patton D. R., Ellison S. L., McConnachie A. W., 2011, ApJS, 196, 11
Vikram V., Wadadekar Y., Kembhavi A. K., Vijayagovindan G. V., 2010, MNRAS, 409, 1379
http://arxiv.org/abs/1702.08526v1
{ "authors": [ "J. -L. Fischer", "M. Bernardi", "A. Meert" ], "categories": [ "astro-ph.GA", "astro-ph.CO" ], "primary_category": "astro-ph.GA", "published": "20170227210012", "title": "Comparing PyMorph and SDSS photometry. I. Background sky and model fitting effects" }
Nonparanormal Information Estimation

Shashank Singh (sss1@andrew.cmu.edu), Carnegie Mellon University, 5000 Forbes Ave., Pittsburgh, PA 15213 USA
Barnabás Póczos (bapoczos@cs.cmu.edu), Carnegie Mellon University, 5000 Forbes Ave., Pittsburgh, PA 15213 USA

Keywords: mutual information, entropy, nonparanormal, Gaussian copula, Spearman, Kendall

We study the problem of using i.i.d. samples from an unknown multivariate probability distribution p to estimate the mutual information of p. This problem has recently received attention in two settings: (1) where p is assumed to be Gaussian and (2) where p is assumed only to lie in a large nonparametric smoothness class. Estimators proposed for the Gaussian case converge in high dimensions when the Gaussian assumption holds, but are brittle, failing dramatically when p is not Gaussian. Estimators proposed for the nonparametric case fail to converge with realistic sample sizes except in very low dimensions. As a result, there is a lack of robust mutual information estimators for many realistic data. To address this, we propose estimators for mutual information when p is assumed to be a nonparanormal (a.k.a., Gaussian copula) model, a semiparametric compromise between the Gaussian and nonparametric extremes. Using theoretical bounds and experiments, we show these estimators strike a practical balance between robustness and scaling with dimensionality.

§ INTRODUCTION

This paper is concerned with the problem of estimating the entropy or mutual information of an unknown probability density p over ℝ^D, given n i.i.d. samples from p. Entropy and mutual information are fundamental information theoretic quantities, and consistent estimators for these quantities have a host of applications within machine learning, statistics, and signal processing. For example, entropy estimators have been used for goodness-of-fit testing <cit.>, parameter estimation in semi-parametric models <cit.>, texture classification and image registration <cit.>, change point detection <cit.>, and anomaly detection in networks <cit.>. Mutual information is a popular nonparametric measure of dependence, whose estimators have been used in feature selection <cit.>, clustering <cit.>, learning graphical models <cit.>, fMRI data processing <cit.>, prediction of protein structures <cit.>, boosting and facial expression recognition <cit.>, and fitting deep nonlinear models <cit.>. Estimators for both entropy and mutual information have been used in independent component and subspace analysis <cit.>.

Motivated by these and other applications, several very recent lines of work (discussed in Section <ref>) have studied information estimation,[We will collectively call the closely related problems of entropy and mutual information estimation information estimation.] focusing largely on two settings:

* Gaussian Setting: If p is known to be Gaussian, there exist information estimators with mean squared error (MSE) at most -2 log(1 - D/n) and an (almost matching) minimax lower bound of 2D/n <cit.>.

* Nonparametric Setting: If p is assumed to lie in a nonparametric smoothness class, such as an s-order[Here, s encodes the degree of smoothness, roughly corresponding to the number of continuous derivatives of p.] Hölder or Sobolev class, then the minimax MSE is of asymptotic order ≍ max{n^-1, n^-8s/(4s + D)} <cit.>.

In the Gaussian setting, consistent estimation is tractable even in the high-dimensional case where D increases fairly quickly with n, as long as D/n → 0.
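To see how differently these two rates scale, the following back-of-the-envelope computation compares the Gaussian MSE bound with the nonparametric minimax rate for a few dimensions (the smoothness s = 1 and the sample size are arbitrary choices of ours for illustration):

import numpy as np

n, s = 10000, 1.0
for D in (2, 10, 25, 100):
    gauss = -2.0 * np.log1p(-D / n)            # ~ 2D/n for D << n
    nonpar = max(1.0 / n, n ** (-8 * s / (4 * s + D)))
    print(f"D={D:4d}: Gaussian MSE bound {gauss:.2e}, nonparametric rate {nonpar:.2e}")

Already at D = 25, the Gaussian bound is of order 5 × 10^-3 while the nonparametric rate is still of order 10^-1; this is the gap the nonparanormal model is meant to bridge.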
However, optimal estimators for the Gaussian setting rely heavily on the assumption of joint Gaussianity, and their performance can degrade quickly when the data deviate from Gaussian. Especially in high dimensions, it is unlikely that data are jointly Gaussian, making these estimators brittle in practice. In the nonparametric setting, the theoretical convergence rate decays exponentially with D, and it has been found empirically that information estimators for this setting fail to converge at realistic sample sizes in all but very low dimensions. Also, most nonparametric estimators are sensitive to the tuning of bandwidth parameters, which is challenging for information estimation, since no empirical error estimate is available for cross-validation.

Given these factors, though the Gaussian and nonparametric cases are fairly well understood in theory, there remains a lack of practical information estimators for the common case where data are neither exactly Gaussian nor very low dimensional. The main goal of this paper is to fill the gap between these two extreme settings by studying information estimation in a semiparametric compromise between the two, known as the "nonparanormal" (a.k.a. "Gaussian copula") model (see Definition <ref>). The nonparanormal model, analogous to the additive model popular in regression <cit.>, limits the complexity of interactions among variables but makes minimal assumptions on the marginal distribution of each variable. The result scales better with dimension than nonparametric models, while being more robust than Gaussian models.

Paper Organization: Section <ref> gives definitions and notation to formalize the nonparanormal information estimation problem. Section <ref> discusses the history of the nonparanormal model and prior work on information estimation, motivating our contributions. Section <ref> proposes three estimators, while Section <ref> presents our theoretical error bounds, proven in the Appendix. Section <ref> provides simulation results. While most of the paper discusses mutual information estimation, Section <ref> discusses additional considerations arising in entropy estimation. Section <ref> presents some concluding thoughts and avenues for future work.

§ PROBLEM STATEMENT AND NOTATION

There are a number of distinct generalizations of mutual information to more than two variables. The definition we consider is simply the difference between the sum of marginal entropies and the joint entropy:

Definition (Multivariate mutual information): Let X_1,…,X_D be ℝ-valued random variables with a joint probability density p : ℝ^D → [0, ∞) and marginal densities p_1,...,p_D : ℝ → [0, ∞). The multivariate mutual information I(X) of X = (X_1,…,X_D) is defined by

I(X) := E_X ∼ p[ log( p(X) / ∏_j=1^D p_j(X_j) ) ] = ∑_j=1^D H(X_j) - H(X),

where H(X) = -E_X ∼ p[log p(X)] denotes the entropy of X.

This notion of multivariate mutual information, originally due to <cit.> (who called it "total correlation"), measures total dependency, or redundancy, within a set of D random variables. It has also been called the "multivariate constraint" <cit.> and "multi-information" <cit.>. Many related information theoretic quantities can be expressed in terms of I(X), and can thus be estimated using estimators of I(X).
Examples include the pairwise mutual information I(X,Y) = I((X,Y)) - I(X) - I(Y), which measures dependence between (potentially multivariate) random variables X and Y, the conditional mutual information

I(X|Z) = I((X,Z)) - ∑_j=1^D I((X_j,Z)),

which is useful for characterizing how much of the dependence within X can be explained by a latent variable Z <cit.>, and the transfer entropy (a.k.a. directed information) T_X → Y, which measures the predictive power of one time series X on the future of another time series Y. I(X) is also related to entropy via Eq. (<ref>), but, unlike the above quantities, this relationship depends on the marginal distributions of X, and hence involves some additional considerations, as discussed in Section <ref>.

We now define the class of nonparanormal distributions, from which we assume our data are drawn.

Definition (Nonparanormal distribution, a.k.a. Gaussian copula model): A random vector X = (X_1,…,X_D)^T is said to have a nonparanormal distribution (denoted X ∼ 𝒩𝒫𝒩(Σ; f)) if there exist functions {f_j}_j=1^D such that each f_j : ℝ → ℝ is a diffeomorphism[A diffeomorphism is a continuously differentiable bijection g : ℝ → R ⊆ ℝ such that g^-1 is continuously differentiable.] and f(X) ∼ 𝒩(0, Σ), for some (strictly) positive definite Σ ∈ ℝ^D × D with 1's on the diagonal (i.e., each σ_j = Σ_j,j = 1).[Setting E[f(X)] = 0 and each σ_j = 1 ensures model identifiability, but does not reduce the model space, since these parameters can be absorbed into the marginal transformation f.] Σ is called the latent covariance of X and f is called the marginal transformation of X.

The nonparanormal family relaxes many constraints of the Gaussian family. Nonparanormal distributions can be multi-modal or heavy-tailed, can encode noisy nonlinear dependencies amongst variables, and need not be supported on all of ℝ^D. The assumptions made by a nonparanormal model on the marginals are minimal; any desired continuously differentiable marginal cumulative distribution function (CDF) F_i of the variable X_i corresponds to the marginal transformation f_i(x) = Φ^-1(F_i(x)) (where Φ is the standard normal CDF). As examples, consider the 2-dimensional case with X_1 ∼ 𝒩(0,1), Z an independent Gaussian variable, and X_2 = T(X_1 + Z); then (X_1, X_2) is completely captured by a Gaussian copula when T(x) = x^3, T = tanh, T = Φ, or T is any other diffeomorphism. On the other hand, the limits of the Gaussian copula appear, for example, when T(x) = x^2, which is not bijective; then, if E[Z] = 0, the Gaussian copula approximation of (X_1,X_2) will model X_1 and X_2 as independent.

We are now ready to formally state our problem:

Formal Problem Statement: Given n i.i.d. samples X_1,...,X_n ∼ 𝒩𝒫𝒩(Σ;f), where Σ and f are both unknown, we would like to estimate I(X).

Other notation: D denotes the dimension of the data (i.e., Σ ∈ ℝ^D × D and f : ℝ^D → ℝ^D). For a positive integer k, [k] = {1,...,k} denotes the set of positive integers at most k. For consistency, where possible, we use i ∈ [n] to index samples and j ∈ [D] to index dimensions (so that, e.g., X_i,j denotes the j^th dimension of the i^th sample). Given a data matrix X ∈ ℝ^n × D, our estimators depend on the empirical rank matrix R ∈ [n]^n × D with

R_i,j := ∑_k=1^n 1_{X_i,j ≥ X_k,j}.

For a square matrix A ∈ ℝ^k × k, |A| denotes the determinant of A, A^T denotes the transpose of A, and

‖A‖_2 := max_{x ∈ ℝ^k, ‖x‖_2 = 1} ‖Ax‖_2 and ‖A‖_F := √(∑_{i,j ∈ [k]} A_i,j^2)

denote the spectral and Frobenius norms of A, respectively.
When A is symmetric, λ_1(A) ≥ λ_2(A) ≥ ⋯ ≥ λ_D(A) denote its eigenvalues.

§ RELATED WORK AND OUR CONTRIBUTIONS

§.§ The Nonparanormal

Nonparanormal models have been used for modeling dependencies among high-dimensional data in a number of fields, such as graphical modeling of gene expression data <cit.>, of neural data <cit.>, and of financial time series <cit.>, extreme value analysis in hydrology <cit.>, and informative data compression <cit.>.

Besides being more robust generalizations of Gaussians, nonparanormal distributions are also theoretically motivated in certain contexts. For example, the output Z of a neuron is often modeled by feeding a weighted linear combination Y = ∑_k=1^N w_k X_k of inputs into a nonlinear transformation Z = f(Y). When the components of X are independent, the central limit theorem suggests Y is approximately normally distributed, and hence Z is approximately nonparanormally distributed <cit.>.

With one recent exception <cit.>, previous information estimators for the nonparanormal case <cit.> rely on fully nonparametric information estimators as subroutines, and hence suffer strongly from the curse of dimensionality. Very recently, <cit.> proposed what we believe is the first mutual information estimator tailored specifically to the nonparanormal case; their estimator is equivalent to one of the estimators (I_G, described in Section <ref>) we study. However, they focused on its applications to neuroimaging data analysis, and did not study its performance theoretically or empirically.

§.§ Information Estimation

Our motivation for studying the nonparanormal family comes from trying to bridge two recent approaches to information estimation. The first has studied fully nonparametric entropy estimation, assuming only that data are drawn from a smooth probability density p, where smoothness is typically quantified by a Hölder or Sobolev exponent s ∈ (0, ∞), roughly corresponding to s continuous derivatives. In this setting, the minimax optimal MSE rate has been shown by <cit.> to be O(max{n^-1, n^-8s/(4s + D)}). This rate slows exponentially with the dimension D, and, while many estimators have been proposed <cit.> for this setting, their practical use is limited to a few dimensions.["Few" depends on s and n, but <cit.> suggest nonparametric estimators should only be used with D at most 4-6. <cit.> tried using several nonparametric information estimators on the Communities and Crime UCI data set (n = 2195, D = 10), but found all too unstable to be useful.]

The second area is in the setting where data are assumed to be drawn from a truly Gaussian distribution. Here the high-dimensional case is far more optimistic. While this case had been studied previously <cit.>, <cit.> recently provided a precise finite-sample analysis based on deriving the exact probability law of the log-determinant log|Σ̂| of the scatter matrix Σ̂. From this, they derived a deterministic bias correction, giving an estimator for which they prove an MSE upper bound of -2 log(1 - D/n) and a high-dimensional central limit theorem for the case D → ∞ as n → ∞ (but D < n).

<cit.> also prove a minimax lower bound of 2D/n on MSE, with several interesting consequences. First, consistent information estimation is possible only if D/n → 0. Second, since, for small x, -log(1 - x) ≈ x, this lower bound essentially matches the above upper bound when D/n is small. Third, they show this lower bound holds even when restricted to diagonal covariance matrices.
Since the upper bound for the general case and the lower bound for the diagonal case essentially match, it follows that Gaussian information estimation is not made easier by structural assumptions such as Σ being bandable, sparse, or Toeplitz, as is common in, for example, stationary Gaussian process models <cit.>.

This 2D/n lower bound extends to our more general nonparanormal setting. However, we provide a minimax lower bound suggesting that the nonparanormal setting is strictly harder, in that optimal rates depend on Σ. Our results imply that nonparanormal information estimation does become easier if Σ is assumed to be bandable or Toeplitz.

A closely related point is that known convergence rates for the fully nonparametric case require the density p to be bounded away from 0 or to have particular tail behavior, due to the singularity of the logarithm near 0 and the resulting sensitivity of Shannon information-theoretic functionals to regions of low but non-zero probability. In contrast, <cit.> need no lower-bound-type assumptions in the Gaussian case. In the nonparanormal case, we show some such condition is needed to prove a uniform rate, but a weaker condition, a positive lower bound on λ_D(Σ), suffices.

The main contributions of this paper are the following:

* We propose three estimators, Î_G, Î_ρ, and Î_τ,[<cit.> proposed Î_G for use in neuroimaging data analysis. To the best of our knowledge, Î_ρ and Î_τ are novel.] for the mutual information of a nonparanormal distribution.

* We prove upper bounds, of order O(D^2/(λ_D^2(Σ)n)), on the mean squared error of Î_ρ, providing the first upper bounds for a nonparanormal information estimator. This bound suggests nonparanormal estimators scale far better with D than nonparametric estimators.

* We prove a minimax lower bound suggesting that, unlike in the Gaussian case, the difficulty of nonparanormal information estimation depends on the true Σ.

* We give simulations comparing our proposed estimators to Gaussian and nonparametric estimators. Besides confirming and augmenting our theoretical predictions, these help characterize the settings in which each nonparanormal estimator works best.

* We present entropy estimators based on Î_G, Î_ρ, and Î_τ. Though nonparanormal entropy estimation requires somewhat different assumptions from mutual information estimation, we show that entropy can also be estimated at the rate O(D^2/(λ_D^2(Σ)n)).

§ NONPARANORMAL INFORMATION ESTIMATORS

In this section, we present three different estimators, I_G, I_ρ, and I_τ, for the mutual information of a nonparanormal distribution. We begin with a lemma providing common motivation for all three estimators. Since mutual information is invariant to diffeomorphisms of individual variables, it is easy to see that the mutual information of a nonparanormal random variable is the same as that of the latent Gaussian random variable. Specifically:

Lemma (Nonparanormal mutual information): Suppose X ∼ 𝒩𝒫𝒩(Σ; f). Then,

I(X) = -1/2 log|Σ|.

Lemma <ref> shows that the mutual information of a nonparanormal random variable depends only on the latent covariance Σ; the marginal transformations are nuisance parameters, allowing us to avoid difficult nonparametric estimation. The estimators we propose all plug different estimates of Σ into Eq. (<ref>), after a regularization step described in Section <ref>.

§.§ Estimating Σ by Gaussianization

The first estimator Σ̂_G of Σ proceeds in two steps. First, the data are transformed to have approximately standard normal marginal distributions, a process <cit.> referred to as "Gaussianization".
By the nonparanormal assumption, the Gaussianized data are approximately jointly Gaussian. Then, the latent covariance matrix is estimated by the empirical covariance of the Gaussianized data.

More specifically, letting Φ^-1 denote the quantile function of the standard normal distribution and recalling the rank matrix R defined in (<ref>), the Gaussianized data

X̃_i,j := Φ^-1( R_i,j/(n + 1) )  (for i ∈ [n], j ∈ [D])

are obtained by transforming the empirical CDF of each dimension to approximate Φ. Then, we estimate Σ by the empirical covariance Σ̂_G := 1/n ∑_i=1^n X̃_i X̃_i^T.

§.§ Estimating Σ by rank correlation

The second estimator actually has two variants, I_ρ and I_τ, respectively based on relating the latent covariance to two classic rank-based dependence measures, Spearman's ρ and Kendall's τ. For two random variables X and Y with CDFs F_X, F_Y : ℝ → [0, 1], ρ and τ are defined by

ρ(X, Y) := Corr(F_X(X), F_Y(Y)) and τ(X, Y) := Corr(sign(X - X'), sign(Y - Y')),

respectively, where

Corr(X, Y) = E[(X - E[X])(Y - E[Y])] / √(Var[X] Var[Y])

denotes the standard Pearson correlation operator and (X',Y') is an i.i.d. copy of (X,Y). ρ and τ generalize to the D-dimensional setting in the form of rank correlation matrices ρ, τ ∈ [-1,1]^D × D with ρ_j,k = ρ(X_j, X_k) and τ_j,k = τ(X_j, X_k) for each j, k ∈ [D].

I_ρ and I_τ are based on a classical result relating the correlation and rank correlation of a bivariate Gaussian:

Theorem <cit.>: Suppose (X,Y) has a Gaussian joint distribution with covariance Σ. Then,

Corr(X, Y) = 2 sin( π/6 ρ(X, Y) ) = sin( π/2 τ(X, Y) ).

ρ and τ are often preferred to the Pearson correlation for their relative robustness to outliers and applicability to non-numerical ordinal data. While these are strengths here as well, the main reason for their relevance is that they are invariant to marginal transformations (i.e., for diffeomorphisms f, g : ℝ → ℝ, ρ(f(X), g(Y)) = ±ρ(X, Y) and τ(f(X), g(Y)) = ±τ(X,Y)). As a consequence, the identity provided in Theorem <ref> extends unchanged to the case (X,Y) ∼ 𝒩𝒫𝒩(Σ;f). This suggests an estimate for Σ based on estimating ρ or τ and plugging this element-wise into the transform x ↦ 2sin(π/6 x) or x ↦ sin(π/2 x), respectively. Specifically, Σ̂_ρ is defined by

Σ̂_ρ := 2 sin( π/6 ρ̂ ), where ρ̂ = Corr(R)

is the empirical correlation of the rank matrix R, and the sine is applied element-wise. Similarly, Σ̂_τ := sin( π/2 τ̂ ), where

τ̂_j,k := 2/(n(n-1)) ∑_{i < ℓ ∈ [n]} sign(X_i,j - X_ℓ,j) sign(X_i,k - X_ℓ,k).

§.§ Regularization and estimating I

Unfortunately, unlike usual empirical correlation matrices, none of Σ̂_G, Σ̂_ρ, or Σ̂_τ is almost surely strictly positive definite. As a result, directly plugging into the mutual information functional (<ref>) may give ∞ or may even be undefined.

To correct for this, we propose a regularization step, in which we project each estimated latent covariance matrix onto the (closed) cone 𝒮(z) of symmetric matrices with minimum eigenvalue at least z > 0. Specifically, for any z > 0, let

𝒮(z) := { A ∈ ℝ^D × D : A = A^T, λ_D(A) ≥ z }.

For any symmetric matrix A ∈ ℝ^D × D with eigendecomposition A = Q Λ Q^T (i.e., QQ^T = Q^TQ = I_D and Λ is diagonal), the projection A_z of A onto 𝒮(z) is defined as A_z := Q Λ_z Q^T, where Λ_z is the diagonal matrix with j^th diagonal entry (Λ_z)_j,j = max{z, Λ_j,j}. We call this a "projection" because A_z is precisely the Frobenius norm projection of A onto 𝒮(z) (see, e.g., <cit.>):

A_z = argmin_{B ∈ 𝒮(z)} ‖A - B‖_F.

Applying this regularization to Σ̂_G, Σ̂_ρ, or Σ̂_τ gives a strictly positive definite estimate Σ̂_G,z, Σ̂_ρ,z, or Σ̂_τ,z, respectively, of Σ.
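A minimal numerical sketch of these three latent-correlation estimates and of the eigenvalue-clipping projection is given below (Python with NumPy/SciPy rather than the authors' MATLAB; the function names are ours):

import numpy as np
from scipy.stats import norm, rankdata, kendalltau

def latent_cov_gauss(X):
    n, D = X.shape
    R = np.apply_along_axis(rankdata, 0, X)      # ranks 1..n per column
    Z = norm.ppf(R / (n + 1.0))                  # Gaussianized data
    return Z.T @ Z / n

def latent_cov_spearman(X):
    R = np.apply_along_axis(rankdata, 0, X)
    rho = np.corrcoef(R, rowvar=False)           # Spearman's rho matrix
    return 2.0 * np.sin(np.pi / 6.0 * rho)

def latent_cov_kendall(X):
    D = X.shape[1]
    tau = np.eye(D)
    for j in range(D):
        for k in range(j + 1, D):
            tau[j, k] = tau[k, j] = kendalltau(X[:, j], X[:, k])[0]
    return np.sin(np.pi / 2.0 * tau)

def project_psd(A, z):
    # Frobenius projection onto S(z): clip eigenvalues from below at z.
    lam, Q = np.linalg.eigh(A)
    return (Q * np.maximum(lam, z)) @ Q.T

Each returned matrix can then be passed through project_psd before taking the log-determinant.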
We can then estimate I by plugging this into Equation (<ref>), giving our three estimators:

Î_G,z := -1/2 log|Σ̂_G,z|,  Î_ρ,z := -1/2 log|Σ̂_ρ,z|  and  Î_τ,z := -1/2 log|Σ̂_τ,z|.

§ UPPER BOUNDS ON THE ERROR OF Î_ρ,z

Here, we provide finite-sample upper bounds on the error of the estimator Î_ρ,z based on Spearman's ρ. Proofs are given in the Appendix. We first bound the bias of the estimator:

Theorem: Suppose X_1,...,X_n i.i.d. ∼ 𝒩𝒫𝒩(Σ;f). Then, there exists a constant C > 0 such that, for any z > 0, the bias of Î_ρ,z is at most

| E[Î_ρ,z] - I | ≤ C ( D/(z√n) + log(|Σ_z|/|Σ|) ),

where Σ_z is the projection of Σ onto 𝒮(z).

The first term of the bias stems from the nonlinearity of the log-determinant function in Equation (<ref>), which we analyze via Taylor expansion. The second term,

log(|Σ_z|/|Σ|) = ∑_{λ_j(Σ) < z} log( z/λ_j(Σ) ),

is due to the regularization step, and is difficult to simplify or bound without further assumptions on the spectrum of Σ and a choice of z, which we discuss later.

We now turn to bounding the variance of Î_ρ,z. We first provide an exponential concentration inequality for Î_ρ,z around its expectation, based on McDiarmid's inequality:

Theorem: Suppose X_1,...,X_n i.i.d. ∼ 𝒩𝒫𝒩(Σ;f). Then, for any z, ε > 0,

P[ | Î_ρ,z - E[Î_ρ,z] | > ε ] ≤ 2 exp( -n z^2 ε^2/(18 π^2 D^2) ).

Such exponential concentration bounds are useful when one wants to simultaneously bound the error of multiple uses of an estimator, and hence we present this bound separately, as it may be independently useful. However, for the purpose of understanding convergence rates, we are more interested in the variance bound that follows as an easy corollary:

Corollary: Suppose X_1,...,X_n i.i.d. ∼ 𝒩𝒫𝒩(Σ;f). Then, for any z > 0, the variance of Î_ρ,z is at most

Var[Î_ρ,z] ≤ 36 π^2 D^2/(z^2 n).

Given these bias and variance bounds, a bound on the MSE of Î_ρ,z follows via the usual bias-variance decomposition:

Theorem: Suppose X ∼ 𝒩𝒫𝒩(Σ;f). Then, there exists a constant C such that

E[ ( Î_ρ,z - I )^2 ] ≤ C ( D^2/(z^2 n) + log^2(|Σ_z|/|Σ|) ).

A natural question is now how to optimally select the regularization parameter z. While the bound (<ref>) is clearly convex in z, it depends crucially on the unknown spectrum of Σ, and, in particular, on the smallest eigenvalues of Σ. As a result, it is difficult to choose z optimally in general, but we can do so for certain common subclasses of covariance matrices. For example, if Σ is Toeplitz or bandable (i.e., for some c ∈ (0,1), all |Σ_i,j| ≤ c^|i - j|), then the smallest eigenvalue of Σ can be bounded below <cit.>. When Σ is bandable, as we show in the Appendix, this bound can be independent of D. In these cases, the following somewhat simpler MSE bound can be used:

Corollary: Suppose X ∼ 𝒩𝒫𝒩(Σ;f), and suppose z ≤ λ_D(Σ). Then, there exists a constant C > 0 such that

E[ ( Î_ρ,z - I )^2 ] ≤ C D^2/(z^2 n).

§ LOWER BOUNDS IN TERMS OF Σ

When the data X_1,...,X_n i.i.d. ∼ 𝒩(0,Σ) are truly Gaussian, using the plug-in estimator Î = -1/2 log|Σ̂| (where Σ̂ = 1/n ∑_i=1^n X_i X_i^T is the empirical covariance matrix), <cit.> showed that the distribution of Î - I is independent of the true correlation matrix Σ. This follows from the "stability" of Gaussians (i.e., that nonsingular linear transformations of Gaussian random variables are Gaussian). In particular,

Î - I = -1/2 ( log|Σ̂| - log|Σ| ) = -1/2 log|Σ^-1/2 Σ̂ Σ^-1/2|,

and log|Σ^-1/2 Σ̂ Σ^-1/2| has the same distribution as log|Σ̂| does in the special case that Σ = I_D is the identity. This property is both somewhat surprising, given that I → ∞ as |Σ| → 0, and useful, leading to a tight analysis of the error of Î and to confidence intervals that do not depend on Σ.
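This distribution-free property is easy to check numerically; the following quick sketch (the dimensions, sample size, and AR(1)-style test matrix are arbitrary choices of ours) compares the spread of the plug-in error under two very different correlation matrices:

import numpy as np

rng = np.random.default_rng(1)
n, D, trials = 200, 5, 2000

def plugin_err_spread(Sigma):
    errs = []
    for _ in range(trials):
        X = rng.multivariate_normal(np.zeros(D), Sigma, size=n)
        S = X.T @ X / n
        errs.append(np.linalg.slogdet(S)[1] - np.linalg.slogdet(Sigma)[1])
    return np.std(errs)

idx = np.arange(D)
A = 0.8 ** np.abs(np.subtract.outer(idx, idx))   # strongly correlated matrix
print(plugin_err_spread(np.eye(D)), plugin_err_spread(A))

Both calls should print nearly the same standard deviation, reflecting the invariance; the theorem below shows that rank-based estimators cannot enjoy the same property.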
It would be convenient if nonparanormal information estimators satisfied this property. Unfortunately, the main result of this section is a negative one, showing that this property is unlikely to hold without additional assumptions:

Theorem: Consider the 2-dimensional case X_1,...,X_n i.i.d. ∼ 𝒩(0,Σ), with

Σ = ( 1  σ; σ  1 ),

and let σ_* ∈ (0,1). Suppose an estimator Î = Î(R) of I_σ = -1/2 log(1 - σ^2) is a function of the empirical rank matrix R ∈ [n]^n × 2 of X. Then, there exists a constant C > 0, depending only on n, such that the worst-case MSE of Î over σ ∈ (0,σ_*) satisfies

sup_{σ ∈ (0,σ_*)} E[ ( Î(R) - I_σ )^2 ] ≥ (1/64) ( C - log(1 - σ_*^2) )^2.

Clearly, this lower bound tends to ∞ as σ_* → 1. As written, this result lower bounds the error of rank-based estimators in the Gaussian case when σ ≈ 1. However, to the best of our knowledge, all methods for estimating Σ in the nonparanormal case are functions of R, and prior work <cit.> has shown that the rank matrix R is a generalized sufficient statistic for Σ (and hence for I) in the nonparanormal model. Thus, it is reasonable to think of lower bounds for rank-based estimators in the Gaussian case as lower bounds for any estimator in the nonparanormal case.

The proof of this result is based on the simple observation that the rank matrix can take only finitely many values. Hence, as σ → 1, R tends to be perfectly correlated, providing little information about σ, whereas the dependence of the estimand I_σ on σ increases sharply. This intuition is formalized in the Appendix using Le Cam's lemma for lower bounds in two-point parameter estimation problems.

§ EMPIRICAL RESULTS

We compare 5 mutual information estimators:

* Î: Gaussian plug-in estimator with bias correction (see <cit.>).
* Î_G: Nonparanormal estimator using Gaussianization.
* Î_ρ: Nonparanormal estimator using Spearman's ρ.
* Î_τ: Nonparanormal estimator using Kendall's τ.
* Î_kNN: Nonparametric estimator using k-nearest neighbor (kNN) statistics.
Thus, except in Experiment 2, where we show the effects of transforming marginals, and Experiment 3, where we add outliers to the data, we perform all experiments on truly Gaussian data, with the understanding that this setting favors the Gaussian estimator.All experimental results are displayed in Figure <ref>.Experiment 1 (Dependence on n): We first show nonparanormal estimators have “parametric” O(n) dependence on n, unlike Î_kNN, which converges far more slowly. For large n, MSEs of Î_G, Î_ρ, and Î_τ are close to that of Î. Experiment 2 (Non-Gaussian Marginals): Next, we show nonparanormal estimators are robust to non-Gaussianity of the marginals, unlike Î. We applied a nonlinear transformation f to a fraction α∈ [0, 1] of dimensions of Gaussian data. That is, we drew Z_1,...,Z_n i.i.d.∼𝒩(0,Σ) and then used data X_1,...,X_n, whereX_i,j = {[ T(Z_i,j)j < α D;Z_i,j j ≥α D ]., ∀ i ∈ [n], j ∈ [D],for a diffeomorphism T. Here, we use T(z) = e^z. The Appendix shows similar results for several other T. Î performs poorly even when α is quite small. Poor performance of Î_kNN may be due to discontinuity of the density at x = 0.Experiment 3 (Outliers):We now show that nonparanormal estimators are far more robust to the presence of outliers than Î or Î_kNN. To do this, we added outliers to the data according to the method of <cit.>. After drawing Gaussian data, we independently select ⌊β n ⌋ samples in each dimension, and replace each i.i.d. uniformly at random from {-5,+5}. Performance of Î degrades rapidly even for small β. Î_kNN can fail for atomic distributions, Î_kNN = ∞ whenever at least k samples are identical. This mitigate this, we increased k to 20 and ignored trials where Î_kNN = ∞, but Î_kNN ceased to give any finite estimates when β was sufficiently large.For small values of β, nonparanormal estimators surprisingly improve. We hypothesize this is due to convexity of the mutual information functional Eq. (<ref>) in Σ. By Jensen's inequality, estimators which plug-in an approximately unbiased estimate Σ̂ of Σ are biased towards overestimating I. Adding random (uncorrelated) noise reduces estimated dependence, moving the estimate closer to the true value.If this nonlinearity is indeed a major source of bias, it may be possible to derive a von Mises-type bias correction (see <cit.>) accounting for higher-order terms in the Taylor expansion of the log-determinant. Experiment 4 (Dependence on Σ): Here, we verify our results in [sec:Sigma_lower_bound]Section <ref> showing that MSE of rank-based estimators approaches ∞ as |Σ| → 0, while MSE of Î is independent of Σ. Here, we set D = 2 and Σ as in Eq. (<ref>), varying σ∈ [0,1]. Indeed, the MSE of Î does not change, while the MSEs of Î_G, Î_ρ, and Î_τ all increase as σ→ 1. This increase seems mild in practice, with performance worse than of Î only when σ > 0.99. Î_τ appears to perform far better than Î_G and Î_ρ in this regime.Performance of I_kNN degrades far more quickly as σ→ 1. This phenomenon is explored by <cit.>, who lower bound error of I_kNN in the presence of strong dependencies, and proposed a correction to improve performance in this case.It is also interesting that errors of Î_ρ and Î_τ drop as σ→ 0. This is likely because, in this regime, the main source of error is the variance of ρ̂ and τ̂ (as -log(1 - σ^2) ≈σ^2 when σ≈ 0). When n →∞ and D is fixed, both 2sin(πρ̂/6) and sin(πτ̂/2) are asymptotically normal estimates of σ, with asymptotic variances proportional to (1 - σ^2)^2 <cit.>. 
§ ESTIMATING ENTROPY

Thus far, we have discussed estimation of the mutual information I(X). Mutual information is convenient because it is invariant under marginal transformations, and hence I(X) = I(f(X)) depends only on Σ. While the entropy H(X) does depend on the marginal transform f, fortunately, by Eq. (<ref>), H(X) differs from I(X) only by a sum of univariate entropies. Univariate nonparametric entropy estimation has been studied extensively, and there exist several estimators (e.g., based on sample spacings <cit.>, kernel density estimates <cit.>, or k-nearest neighbor methods <cit.>) that can estimate H(X_j) at the rate ≍ n^-1 in MSE under relatively mild conditions on the marginal density p_j. While the precise assumptions vary with the choice of estimator, they are mainly (a) that p_j be lower bounded on its support or have particular (e.g., exponential) tail behavior, and (b) that p_j be smooth, typically quantified by a Hölder or Sobolev condition. Details of these assumptions are in the Appendix. Under these conditions, there exist estimators Ĥ_1,...,Ĥ_D and a constant C > 0 such that

E[ (Ĥ_j - H(X_j))^2 ] ≤ C/n, for all j ∈ [D].

Combining these estimators with an estimator, say Î_ρ,z, of mutual information gives an estimator of entropy:

Ĥ_ρ,z := ∑_j=1^D Ĥ_j - Î_ρ,z.

If we assume z = λ_D(Σ) is bounded below by a positive constant, combining inequality (<ref>) with Corollary <ref> gives

E[ ( Ĥ_ρ,z - H(X) )^2 ] ≤ C D^2/n,

where the constant C may differ from that in (<ref>) but is independent of n and D.
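As an illustration, here is a hedged sketch of this decomposition using the classical Vasicek m-spacing estimator for the univariate entropies (one of the spacing-based options cited above; the choice m ≈ √n and the boundary handling are our own simplifications):

import numpy as np

def entropy_spacing(x, m=None):
    # Vasicek m-spacing estimate of differential entropy (in nats):
    # H ~ mean_i log( n * (x_(i+m) - x_(i-m)) / (2m) ), with indices
    # clamped at the boundaries. Ties can make a spacing zero, so a tiny
    # jitter may be needed for heavily discretized data.
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    if m is None:
        m = max(1, int(round(np.sqrt(n))))
    hi = np.minimum(np.arange(n) + m, n - 1)
    lo = np.maximum(np.arange(n) - m, 0)
    return np.mean(np.log(n * (x[hi] - x[lo]) / (2.0 * m)))

def entropy_nonparanormal(X, z=1e-3):
    # H(X) = sum_j H(X_j) - I(X), with I estimated via Spearman's rho
    # (reuses latent_cov_spearman and project_psd from the earlier sketch).
    H_marg = sum(entropy_spacing(X[:, j]) for j in range(X.shape[1]))
    Sigma_hat = project_psd(latent_cov_spearman(X), z)
    return H_marg + 0.5 * np.linalg.slogdet(Sigma_hat)[1]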
§ CONCLUSIONS AND FUTURE WORK

This paper suggests nonparanormal information estimation as a practical compromise between the difficult nonparametric case and the restrictive Gaussian case. We proposed three estimators for this problem, and provided the first upper bounds for nonparanormal information estimation. We also provided lower bounds showing how the dependence on Σ differs from the Gaussian case, and we demonstrated empirically that nonparanormal estimators are more robust than Gaussian estimators, even when the dimension is too high for fully nonparametric estimators.

Collectively, these results suggest that, by scaling to moderate or high dimensionality without relying on Gaussianity, nonparanormal information estimators may be effective tools with a number of machine learning applications. While the best choice of information estimator inevitably depends on context, as a crude off-the-shelf guide for practitioners, the estimators we might suggest, in order of preference, are:

* fully nonparametric if D < 6 and n > max{100, 10^D};
* Î_ρ if D^2/n is small and the data may have outliers;
* Î_τ if D^2/n is small and dependencies may be strong;
* Î_G otherwise;
* Î only given strong belief that the data are nearly Gaussian.

There are many natural open questions in this line of work. First, in the nonparanormal model, we focused on estimating mutual information I(X), which does not depend on the marginal transforms f, and entropy, which decomposes into I(X) and 1-dimensional entropies. In both cases, the additional structure imposed by the nonparanormal model allows estimation in higher dimensions than fully nonparametric models. Can nonparanormal assumptions lead to higher dimensional estimators for the many other useful nonlinear functionals of densities (e.g., L_p norms/distances and more general (e.g., Rényi or Tsallis) entropies, mutual informations, and divergences) that do not decompose?

Second, there is a gap between our upper bound rate of ‖Σ‖_2^2 D^2/n and the only known lower bound of 2D/n (from the Gaussian case), though we also showed that bounds for rank-based estimators depend on Σ. Is quadratic dependence on D optimal? How much do rates improve under structural assumptions on Σ? Upper bounds should be derived for other estimators, such as Î_G and Î_τ. The 2D/n lower bound proof of <cit.> for the Gaussian case, based on the Cramer-Rao inequality <cit.>, is unlikely to tighten in the nonparanormal case, since Fisher information is invariant to diffeomorphisms of the data. Hence, a new approach is needed if the lower bound in the nonparanormal case is to be raised.

Finally, our work also applies to estimating the log-determinant log|Σ| of the latent correlation matrix in a nonparanormal model. In addition to information estimation, the work of <cit.> on estimating log|Σ| in the Gaussian setting was motivated by the use of log|Σ| in several other multivariate statistical tools, including quadratic discriminant analysis (QDA) and MANOVA <cit.>. Can our estimators lead to more robust nonparanormal versions of these procedures?

§ LEMMAS

Our proofs rely on the following lemmas.

Lemma (Convexity of the inverse operator norm): The function A ↦ ‖A^-1‖_2 is convex over A ≻ 0.

Proof: For A, B ≻ 0 and τ ∈ [0,1], let C := τA + (1 - τ)B. Then,

‖C^-1‖_2 = 1 / inf_{‖x‖_2 = 1} x^T C x = 1 / inf_{‖x‖_2 = 1} [ τ x^T A x + (1 - τ) x^T B x ]
≤ 1 / [ τ inf_{‖x‖_2 = 1} x^T A x + (1 - τ) inf_{‖x‖_2 = 1} x^T B x ]
≤ τ / inf_{‖x‖_2 = 1} x^T A x + (1 - τ) / inf_{‖x‖_2 = 1} x^T B x = τ ‖A^-1‖_2 + (1 - τ) ‖B^-1‖_2,

via convexity of the function x ↦ 1/x on (0, ∞).

Lemma (Mean-value bound on the log-determinant): Suppose A, B ≻ 0. Then, for λ := min{λ_D(A), λ_D(B)},

| log|A| - log|B| | ≤ 1/λ ‖A - B‖_F.

Proof: First recall that the log-determinant is continuously differentiable over the strictly positive definite cone, with ∇_X log|X| = X^-1 for any X ≻ 0. Hence, by the matrix-valued version of the mean value theorem,

log|A| - log|B| = tr( C^-1 (A - B) ),

where C = τA + (1 - τ)B for some τ ∈ (0, 1). Since, for positive definite matrices, this inner product can be bounded by the product of the operator and Frobenius norms, and clearly C ≻ 0, we have

| log|A| - log|B| | ≤ ‖C^-1‖_2 ‖A - B‖_F.

Finally, it follows by Lemma <ref> that ‖C^-1‖_2 ≤ max{‖A^-1‖_2, ‖B^-1‖_2} = 1/λ, and hence

| log|A| - log|B| | ≤ 1/λ ‖A - B‖_F.

§ PROOFS OF MAIN RESULTS

Here, we give proofs of our main theoretical results, beginning with upper bounds on the MSE of Î_ρ,z and proceeding to minimax lower bounds in terms of Σ.

§ UPPER BOUNDS ON THE MSE OF Î_ρ

Lemma (Bias bound):

| E[ log|Σ̂_z| ] - log|Σ| | ≤ C ( ‖Σ‖_2 D/(z√n) + log(|Σ_z|/|Σ|) ).

Proof: By the triangle inequality,

| E[ log|Σ̂_z| ] - log|Σ| | ≤ | E[ log|Σ̂_z| ] - log|Σ_z| | + | log|Σ_z| - log|Σ| |.

For the first term, applying the matrix mean value theorem (Lemma <ref>, using that both Σ̂_z and Σ_z lie in 𝒮(z)) and the inequality ‖A‖_F ≤ √D ‖A‖_2,

| E[ log|Σ̂_z| ] - log|Σ_z| | ≤ E[ | log|Σ̂_z| - log|Σ_z| | ] ≤ 1/z E[ ‖Σ̂_z - Σ_z‖_F ] ≤ √D/z E[ ‖Σ̂_z - Σ_z‖_2 ] ≤ C_MZ ‖Σ‖_2 D/(z√n),

where we used Theorem 1 of <cit.>, which gives a constant C_MZ such that

E[ ‖Σ̂_z - Σ_z‖_2 ] ≤ C_MZ ‖Σ‖_2 √(D/n).

Via the bound ‖Σ‖_2 ≤ √D ‖Σ‖_∞, this reduces to

E[ ‖Σ̂_z - Σ_z‖_2 ] ≤ C_MZ D/√n.
Lemma (Variance bound):

Var[ Î ] ≤ 36 π^2 D^2/(z^2 n).

Proof: By the Efron-Stein inequality, since X_1,…,X_n are independent and identically distributed,

Var[ Î ] ≤ 1/2 ∑_i=1^n E[ ( log|Σ̂_z| - log|Σ̂_z^(i)| )^2 ] = n/2 E[ ( log|Σ̂_z| - log|Σ̂_z^(1)| )^2 ],

where Σ̂_z^(1) is our estimator after independently re-sampling the first sample X_1. Applying the matrix mean value theorem (Lemma <ref>), and using that both Σ̂_z and Σ̂_z^(1) lie in 𝒮(z), so that their minimum eigenvalues are at least z, we have

| log|Σ̂_z| - log|Σ̂_z^(1)| | ≤ 1/z ‖Σ̂_z - Σ̂_z^(1)‖_F.

Since 𝒮(z) is convex and the Frobenius norm is induced by an inner product, the operation of projecting onto 𝒮(z) is a contraction. In particular,

‖Σ̂_z - Σ̂_z^(1)‖_F ≤ ‖Σ̂ - Σ̂^(1)‖_F.

Applying the mean value theorem to the function x ↦ 2sin(π/6 x),

‖Σ̂ - Σ̂^(1)‖_F^2 = ∑_{j,k=1}^D ( Σ̂ - Σ̂^(1) )_j,k^2 ≤ π^2/9 ∑_{j,k=1}^D ( ρ̂_j,k - ρ̂_j,k^(1) )^2 = π^2/9 ‖ρ̂ - ρ̂^(1)‖_F^2.

From the formula

ρ̂_j,k = 1 - 6 ∑_{i=1}^n d_{i,j,k}^2 / ( n(n^2 - 1) )

(where d_{i,j,k} denotes the difference between the ranks of X_i,j and X_i,k in X_1,j,...,X_n,j and X_1,k,...,X_n,k, respectively), one can see, since |d_{1,j,k} - d_{1,j,k}^(1)| ≤ n and, for i ≠ 1, |d_{i,j,k} - d_{i,j,k}^(1)| ≤ 1, that

| ρ̂_j,k - ρ̂_j,k^(1) | ≤ 18/n, and hence that ‖ρ̂ - ρ̂^(1)‖_F ≤ 18D/n.

It follows from inequality (<ref>) that

‖Σ̂_z - Σ̂_z^(1)‖_F ≤ 6πD/n.

Altogether, this gives

| log|Σ̂_z| - log|Σ̂_z^(1)| | ≤ 6πD/(zn).

Then, McDiarmid's inequality gives, for all ε > 0,

P[ | Î - E[Î] | > ε ] ≤ 2 exp( -n z^2 ε^2/(18 π^2 D^2) ).

This translates to a variance bound of

Var[ Î ] ≤ 36 π^2 D^2/(z^2 n).

§.§ Lower bound for rank-based estimators in terms of Σ

One (perhaps surprising) result of <cit.> is that, as long as D/n → 0, the convergence rate of the estimator is independent of the true correlation structure Σ. Here, we show that this desirable property does not hold in the nonparanormal case.

Theorem: Consider the 2-dimensional case X_1,...,X_n i.i.d. ∼ 𝒩(0,Σ), with

Σ = ( 1  σ; σ  1 ),

and let σ_* ∈ (0,1). Suppose an estimator Î = Î(R) of I_σ = -1/2 log(1 - σ^2) is a function of the empirical rank matrix R ∈ [n]^n × 2 of X (as defined in (<ref>)). Then, there exists a constant C > 0, depending only on n, such that the worst-case MSE of Î over σ ∈ (0,σ_*) satisfies

sup_{σ ∈ (0,σ_*)} E[ ( Î(R) - I_σ )^2 ] ≥ (1/64) ( C - log(1 - σ_*^2) )^2 → ∞ as σ_* → 1.

Proof: Note that the rank matrix R can take only finitely many values. Let ℛ be the set of all (n!)^D possible rank matrices and let ℛ_1 ⊆ ℛ be the set of rank matrices that are perfectly correlated. Then, as σ → 1, P[R ∈ ℛ_1] → 1, so, in particular, we can pick σ_0 (depending only on n) such that, for all σ ≥ σ_0, P[R ∈ ℛ_1] ≥ 1/2. Since the data are i.i.d., all rank matrices in ℛ_1 have equal probability. It follows that

D_TV(P_σ_0, P_σ_*) = 1/2 ‖P_σ_0 - P_σ_*‖_1 ≤ 1/2.

Finally, by Le Cam's lemma (see, e.g., Section 2.3 of <cit.>),

inf_Î sup_{σ ∈ {σ_0, σ_*}} E[ ( Î - I_σ )^2 ] ≥ (I_σ_* - I_σ_0)^2/8 · ( 1 - D_TV(P_σ_0, P_σ_*) ) ≥ ( log(1 - σ_0^2) - log(1 - σ_*^2) )^2 / 64.

§ DETAILS OF EXPERIMENTAL METHODS

Here, we present details needed to reproduce our numerical simulations. Note that MATLAB source code for these experiments is available at [Omitted for anonymity], including a single runnable script that performs all experiments and generates all figures presented in this paper. In short, experiments report empirical mean squared errors based on 100 i.i.d. trials of each condition. We initially computed 95% confidence intervals, but these intervals were consistently smaller than marker sizes, so we omitted them to avoid cluttering the plots.
Except as specified otherwise, each experiment followed the same basic structure: In each trial, a random correlation matrix Σ ∈ [-1,1]^D × D was drawn by normalizing a covariance matrix drawn from a Wishart distribution W(I_D, D) with identity scale matrix and D degrees of freedom. Data X_1,...,X_n were then drawn i.i.d. from 𝒩(0, Σ). All estimators were applied to the same data. Unless specified otherwise, n = 100 and D = 25.

§.§ Computational Considerations

In general, the running time of all the nonparanormal estimators considered is O(D n log n + D^2 n + D^3) (i.e., O(D n log n) to rank or Gaussianize the variables in each dimension, O(D^2 n) to compute the covariance matrix, and O(D^3) to compute the log-determinant). All log-determinants log|Σ| were computed by summing the logarithms of the diagonal entries of the Cholesky decomposition of Σ, as this is widely considered to be a fast and numerically stable approach. Note, however, that faster (O(D)-time) randomized algorithms <cit.> have been proposed to approximate the log-determinant.

§ ADDITIONAL EXPERIMENTAL RESULTS

Here, we present variants of the experiments presented in the main paper, which support but are not necessary for illustrating our conclusions.

§.§ Effects of Other Marginal Transformations

In Section <ref>, we showed that the Gaussian estimator Î is highly sensitive to failure of the Gaussian assumption for even a small fraction of the marginals. Figure <ref> illustrates this for the transformation x ↦ exp(x), but we show here that this is not specific to the exponential transformation. As shown in Figure <ref>, nearly identical results hold when the marginal transformation is the hyperbolic tangent function x ↦ tanh(x), the cubic function x ↦ x^3, the sigmoid function x ↦ 1/(1 + e^-x), or the standard normal CDF.

§ SPECIFIC ASSUMPTIONS FOR ESTIMATING H(X)

As shown in the main paper, to estimate the entropy of a nonparanormal distribution at the rate O(D^2/n), it suffices to estimate the univariate entropy of each variable X_j at the rate O(1/n). To do this, additional assumptions are required on the marginal densities p_j. Here, we give detailed sufficient conditions for this. Letting S_j ⊆ ℝ denote the support of p_j, the two key assumptions can be roughly classified as follows:

* 1/2-order smoothness[This is stronger than the 1/4-order smoothness mandated by the minimax rate for entropy estimation <cit.>, but appears necessary for most practical entropy estimators. See Section 4 of <cit.> for further details.]; e.g., a Hölder condition

sup_{x ≠ y ∈ S_j} |p_j(x) - p_j(y)| / |x - y|^1/2 < L,

or a (slightly weaker) Sobolev condition

∫_{S_j} p_j^2(x) dx < ∞ and ∫_ℝ ( |ξ|^1/2 |ℱ[p_j](ξ)| )^2 dξ < L

(where ℱ[p_j](ξ) denotes the Fourier transform of p_j evaluated at ξ), for some constant L > 0.

* absolute bounds p_j(x) ∈ [κ_1, κ_2] for all x ∈ S_j, or (a_j, b_j)-exponential tail bounds

p_j(x) / exp(-a_j x^{b_j}) ∈ [κ_1, κ_2] for all x ∈ S_j,

for some κ_1, κ_2 ∈ (0, ∞).

Under these assumptions, there are a variety of nonparametric univariate entropy estimators that have been shown to converge at the rate O(1/n) <cit.>.

§ LOWER BOUNDING THE EIGENVALUES OF A BANDABLE MATRIX

Recall that a matrix Σ ∈ ℝ^D × D is called c-bandable if there exists a constant c ∈ (0,1) such that, for all i,j ∈ [D], |Σ_i,j| ≤ c^|i - j|. Here, we show simple bounds on the eigenvalues of a bandable correlation matrix Σ. While this result is fairly straightforward, a brief search of the literature turned up no comparable results.
§ ADDITIONAL EXPERIMENTAL RESULTS

Here, we present variants of the experiments presented in the main paper, which support our conclusions but are not necessary for illustrating them.

§.§ Effects of Other Marginal Transformations

In Section <ref>, we showed that the Gaussian estimator Î is highly sensitive to failure of the Gaussian assumption for even a small fraction of marginals. Figure <ref> illustrates this for the transformation x ↦ exp(x), but we show here that this is not specific to the exponential transformation. As shown in Figure <ref>, nearly identical results hold when the marginal transformation f is the hyperbolic tangent x ↦ tanh(x), the cubic function x ↦ x³, the sigmoid function x ↦ 1/(1 + e^(−x)), or the standard normal CDF.

§ SPECIFIC ASSUMPTIONS FOR ESTIMATING H(X)

As shown in the main paper, to estimate the entropy of a nonparanormal distribution at the rate O(D²/n), it suffices to estimate the univariate entropy of each variable X_j at the rate O(1/n). To do this, additional assumptions are required on the marginal densities p_j. Here, we give detailed sufficient conditions for this. Letting S_j ⊆ ℝ denote the support of p_j, the two key assumptions can be roughly classified as follows:

* 1/2-order smoothness[This is stronger than the 1/4-order smoothness mandated by the minimax rate for entropy estimation <cit.>, but appears necessary for most practical entropy estimators. See Section 4 of <cit.> for further details.]; e.g., a Hölder condition:

sup_x ≠ y ∈ S_j |p_j(x) − p_j(y)| / |x − y|^(1/2) < L,

or a (slightly weaker) Sobolev condition:

∫_S_j p_j²(x) dx < ∞ and ∫_S_j (|ξ|^(1/2) |ℱ[p_j](ξ)|)² dξ < L,

(where ℱ[p_j](ξ) denotes the Fourier transform of p_j evaluated at ξ) for some constant L > 0.

* absolute bounds p_j(x) ∈ [κ_1, κ_2] for all x ∈ S_j, or (a_j, b_j)-exponential tail bounds

p_j(x) / exp(−a_j x^(b_j)) ∈ [κ_1, κ_2] for all x ∈ S_j

for some κ_1, κ_2 ∈ (0, ∞). Under these assumptions, there is a variety of nonparametric univariate entropy estimators that have been shown to converge at the rate O(1/n) <cit.>.

§ LOWER BOUNDING THE EIGENVALUES OF A BANDABLE MATRIX

Recall that a matrix Σ ∈ ℝ^(D × D) is called c-bandable, for a constant c ∈ (0,1), if, for all i, j ∈ [D], |Σ_i,j| ≤ c^|i − j|. Here, we show simple bounds on the eigenvalues of a bandable correlation matrix Σ. While this result is fairly straightforward, a brief search of the literature turned up no comparable results. <cit.>, who originally introduced the class of bandable covariance matrices, separately assumed the existence of lower and upper bounds on the eigenvalues to prove their results. In the context of information estimation, this result is of particular interest because, when c < 1/3, it implies a dimension-free positive lower bound on the minimum eigenvalue of Σ, hence complementing our upper bound in Theorem <ref>.

Suppose a symmetric matrix Σ ∈ ℝ^(D × D) is c-bandable and has identical diagonal entries Σ_j,j = 1. Then, the eigenvalues λ_1(Σ), …, λ_D(Σ) of Σ can be bounded as

(1 − 3c)/(1 − c) ≤ λ_1(Σ), …, λ_D(Σ) ≤ (1 + c)/(1 − c).

In particular, when c < 1/3, we have 0 < (1 − 3c)/(1 − c) ≤ λ_D(Σ).

The proof is based on the Gershgorin circle theorem <cit.>. In the case of a real symmetric matrix Σ, this states that the eigenvalues of Σ lie within a union of intervals

{λ_1(Σ), …, λ_D(Σ)} ⊆ ⋃_j=1^D [Σ_j,j − R_j, Σ_j,j + R_j],

where R_j := ∑_k ≠ j |Σ_j,k| is the sum of the absolute values of the non-diagonal entries of the j-th row of Σ. In our case, since the diagonal entries of Σ are all Σ_j,j = 1, we simply have to bound

max_j ∈ [D] R_j ≤ max_j ∈ [D] ∑_k ≠ j c^|k − j|.

This geometric sum is maximized when j = ⌈D/2⌉, giving

R_j ≤ 2 ∑_δ=1^⌊D/2⌋ c^δ = 2c(1 − c^⌊D/2⌋)/(1 − c) ≤ 2c/(1 − c).

Finally, the inclusion (<ref>) gives

λ_D(Σ) ≥ 1 − 2c/(1 − c) = (1 − 3c)/(1 − c),

which is positive when c < 1/3, and, similarly, every eigenvalue is at most 1 + 2c/(1 − c) = (1 + c)/(1 − c).
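As a quick numerical sanity check of the lemma (our own illustration, not part of the paper), one can verify both bounds on the extreme c-bandable case Σ_i,j = c^|i − j|:

import numpy as np

D, c = 50, 0.2                        # c < 1/3, so a positive lower bound applies
idx = np.arange(D)
Sigma = c ** np.abs(idx[:, None] - idx[None, :])   # worst-case c-bandable matrix
eigs = np.linalg.eigvalsh(Sigma)
lo, hi = (1 - 3 * c) / (1 - c), (1 + c) / (1 - c)
print(eigs.min(), ">=", lo, "and", eigs.max(), "<=", hi)
assert lo <= eigs.min() + 1e-9 and eigs.max() <= hi + 1e-9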
DepQBF 6.0: A Search-Based QBF Solver Beyond Traditional QCDCL

Florian Lonsing and Uwe Egly

Supported by the Austrian Science Fund (FWF) under grant S11409-N23. This article will appear in the proceedings of the 26th International Conference on Automated Deduction (CADE-26), LNCS, Springer, 2017.

We present the latest major release, version 6.0, of the quantified Boolean formula (QBF) solver DepQBF, which is based on QCDCL. QCDCL is an extension of the conflict-driven clause learning (CDCL) paradigm implemented in state of the art propositional satisfiability (SAT) solvers. The Q-resolution calculus (QRES) is a QBF proof system which underlies QCDCL. QCDCL solvers can produce proofs of QBFs in prenex conjunctive normal form (PCNF) as a byproduct of the solving process. In contrast to traditional QCDCL based on QRES, DepQBF implements a variant of QCDCL which is based on a generalization of QRES. This generalization is due to a set of additional axioms and leaves the original Q-resolution rules unchanged. The generalization of QRES enables QCDCL to potentially produce exponentially shorter proofs than the traditional variant. We present an overview of the features implemented in DepQBF and report on experimental results which demonstrate the effectiveness of generalized QRES in QCDCL.

§ INTRODUCTION

Propositional satisfiability (SAT) solvers based on conflict-driven clause learning (CDCL) <cit.> implement a combination of the DPLL algorithm <cit.> and propositional resolution <cit.> to derive learned clauses from a CNF to be solved. CDCL has been extended to solve quantified Boolean formulas (QBFs) <cit.>, resulting in the QCDCL approach <cit.>. The logic of QBFs allows for explicit universal and existential quantification of propositional variables. As a consequence, the satisfiability problem of QBFs is PSPACE-complete.

In contrast to SAT solving, where CDCL is the dominant solving paradigm in practice, QCDCL is complemented by variable expansion <cit.>. This approach successively eliminates variables from a QBF until it reduces to either true or false. Many modern solvers (e.g. <cit.>) implement expansion by counter-example guided abstraction refinement (CEGAR) <cit.>.

The Q-resolution calculus (QRES) <cit.> is a QBF proof system that underlies QCDCL in a way that is analogous to propositional resolution in CDCL. The empty clause is derivable from a PCNF ψ by QRES iff ψ is unsatisfiable. According to QBF proof complexity, there is an exponential separation between the sizes of proofs that variable expansion and Q-resolution can produce for certain QBFs <cit.>. This theoretical result suggests to combine such orthogonal proof systems in QBF solvers to leverage their individual strengths.

As a first step towards a solver framework that allows for the combination of QBF proof systems in a systematic way, we present the latest major release, version 6.0, of the QCDCL solver DepQBF.[DepQBF is licensed under GPLv3: <http://lonsing.github.io/depqbf/>] In contrast to traditional QCDCL based on QRES <cit.>, DepQBF implements a variant of QCDCL that relies on a generalization of QRES. This generalization is due to a set of new axioms added to QRES <cit.>. In practice, derivations made by the added axioms in QCDCL are based on arbitrary QBF proof systems.
As a consequence, when applying proof systems that are orthogonal to Q-resolution, the generalization of QRES via the new axioms enables QCDCL as implemented in DepQBF to potentially produce exponentially shorter proofs than traditional QCDCL. We report on experiments where we compare DepQBF to state of the art QBF solvers. Experimental results demonstrate the effectiveness of generalized QRES in QCDCL. Additionally, we briefly summarize the evolution of DepQBF since the first version 0.1 <cit.>. We relate the features that were added to the different versions of DepQBF over time to the enhanced variant of QCDCL implemented in version 6.0.

§ PRELIMINARIES

A QBF ψ := Q.φ in prenex conjunctive normal form (PCNF) consists of a quantifier prefix Q := Q_1X_1 … Q_nX_n and a CNF φ not containing tautological clauses. The CNF φ is defined over the propositional variables X_1 ∪ … ∪ X_n that appear in Q. The variable sets X_i are pairwise disjoint and Q_i ≠ Q_i+1 for Q_i ∈ {∀, ∃}. QBFs ψ := Q.φ in prenex disjunctive normal form (PDNF) are defined analogously to PCNFs, where φ is a DNF consisting of cubes. A cube is a conjunction of literals. The quantifier quant(l) of a literal l is Q_i if the variable var(l) of l appears in X_i. If quant(l) = Q_i and quant(k) = Q_j, then l ≤_Q k iff i ≤ j.

An assignment A maps variables of a QBF Q.φ to truth values true (⊤) and false (⊥). We represent A = {l_1, …, l_n} as a set of literals such that if a variable x is assigned true (false) then l_i ∈ A with l_i = x (l_i = x̄), where x̄ is the negation of x. Further, var(l_i) ≠ var(l_j) for any l_i, l_j ∈ A with i ≠ j. The PCNF ψ under assignment A, written as ψ[A], is the PCNF obtained from ψ in which, for all l ∈ A, all clauses containing l are removed, all occurrences of l̄ are deleted, and var(l) is removed from the prefix. If the CNF of ψ[A] is empty (respectively, contains the empty clause ∅), then it is satisfied (falsified) by A and A is a satisfying (falsifying) assignment, written as ψ[A] = ⊤ (ψ[A] = ⊥). A PDNF under an assignment A and an empty cube are defined in a way dual to PCNFs and empty clauses. A QBF Q.φ with Q_1 = ∃ (Q_1 = ∀) is satisfiable iff, for x ∈ X_1, Q.φ[{x}] or (and) Q.φ[{x̄}] is satisfiable. Two QBFs ψ and ψ' are satisfiability-equivalent (ψ ≡_sat ψ') iff ψ is satisfiable exactly when ψ' is satisfiable.

§ QCDCL AND THE GENERALIZED Q-RESOLUTION CALCULUS

In the following, we present the variant of QCDCL implemented in DepQBF that relies on a generalization of the Q-resolution calculus (QRES). We illustrate the workflow of that variant in Fig. <ref>. In general, QCDCL is based on the successive generation of assignments that guide the application of the inference rules of QRES to derive learned clauses and cubes from a given input PCNF ψ = Q.φ. Learned cubes are dual to clauses. While learned clauses represent assignments that falsify the CNF φ of ψ, learned cubes represent assignments that satisfy φ. The empty cube is derivable from a PCNF ψ by QRES iff ψ is satisfiable. Based on our presentation of the rules of QRES, we illustrate the differences between traditional QCDCL and the variant implemented in DepQBF.

A QCDCL solver maintains a PCNF θ = Q.θ'
(PDNF γ = Q.γ') consisting of a CNF θ' (DNF γ') of learned clauses (cubes). The clauses in θ' are added conjunctively to φ to obtain ψ_θ = Q.(φ ∧ (⋀_C ∈ θ' C)), and the cubes in γ' are added disjunctively to φ to obtain ψ_γ = Q.(φ ∨ (⋁_C ∈ γ' C)). It holds that ψ ≡_sat ψ_θ and ψ ≡_sat ψ_γ. Initially, the current assignment A, the PCNF θ, and the PDNF γ are empty. We use the notation C, C', and C_L for both clauses and cubes.

During propagation, the formulas ψ_θ and ψ_γ are first simplified under the current assignment A by computing ψ_θ[A] and ψ_γ[A]. Then universal and existential reduction is applied to ψ_θ[A] and to ψ_γ[A] based on the following inference rule.

Definition (reduction, rule red): Let ψ = Q.φ be a PCNF. From C ∪ {l}, derive C, provided that (1) C is a clause, quant(l) = ∀, and l' <_Q l for all l' ∈ C with quant(l') = ∃, or (2) C is a cube, quant(l) = ∃, and l' <_Q l for all l' ∈ C with quant(l') = ∀.

Universal (existential) reduction of clauses (cubes) by rule red eliminates trailing universal (existential) literals from a clause (cube) with respect to the linear quantifier ordering in the prefix of the PCNF ψ. We write red(C) = C' to denote the clause (cube) C' resulting from clause (cube) C by fully reducing universal (existential) literals. Let ψ_θ' and ψ_γ' denote the formulas obtained by applying universal (existential) reduction to all the clauses (cubes) in ψ_θ[A] (ψ_γ[A]) until saturation. New assignments are generated by unit literal detection with respect to ψ_θ' and ψ_γ'. If a PCNF (PDNF) contains a unit clause (cube) C = (l), where quant(l) = ∃ (quant(l) = ∀), then literal l is unit and ψ ≡_sat ψ[A'] where A' = {l} (A' = {l̄}). Assignment A is extended by assignments A' derived from unit clauses (cubes) in ψ_θ' (ψ_γ'). For every unit clause (cube) C' ∈ ψ_θ' (C' ∈ ψ_γ') with C' = (l), the corresponding assignment A' := {l} (A' := {l̄}) is recorded.
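To illustrate rule red as used during propagation, here is a small Python sketch of full universal (existential) reduction of a clause (cube); the data model, with literals as signed integers and a prefix map from variables to a quantifier and a prefix level, is our own simplification and does not reflect DepQBF's actual C data structures.

def fully_reduce(term, prefix, is_clause=True):
    """Remove all trailing reducible literals from a clause or cube.

    prefix: dict var -> (quant, level) with quant in {'A', 'E'};
    a smaller level means the variable is quantified further to the left.
    """
    blocking = 'E' if is_clause else 'A'   # literals that block reduction
    # the rightmost level among blocking literals in the term
    max_block = max((prefix[abs(l)][1] for l in term
                     if prefix[abs(l)][0] == blocking), default=-1)
    # keep a reducible literal only if some blocking literal is to its right
    return {l for l in term
            if prefix[abs(l)][0] == blocking
            or prefix[abs(l)][1] < max_block}

# Example: prefix E x1, A x2, E x3; the clause (x1 or -x2) reduces to (x1).
prefix = {1: ('E', 0), 2: ('A', 1), 3: ('E', 2)}
assert fully_reduce({1, -2}, prefix) == {1}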
After propagation, in conflict/solution detection it is checked whether ψ_θ' is unsatisfiable or whether ψ_γ' is satisfiable (only one of the two cases can occur). To this end, incomplete methods are applied. In traditional QCDCL, for example, it is syntactically checked if the current assignment A is falsifying or satisfying, i.e., whether ψ_θ' contains the empty clause (i.e., ψ_θ' = ⊥) or whether ψ_γ' contains the empty cube (i.e., ψ_γ' = ⊤). In DepQBF, we extend these incomplete syntactic checks to incomplete semantic checks based on arbitrary QBF decision procedures (proof systems) that are applied to ψ_θ' and ψ_γ' in a resource-bounded way. If neither ψ_θ' is found unsatisfiable nor ψ_γ' is found satisfiable by the incomplete satisfiability checks, then in decision making A is extended by heuristically assigning some decision variable x from the leftmost quantifier block of ψ[A] (A := A ∪ {l} where var(l) = x), and propagation continues. Assignments by decision making must follow the prefix ordering of Q, in contrast to assignments by propagation (unit literals), which results in assignments of the following kind.

Definition: Assignments generated by decision making and propagation in QCDCL are called QCDCL assignments.

If ψ_θ' (ψ_γ') is found unsatisfiable (satisfiable) in conflict/solution detection then a learned clause (cube) is derived using QRES, depending on the incomplete satisfiability checks. In traditional QCDCL, conflict/solution detection relies only on falsifying or satisfying assignments. If ψ_θ' = ⊥ then ψ_θ' contains an empty clause C' = ∅ such that there is a clause C ∈ ψ_θ with C' = red(C[A]). Clause C is the falsified clause with respect to assignment A. If C appears in the given PCNF ψ then in traditional QRES it is derived trivially by the following axiom.

Definition (clause axiom, rule cl-init): Let ψ = Q.φ be a PCNF. Derive clause C, provided that C is a clause and C ∈ φ.

If ψ_θ' ≠ ⊥ but ψ_γ' = ⊤ then either (1) ψ_γ' contains an empty learned cube C' = ∅ such that there is a cube C ∈ ψ_γ with C' = red(C[A]), or (2) A is a satisfying assignment that satisfies all clauses in ψ_θ'. For case (2), a cube C is derived by the following axiom of traditional QRES (in either case (1) or (2), cube C is the satisfied cube with respect to A).

Definition (cube axiom, rule cu-init): Let ψ = Q.φ be a PCNF. Derive cube C = (⋀_l ∈ A l), provided that A is an assignment and φ[A] = ⊤.

DepQBF supports the application of arbitrary (incomplete) QBF decision procedures (proof systems) in conflict/solution detection and thus generalizes the syntactic checks for falsifying and satisfying assignments in traditional QCDCL. To check the satisfiability of ψ_θ', in DepQBF we apply a dynamic variant of blocked clause elimination (QBCE) <cit.>. This approach was introduced in version 5.0 of DepQBF. QBCE has been presented as a preprocessing technique to eliminate redundant blocked clauses <cit.> from a PCNF. If all clauses in ψ_θ' are satisfied under A or identified as blocked, then ψ_θ' is determined satisfiable. In our implementation, applications of QBCE are tightly integrated in the propagation phase via efficient data structures. Clauses that are blocked are temporarily considered as removed from the formula. Hence such clauses cannot be used to detect unit clauses or empty clauses during propagation.

In addition to dynamic QBCE, we implemented incomplete QBF satisfiability checks based on propositional abstractions of ψ_θ' and ψ_γ' <cit.>, which are solved using an integrated SAT solver. These abstractions are constructed by treating universally quantified literals in the given PCNF ψ in a special way. Propositional abstractions and SAT solving leverage the benefits of techniques like trivial truth and trivial falsity presented already in early search-based QBF solvers <cit.>. Additionally, the power of QU-resolution <cit.>, which is exponentially stronger than Q-resolution <cit.> but has not been applied systematically in QCDCL, is harnessed to a certain extent (cf. Example 3 in <cit.>).

As a simple way of applying a QBF decision procedure that is incomplete by its nature, we integrated the preprocessor Bloqqer <cit.> in DepQBF. Preprocessing aims at simplifying a formula within a restricted amount of time but might already solve certain formulas (cf. <cit.>). Among several techniques, Bloqqer applies bounded expansion of universally quantified variables <cit.>. Hence by integrating Bloqqer in QCDCL we in fact integrate expansion, a QBF proof system that is orthogonal to Q-resolution <cit.>. Due to usability issues, in the follow-up release version 6.02 of DepQBF we replaced Bloqqer by the expansion-based QBF solver Nenofex,[<https://github.com/lonsing/nenofex>] which is applied in a resource-bounded way.

If ψ_θ' (ψ_γ') is found unsatisfiable (satisfiable) by an incomplete decision procedure but, unlike above, A is neither falsifying nor satisfying, then a clause (cube) is derived by the following generalized axioms of QRES. These axioms are added to QRES and applied in addition to the traditional axioms <ref> and <ref>.

Definition (generalized clause axiom, rule gen-cl-init): Let ψ = Q.φ be a PCNF. Derive clause C = (⋁_l ∈ A l̄), provided that A is a QCDCL assignment and ψ[A] is unsatisfiable.

Definition (generalized cube axiom, rule gen-cu-init): Let ψ = Q.φ be a PCNF. Derive cube C = (⋀_l ∈ A l), provided that A is a QCDCL assignment and ψ[A] is satisfiable.

Note that the generalized axioms allow to derive clauses and cubes that cannot be derived by the traditional axioms <ref> and <ref> in general.
This is due to the application of arbitrary QBF decision procedures (proof systems) for satisfiability checking in conflict/solution detection or in the side conditions of the axioms, respectively. In the side conditions, the satisfiability of the PCNF ψ[A] is checked, in contrast to the formulas ψ_θ' and ψ_γ' as in conflict/solution detection. This is possible since ψ_θ' ≡_sat ψ[A] and ψ_γ' ≡_sat ψ[A]. The clause (cube) C derived by applying the generalized clause axiom <ref> (<ref>) is the falsified clause (satisfied cube) with respect to A.

During clause (cube) learning, a new learned clause (cube) C_L is derived by QRES. The falsified clause (satisfied cube) C is the start clause (cube) of a derivation of C_L. Given A, clauses (cubes) which became unit during propagation are systematically resolved based on the following Q-resolution rule.

Definition (resolution, rule res): Let ψ = Q.φ be a PCNF. From C_1 ∪ {p} and C_2 ∪ {p̄}, derive C_1 ∪ C_2, provided that {x, x̄} ⊈ (C_1 ∪ C_2) for all variables x, p̄ ∉ C_1, p ∉ C_2, and either (1) C_1, C_2 are clauses and quant(p) = ∃, or (2) C_1, C_2 are cubes and quant(p) = ∀.

Rule res does not allow the resolvent (C_1 ∪ C_2) to be a tautological clause (contradictory cube) and requires existential (universal) variables as pivots p. In general, learning produces a nonempty clause (cube) C_L ≠ ∅, which is added to the PCNF θ (PDNF γ) of learned clauses (cubes), and hence also to ψ_θ (ψ_γ). In backtracking, a certain subassignment A' ⊂ A is retracted such that C_L becomes unit in propagation. C_L is called an asserting clause (cube) <cit.>. Clauses (cubes) derived by rules <ref> and <ref> (<ref> and <ref>) are used in exactly the same way in learning to produce asserting clauses (cubes).

QCDCL terminates (“UNSAT” or “SAT” in Fig. <ref>) by deriving the empty learned clause (cube) C_L = ∅. A clause (cube) resolution proof of the unsatisfiability (satisfiability) of ψ can be obtained from the derivations of the learned clauses (cubes) up to the empty clause (cube). By applying the generalized axioms using a complete QBF decision procedure, the empty assignment A, and an unlimited amount of time, the empty clause (cube) can be derived right away from any given unsatisfiable (satisfiable) PCNF ψ. In practice it is crucial to apply incomplete polynomial-time procedures to limit the time spent on the satisfiability checks. However, the costs of frequent checks may outweigh the benefits. Hence in DepQBF, satisfiability checks for applications of the generalized axioms are dynamically disabled if they turn out to be too costly, and the traditional axioms are used instead. We refer to related work for implementation details <cit.>.

§ FEATURES OF DEPQBF

We briefly summarize the general features of DepQBF that have been incorporated since its initial version 0.1 <cit.>. Most features were described in related publications. Additionally, we comment on the compatibility of the features with the implementation of QCDCL with generalized axioms (Fig. <ref>) in version 6.0.

Dependency Schemes. Since the initial version 0.1, DepQBF has been equipped with the standard dependency scheme <cit.> to relax the linear quantifier ordering in the prefix of a given PCNF ψ. In general, dependency schemes are used to compute dependency relations D, which are binary relations over the set of variables in ψ. If (x, y) ∉ D for two variables x and y then the ordering of x and y in Q can safely be swapped. Otherwise, if (x, y) ∈ D then y is considered to depend on x. The integration of dependency schemes in QCDCL results in the following reduction rule, which is added to QRES and implemented in DepQBF.

Definition (dependency-aware reduction, rule dep-red): Let ψ = Q.φ
be a PCNF and D be a dependency relation computed using a dependency scheme. From C ∪ {l}, derive C, provided that (1) C is a clause, quant(l) = ∀, and (l, l') ∉ D for all l' ∈ C with quant(l') = ∃, or (2) C is a cube, quant(l) = ∃, and (l, l') ∉ D for all l' ∈ C with quant(l') = ∀.

Rule <ref> generalizes the traditional reduction rule <ref> by the use of a dependency relation instead of the linear ordering of variables (≤_Q) in the prefix of the PCNF ψ. This way, it might be possible to reduce literals by rule <ref> which cannot be reduced by rule <ref>. The soundness of QRES with rule <ref> has been proved for a dependency relation that is even more general (and thus allows for additional reductions) than the one implemented in DepQBF <cit.>. The generalized axioms <ref> and <ref> of QRES implemented in DepQBF are naturally compatible with rule <ref>. Additionally, dependency schemes enable a relaxed variant of QCDCL assignments (Definition <ref>) based on the respective dependency relation rather than the prefix ordering of a PCNF ψ.

Long-Distance Resolution. The Q-resolution rule <ref> <cit.> explicitly disallows generating clauses (cubes) that are tautological (contradictory). This restriction is relaxed under certain side conditions in long-distance (LD) Q-resolution <cit.>. LDQ-resolution was first implemented in the QCDCL solver of <cit.> and was incorporated in version 3.0 of DepQBF. Compared to QRES with traditional Q-resolution (rule <ref>) <cit.>, QRES with LDQ-resolution is exponentially more powerful in terms of proof sizes <cit.>. The generalized axioms <ref> and <ref> implemented in DepQBF are not only compatible with the LDQ-resolution rule, but with any variant of Q-resolution (cf. <cit.>). Recently, the soundness of the combination of LDQ-resolution of clauses and dependency schemes in QCDCL has been proved <cit.>, leaving the soundness of cube resolutions as an open problem. Therefore, the combination of LDQ-resolution and dependency schemes is not supported in DepQBF.

Incremental Solving. Since version 3.0, DepQBF has been equipped with an API in C and Java for incremental solving of sequences S := ⟨ψ_0, …, ψ_n⟩ of syntactically related PCNFs ψ_i <cit.>. Incremental solving aims at reusing the clauses and cubes that were learned when solving PCNF ψ_i when it comes to solving the PCNFs ψ_j with i < j. The API of DepQBF allows modifying the PCNFs in S by manipulating the quantifier prefix and adding or removing sets of clauses in a stack-based way. Since version 4.0, it is possible to add or remove sets of clauses arbitrarily <cit.> and to extract unsatisfiable cores, i.e., unsatisfiable subformulas of the PCNF ψ_i. At any time when solving ψ_i ∈ S, the soundness property of QCDCL (Section <ref>) that ψ ≡_sat ψ_θ and ψ ≡_sat ψ_γ, where ψ = ψ_i, must hold. To guarantee that property when using the generalized axioms for incremental solving, DepQBF currently only applies the generalized cube axiom <ref>, with dynamic QBCE used to check the satisfiability of ψ_θ' in conflict/solution detection (Fig. <ref>). Although this configuration restricts the power of the generalized axioms, it has improved incremental solving in the context of QBF-based conformant planning <cit.>. As it is unclear how to use dependency schemes effectively in incremental solving, their application is disabled when solving incrementally.

Generation of Proofs and Certificates. QCDCL solvers can produce clause (cube) resolution proofs of the unsatisfiability (satisfiability) of PCNFs as a byproduct of clause (cube) learning. Since version 1.0 <cit.>, DepQBF is capable of producing proofs, provided that dependency schemes (rule <ref>) are not employed.
Given a proof P of a PCNF ψ, a certificate of ψ can be extracted from P by inspecting the reduction steps by rule <ref> in P <cit.>. A certificate of an unsatisfiable (satisfiable) PCNF ψ is given by a set of Herbrand (Skolem) functions which represent the universal (existential) variables in ψ. Applications of the generalized axioms in QCDCL in general impose considerable restrictions on the certificate extraction process. The workflow <cit.> to extract a certificate from P was originally presented for traditional QRES proofs. If proof P contains clauses (cubes) derived by rule <ref> (rule <ref>), then P may lack information needed to extract correct certificates. As a result, DepQBF does not support cube resolution proof generation combined with the generalized cube axiom <ref>. However, it supports clause resolution proof generation with the generalized clause axiom <ref>, provided that only propositional abstractions and SAT solving are used for satisfiability checking in the side condition of this axiom.

Advanced Generation of Learned Clauses and Cubes. The derivation of a single asserting clause (cube) starting from a falsified clause (satisfied cube), as implemented in traditional QCDCL <cit.>, has an exponential worst case <cit.>. Since version 2.0, DepQBF comes with an approach that avoids this exponential case <cit.> by a revised selection of clauses (cubes) to be resolved in learning. This advanced approach is compatible with all the techniques presented above.

§ EXPERIMENTS

We compare variants of DepQBF 6.02, the latest follow-up release of version 6.0, to top performing solvers of QBFEVAL'16 <cit.>. As benchmarks, we consider all 825 instances from the PCNF track, both in original form (Table <ref>) and preprocessed by Bloqqer, version 37 (Table <ref>). We take preprocessing into account as it might have a positive impact on certain solvers while a negative one on others (cf. <cit.>). Experiments were run on an AMD Opteron 6238 processor (2.6 GHz) under 64-bit Ubuntu Linux 12.04 with time and memory limits of 1800 seconds and seven GB. Exceeding the memory limit is counted as a time out.[We refer to the appendix of this paper with additional experimental results.]

To assess the impact of the generalized axioms <ref> and <ref> on the performance, we consider a variant using both <ref> and <ref>, a variant without <ref>, a variant without <ref>, and a variant using no generalized axioms at all, as named in the tables. On original instances (Table <ref>), the variant using both generalized axioms outperforms the variants with restricted or without generalized axioms. The variant without axiom <ref> outperforms the variant without axiom <ref>. We attribute this effect to the use of dynamic QBCE (among other techniques) for applications of the cube axiom <ref> in DepQBF. Compared to the variant with both generalized axioms, disabling only dynamic QBCE severely impacts performance. On preprocessed instances (Table <ref>), we make similar observations regarding the impact of the generalized axioms as in Table <ref>. However, the variant without the clause axiom <ref> is on par with the variant using both axioms. Preprocessing may blur the structure of an instance. We conjecture that this blurring hinders the success of the QBF decision procedures in DepQBF, on which applications of the generalized axioms are based. In general, the performance difference between the variants of DepQBF is smaller than on original instances. The rankings of the solvers of <cit.>, <cit.>, and <cit.> are improved substantially by preprocessing, whereas those of <cit.> and <cit.> become worse. The best DepQBF variant in Table <ref> ranks fourth behind three other solvers, among them that of <cit.>.
However, the lag to the solver ranked third is 19 instances, compared to 120 instances for the best DepQBF variant in Table <ref>, which also ranks fourth. To analyze the effects of preprocessing in more detail, we filtered the 825 PCNFs from QBFEVAL'16 by discarding 354 PCNFs that are already solved by Bloqqer and 69 PCNFs where Bloqqer eliminated all universally quantified variables, resulting in a set of 402 PCNFs. Further, we considered the 402 PCNFs in their original form and preprocessed by Bloqqer, and partitioned them into subsets containing PCNFs with at most two and with three or more quantifier alternations. Such partitioning is motivated by a related experimental study <cit.> where a large diversity of solver performance was observed on instance classes defined by alternations. Tables <ref> and <ref> show solver performance on these subsets without and with preprocessing, respectively. Notably, variants of DepQBF outperform the other solvers on the subsets with three or more alternations, both without and with preprocessing (Tables <ref> and <ref>).

All variants of DepQBF reported above apply dependency-aware reduction by rule <ref>. A further variant is the same as the one with both generalized axioms but uses the traditional reduction rule <ref> based on the linear quantifier ordering of PCNFs. The dependency-aware variant outperforms this one in all tables except Table <ref>, where the two are on par, which illustrates the benefits of dependency schemes in QCDCL. Another variant differs from the one with both generalized axioms in the use of LDQ-resolution in learning instead of traditional Q-resolution by rule <ref>. The results with LDQ-resolution are mixed, despite it being a stronger proof system than Q-resolution. The LDQ-resolution variant outperforms its Q-resolution counterpart in all tables except Tables <ref> and <ref>, i.e., on instances with at most two quantifier alternations.

§ CONCLUSION

We presented the latest major release, version 6.0, of the QCDCL solver DepQBF. DepQBF implements a variant of QCDCL that is based on a generalization of the Q-resolution calculus (QRES). The generalization is achieved by equipping QRES with generalized clause and cube axioms to be used in clause and cube learning <cit.>. The generalized axioms provide an extensible framework of interfaces for the integration of arbitrary QBF proof systems in DepQBF, and hence in QCDCL. The integration of proof systems orthogonal to Q-resolution, such as variable expansion, enables QCDCL to potentially produce proofs that are exponentially shorter than proofs produced by traditional QCDCL. This way, the state of the art of QCDCL solving can be further advanced. A related open problem is the inability of plain QCDCL to exploit the full power of Q-resolution <cit.>. The workflow of QCDCL with generalized axioms is not tailored towards DepQBF but can be implemented in any QCDCL solver. Furthermore, it is compatible with dependency schemes <cit.> and any Q-resolution variant <cit.>, which offers potential for further improvements.

Experiments with variants of DepQBF showed considerable performance gains due to the application of generalized axioms. However, frequent applications are hindered by computationally expensive QBF satisfiability checks in the side conditions of the axioms. To limit the checking overhead, axiom applications must be carefully scheduled. In this respect, there is room for improvements in fine-tuning DepQBF.
Further, it may be beneficial to integrate the QBF decision procedures that are applied to satisfiability checking more tightly into the QCDCL workflow, as is done with dynamic blocked clause elimination (QBCE) <cit.>.

References

Ayari, A., Basin, D.A.: QUBOS: Deciding Quantified Boolean Logic Using Propositional Satisfiability Solvers. In: FMCAD. LNCS, vol. 2517, pp. 187–201. Springer (2002)
Balabanov, V., Jiang, J.R.: Unified QBF Certification and its Applications. Formal Methods in System Design 41(1), 45–65 (2012)
Balabanov, V., Widl, M., Jiang, J.R.: QBF Resolution Systems and Their Proof Complexities. In: SAT. LNCS, vol. 8561, pp. 154–169. Springer (2014)
Beyersdorff, O., Blinkhorn, J.: Dependency Schemes in QBF Calculi: Semantics and Soundness. In: CP. LNCS, vol. 9892, pp. 96–112. Springer (2016)
Beyersdorff, O., Chew, L., Janota, M.: Proof Complexity of Resolution-based QBF Calculi. In: STACS. LIPIcs, vol. 30, pp. 76–89. Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik (2015)
Biere, A.: Resolve and Expand. In: SAT. LNCS, vol. 3542, pp. 59–70. Springer (2004)
Bogaerts, B., Janhunen, T., Tasharrofi, S.: SAT-to-SAT in QBFEval 2016. In: QBF Workshop. CEUR Workshop Proceedings, vol. 1719, pp. 63–70. CEUR-WS.org (2016)
Bubeck, U., Kleine Büning, H.: Bounded Universal Expansion for Preprocessing QBF. In: SAT. LNCS, vol. 4501, pp. 244–257. Springer (2007)
Cadoli, M., Giovanardi, A., Schaerf, M.: An Algorithm to Evaluate Quantified Boolean Formulae. In: AAAI. pp. 262–267. AAAI Press / The MIT Press (1998)
Clarke, E.M., Grumberg, O., Jha, S., Lu, Y., Veith, H.: Counterexample-Guided Abstraction Refinement for Symbolic Model Checking. J. ACM 50(5), 752–794 (2003)
Davis, M., Logemann, G., Loveland, D.W.: A Machine Program for Theorem-Proving. Commun. ACM 5(7), 394–397 (1962)
Egly, U., Kronegger, M., Lonsing, F., Pfandler, A.: Conformant Planning as a Case Study of Incremental QBF Solving. Ann. Math. Artif. Intell. 80(1), 21–45 (2017)
Egly, U., Lonsing, F., Widl, M.: Long-Distance Resolution: Proof Generation and Strategy Extraction in Search-Based QBF Solving. In: LPAR. LNCS, vol. 8312, pp. 291–308. Springer (2013)
Giunchiglia, E., Narizzano, M., Tacchella, A.: Clause/Term Resolution and Learning in the Evaluation of Quantified Boolean Formulas. JAIR 26, 371–416 (2006)
Heule, M., Järvisalo, M., Lonsing, F., Seidl, M., Biere, A.: Clause Elimination for SAT and QSAT. JAIR 53, 127–168 (2015)
Janota, M.: On Q-Resolution and CDCL QBF Solving. In: SAT. LNCS, vol. 9710, pp. 402–418. Springer (2016)
Janota, M., Klieber, W., Marques-Silva, J., Clarke, E.: Solving QBF with Counterexample Guided Refinement. Artif. Intell. 234, 1–25 (2016)
Janota, M., Marques-Silva, J.: Expansion-Based QBF Solving versus Q-Resolution. Theor. Comput. Sci. 577, 25–42 (2015)
Janota, M., Marques-Silva, J.: Solving QBF by Clause Selection. In: IJCAI. pp. 325–331. AAAI Press (2015)
Kleine Büning, H., Bubeck, U.: Theory of Quantified Boolean Formulas. In: Handbook of Satisfiability, FAIA, vol. 185, pp. 735–760.
IOS Press (2009)
Kleine Büning, H., Karpinski, M., Flögel, A.: Resolution for Quantified Boolean Formulas. Inf. Comput. 117(1), 12–18 (1995)
Klieber, W., Sapra, S., Gao, S., Clarke, E.M.: A Non-Prenex, Non-Clausal QBF Solver with Game-State Learning. In: SAT. LNCS, vol. 6175, pp. 128–142. Springer (2010)
Kullmann, O.: On a Generalization of Extended Resolution. Discrete Applied Mathematics 96-97, 149–176 (1999)
Letz, R.: Lemma and Model Caching in Decision Procedures for Quantified Boolean Formulas. In: TABLEAUX. LNCS, vol. 2381, pp. 160–175. Springer (2002)
Lonsing, F., Bacchus, F., Biere, A., Egly, U., Seidl, M.: Enhancing Search-Based QBF Solving by Dynamic Blocked Clause Elimination. In: LPAR. LNCS, vol. 9450, pp. 418–433. Springer (2015)
Lonsing, F., Biere, A.: DepQBF: A Dependency-Aware QBF Solver. JSAT 7(2-3), 71–76 (2010)
Lonsing, F., Biere, A.: Integrating Dependency Schemes in Search-Based QBF Solvers. In: SAT. LNCS, vol. 6175, pp. 158–171. Springer (2010)
Lonsing, F., Egly, U.: Incremental QBF Solving. In: CP. LNCS, vol. 8656, pp. 514–530. Springer (2014)
Lonsing, F., Egly, U.: Incrementally Computing Minimal Unsatisfiable Cores of QBFs via a Clause Group Solver API. In: SAT. LNCS, vol. 9340, pp. 191–198. Springer (2015)
Lonsing, F., Egly, U.: Evaluating QBF Solvers: Quantifier Alternations Matter. CoRR abs/1701.06612 (2017), <http://arxiv.org/abs/1701.06612>, technical report
Lonsing, F., Egly, U., Seidl, M.: Q-Resolution with Generalized Axioms. In: SAT. LNCS, vol. 9710, pp. 435–452. Springer (2016)
Lonsing, F., Egly, U., Van Gelder, A.: Efficient Clause Learning for Quantified Boolean Formulas via QBF Pseudo Unit Propagation. In: SAT. LNCS, vol. 7962, pp. 100–115. Springer (2013)
Lonsing, F., Seidl, M., Van Gelder, A.: The QBF Gallery: Behind the Scenes. Artif. Intell. 237, 92–114 (2016)
Marin, P., Miller, C., Lewis, M.D.T., Becker, B.: Verification of Partial Designs Using Incremental QBF Solving. In: DATE. pp. 623–628. IEEE (2012)
Marin, P., Narizzano, M., Pulina, L., Tacchella, A., Giunchiglia, E.: Twelve Years of QBF Evaluations: QSAT Is PSPACE-Hard and It Shows. Fundam. Inform. 149(1-2), 133–158 (2016)
Niemetz, A., Preiner, M., Lonsing, F., Seidl, M., Biere, A.: Resolution-Based Certificate Extraction for QBF - (Tool Presentation). In: SAT. LNCS, vol. 7317, pp. 430–435. Springer (2012)
Peitl, T., Slivovsky, F., Szeider, S.: Long Distance Q-Resolution with Dependency Schemes. In: SAT. LNCS, vol. 9710, pp. 500–518. Springer (2016)
Pulina, L.: The Ninth QBF Solvers Evaluation - Preliminary Report. In: Proceedings of the 4th International Workshop on Quantified Boolean Formulas QBF 2016. CEUR Workshop Proceedings, vol. 1719, pp. 1–13. CEUR-WS.org (2016)
Rabe, M.N., Tentrup, L.: CAQE: A Certifying QBF Solver. In: FMCAD. pp. 136–143. IEEE (2015)
Robinson, J.A.: A Machine-Oriented Logic Based on the Resolution Principle. J. ACM 12(1), 23–41 (1965)
Samer, M., Szeider, S.: Backdoor Sets of Quantified Boolean Formulas.
JAR 42(1), 77–97 (2009)
Scholl, C., Pigorsch, F.: The QBF Solver AIGSolve. In: QBF Workshop. CEUR Workshop Proceedings, vol. 1719, pp. 55–62. CEUR-WS.org (2016)
Silva, J.P.M., Lynce, I., Malik, S.: Conflict-Driven Clause Learning SAT Solvers. In: Handbook of Satisfiability, FAIA, vol. 185, pp. 131–153. IOS Press (2009)
Slivovsky, F., Szeider, S.: Computing Resolution-Path Dependencies in Linear Time. In: SAT. LNCS, vol. 7317, pp. 58–71. Springer (2012)
Slivovsky, F., Szeider, S.: Soundness of Q-Resolution with Dependency Schemes. Theor. Comput. Sci. 612, 83–101 (2016)
Van Gelder, A.: Variable Independence and Resolution Paths for Quantified Boolean Formulas. In: CP. LNCS, vol. 6876, pp. 789–803. Springer (2011)
Van Gelder, A.: Contributions to the Theory of Practical Quantified Boolean Formula Solving. In: CP. LNCS, vol. 7514, pp. 647–663. Springer (2012)
Zhang, L., Malik, S.: Conflict Driven Learning in a Quantified Boolean Satisfiability Solver. In: ICCAD. pp. 442–449. ACM / IEEE Computer Society (2002)
Zhang, L., Malik, S.: Towards a Symmetric Treatment of Satisfaction and Conflicts in Quantified Boolean Formula Evaluation. In: CP. LNCS, vol. 2470, pp. 200–215. Springer (2002)

§ ADDITIONAL EXPERIMENTAL DATA
Exact Random Coding Exponents and Universal Decoders for the Asymmetric Broadcast Channel

Ran Averbuch and Neri Merhav

The Andrew & Erna Viterbi Faculty of Electrical Engineering, Technion - Israel Institute of Technology, Technion City, Haifa 3200004, ISRAEL, {rans@campus, merhav@ee}.technion.ac.il

This work contains two main contributions concerning the asymmetric broadcast channel. The first is an analysis of the exact random coding error exponents for both users, and the second is the derivation of universal decoders for both users. These universal decoders are certain variants of the maximum mutual information (MMI) universal decoder, which achieve the corresponding random coding exponents of optimal decoding. In addition, we introduce some lower bounds, which involve optimization over very few parameters, unlike the original, exact exponents, which involve minimizations over auxiliary probability distributions. Numerical results for the binary symmetric broadcast channel show improvements over previously derived error exponents for the same model.

Index Terms: Error exponent, asymmetric broadcast channel, universal decoding, MMI.

§ INTRODUCTION

One of the most elementary system configuration models in multi-user information theory is the broadcast channel (BC). It has been introduced in the early seventies of the twentieth century by Cover <cit.>, and since then, a vast number of papers and books, studying different topics of the broadcast problem, have been published. Generally speaking, the BC is a communication model where a single transmitter wishes to communicate different messages to two or more receivers. The various messages may be private (i.e., aimed at one receiver only) or common (i.e., aimed at two or more receivers). Although the characterization of the capacity region of the general BC is still an open problem, some special cases have been solved, most notably, the degraded BC (DBC), first presented in <cit.>. The capacity region of the DBC, conjectured by Cover, was first proved to be achievable by Bergmans <cit.>, and the converse was established by Bergmans <cit.> and Gallager <cit.>. Another special case, which is somewhat more general than the DBC and which was first introduced and solved by Körner and Marton <cit.>, is the broadcast channel with degraded message sets, also known as the asymmetric broadcast channel (ABC). The direct part of their coding theorem relies on Bergmans' scheme, which suggested the use of a hierarchical random code: First generate “cloud centers", which designate messages intended to both the receiver with the relatively high channel quality, henceforth referred to as the strong user, and the receiver with the relatively low channel quality, henceforth referred to as the weak user. Then, in the second step, “around" each cloud center, generate a codeword for each message that is intended to the strong user only. The transmitter sends a codeword pertaining to one of the clouds. The strong decoder fully decodes both the common message and his private message, whereas the weak decoder decodes the common message only. Other channels in which one receiver is superior to another and channels with nested information were studied by Csiszár and Körner <cit.> and by El Gamal <cit.>, to name a few.
Multi-user information theory is, first and foremost, driven by the quest to characterize capacity regions, i.e., the region of all sets of rates that allow reliable communication (a.k.a. achievable rates). A somewhat sharper performance metric concerns the exponential decay rate (the error exponent) of the probability of error for each user, as a function of the coding rates within the interior of the capacity region. On top of that, an interesting question concerns the trade-off between the error exponent of the strong user and that of the weak user, or equivalently, the achievable region in the plane of error exponents for a given set of coding rates. While the capacity regions of the DBC and the ABC have been known for many years, little has been known about their reliability functions. Earlier works on error exponents for the general DBC and ABC include those of Gallager <cit.> and Körner and Sgarro <cit.>, respectively. In both works, the coding scheme of <cit.> was adopted, but the decoder was sub-optimal. More recently, Kaspi and Merhav <cit.> have derived some tighter lower bounds to the reliability functions of both users by analyzing random coding error exponents of their optimal decoders. While their derivation was exponentially tight at most of the steps, there were still some steps in <cit.> where exponential tightness might have been compromised. Moreover, Kaspi and Merhav have analyzed ensembles of i.i.d. codes, which are not as good as ensembles of fixed composition codes <cit.>. These two points give rise to the thought that there is room for improvement upon the results of <cit.>, and indeed, such an improvement is one of the contributions of this work. In fact, the exponential error bounds derived in this paper, both for the strong user and the weak one, are tight in the sense that they provide the exact random coding exponents for the ensemble of fixed composition codes. Moreover, the resulting expressions are much simpler and easier to calculate than those of the best exponential bounds of Kaspi and Merhav (see, in particular, the second part of <cit.>). Interestingly, one of the ingredients that contributes significantly to this simplification in the error exponent expressions is the derivation of universal decoders for both users, and this simplification is achieved thanks to a simple sandwich argument, asserting that a lower bound to the error exponent of the universal decoder cannot be larger than an upper bound to the error exponent of the optimal decoder, but on the other hand, the latter turns out to be mathematically smaller than or equal to the former; by contrasting the two exponential error bounds, which must therefore be equivalent, the expressions are considerably simplified. In other words, beyond this simplification of the error exponent bounds, there is an additional bonus, which is obtaining universal decoders for both users. These decoders achieve the same random coding error exponents as the corresponding optimal decoders of the two users. Both universal decoders are certain variants of the maximum mutual information (MMI) decoder <cit.>, but they are different from the earlier proposed MMI-like universal decoders for the ABC, due to Körner and Sgarro <cit.>. For one thing, our universal decoder for the weak user depends explicitly on the entire code, unlike the one in <cit.>, which depends on the cloud centers only. Since we rely heavily on the method of types, our exponential error bounds have the flavor of those of Csiszár and Körner <cit.>.
While exponentially tight, their shortcoming is that they are not easy to calculate, since they involve minimizations over auxiliary channels, and these might be computationally painful, especially for large alphabets. To alleviate this difficulty, we also propose Gallager-style bounds <cit.>, which require optimizations over very few (one or two) parameters, but the caveat is that exponential tightness might be sacrificed. Moreover, the Gallager-style bounds lend themselves to better intuitive understanding of the behavior of the error exponents for both of the users. Specifically, we derive a phase diagram for the weak user, which fully describes the functional behavior of the bound in different regions of the plane of rates. We also demonstrate our results numerically for an example of the binary symmetric BC, and compare our results to those in earlier works, showing explicitly the improvement.

The remaining part of the paper is organized as follows. In Section 2, we establish notation conventions, formalize the model and the problem, and finally, review some preliminaries. In Section 3, we summarize the main theoretical results of this paper, and give some numerical results for the binary symmetric BC. Section 4 provides the proofs concerning the strong user in the ABC (the exact random coding error exponent and the universal decoder), and Section 5 contains a similar treatment for the weak user. In Section 6, we derive lower bounds on the exact random coding error exponents, and in Section 7 we study them.

§ NOTATION CONVENTIONS AND PROBLEM FORMULATION

§.§ Notation Conventions

Throughout the paper, random variables will be denoted by capital letters, specific values they may take will be denoted by the corresponding lower case letters, and their alphabets will be denoted by calligraphic letters. Random vectors and their realizations will be denoted, respectively, by capital letters and the corresponding lower case letters, both in the bold face font. Their alphabets will be superscripted by their dimensions. For example, the random vector 𝐗 = (X_1, …, X_n) (n a positive integer) may take a specific vector value 𝐱 = (x_1, …, x_n) in 𝒳^n, the n-th order Cartesian power of 𝒳, which is the alphabet of each component of this vector. Sources and channels will be subscripted by the names of the relevant random variables/vectors and their conditionings, whenever applicable, following the standard notation conventions, e.g., Q_X, Q_Y|X, and so on. When there is no room for ambiguity, these subscripts will be omitted. For a generic joint distribution Q_XY = {Q_XY(x,y), x ∈ 𝒳, y ∈ 𝒴}, which will often be abbreviated by Q, information measures will be denoted in the conventional manner, but with a subscript Q, that is, H_Q(X) is the marginal entropy of X, H_Q(X|Y) is the conditional entropy of X given Y, I_Q(X;Y) = H_Q(X) − H_Q(X|Y) is the mutual information between X and Y, and so on. The weighted divergence between two conditional distributions (channels), say, Q_Z|X and W = {W(z|x), x ∈ 𝒳, z ∈ 𝒵}, with weighting Q_X is defined as

D(Q_Z|X || W | Q_X) = ∑_x ∈ 𝒳 Q_X(x) ∑_z ∈ 𝒵 Q_Z|X(z|x) log [Q_Z|X(z|x) / W(z|x)],

where logarithms, here and throughout the sequel, are taken to the natural base. The probability of an event ℰ will be denoted by Pr{ℰ}, and the expectation operator with respect to (w.r.t.) a probability distribution P will be denoted by 𝔼{·}. For two positive sequences a_n and b_n, the notation a_n ≐ b_n will stand for equality in the exponential scale, that is, lim_n → ∞ (1/n) log(a_n/b_n) = 0.
The indicator function of an event ℰ will be denoted by ℐ{ℰ}. The notation [x]_+ will stand for max{0, x}.

The empirical distribution of a sequence 𝐱 ∈ 𝒳^n, which will be denoted by P̂_𝐱, is the vector of relative frequencies, P̂_𝐱(x), of each symbol x ∈ 𝒳 in 𝐱. The type class of 𝐱 ∈ 𝒳^n, denoted 𝒯(𝐱), is the set of all vectors 𝐱' with P̂_𝐱' = P̂_𝐱. When we wish to emphasize the dependence of the type class on the empirical distribution P̂, we will denote it by 𝒯(P̂). Information measures associated with empirical distributions will be denoted with 'hats' and will be subscripted by the sequences from which they are induced. For example, the entropy associated with P̂_𝐱, which is the empirical entropy of 𝐱, will be denoted by Ĥ_𝐱(X). Similar conventions will apply to the joint empirical distribution, the joint type class, the conditional empirical distributions and the conditional type classes associated with pairs (and multiples) of sequences of length n. Accordingly, P̂_𝐱𝐲 would be the joint empirical distribution of (𝐱, 𝐲) = {(x_i, y_i)}_i=1^n, 𝒯(𝐱, 𝐲) or 𝒯(P̂_𝐱𝐲) will denote the joint type class of (𝐱, 𝐲), 𝒯(𝐱|𝐲) will stand for the conditional type class of 𝐱 given 𝐲, Ĥ_𝐱𝐲(X,Y) will designate the empirical joint entropy of 𝐱 and 𝐲, Ĥ_𝐱𝐲(X|Y) will be the empirical conditional entropy, Î_𝐱𝐲(X;Y) will denote the empirical mutual information, and so on. When we wish to emphasize the dependence of 𝒯(𝐱|𝐲) upon 𝐲 and the relevant empirical conditional distribution, Q_X|Y = P̂_𝐱|𝐲, we denote it by 𝒯(Q_X|Y|𝐲). Similar conventions will apply to triples of sequences, say, {(𝐱, 𝐲, 𝐳)}, etc. Likewise, when we wish to emphasize the dependence of empirical information measures upon a given empirical distribution given by Q, we denote them using the subscript Q, as described above.

§.§ Problem Formulation

We consider a memoryless ABC with a finite input alphabet 𝒳 and finite output alphabets 𝒴 and 𝒵. Let W_1 = {W_1(y|x), x ∈ 𝒳, y ∈ 𝒴} and W_2 = {W_2(z|x), x ∈ 𝒳, z ∈ 𝒵} denote the single-letter input-output transition probability matrices, associated with the strong user and the weak user, respectively. When these channels are fed by an input vector 𝐱 ∈ 𝒳^n, they produce the corresponding output vectors 𝐲 ∈ 𝒴^n and 𝐳 ∈ 𝒵^n, according to

W_1(𝐲|𝐱) = ∏_t=1^n W_1(y_t|x_t),  W_2(𝐳|𝐱) = ∏_t=1^n W_2(z_t|x_t).

We are interested in sending one out of M_y M_z messages to the strong user, that observes 𝐲, and one out of M_z messages to the weak user, that observes 𝐳. Specifically, consider the following mechanism of random selection of a hierarchical code for the ABC. Let 𝒰 be a finite alphabet, let P_U be a given probability distribution on 𝒰, and let P_X|U be a given matrix of conditional probabilities of X given U. We first select, independently at random, M_z = e^nR_z n-vectors (“cloud centers”), 𝐮_0, 𝐮_1, …, 𝐮_M_z-1, all under the uniform distribution over the type class 𝒯(P_U). Next, for each i = 0, 1, …, M_z − 1, we select conditionally independently (given 𝐮_i), M_y = e^nR_y codewords, 𝐱_i,0, 𝐱_i,1, …, 𝐱_i,(M_y-1), under the uniform distribution across the conditional type class 𝒯(P_X|U|𝐮_i). We denote the sub-code 𝒞_i = {𝐱_i,0, 𝐱_i,1, …, 𝐱_i,(M_y-1)}. Once selected, the entire codebook 𝒞 = ∪_i=0^M_z-1 𝒞_i, together with the collection of all cloud centers, {𝐮_0, 𝐮_1, …, 𝐮_M_z-1}, are revealed to the encoder and to both decoders. The optimal decoder for the strong user is given by

[î(𝐲), ĵ(𝐲)] = arg max_0 ≤ i ≤ M_z-1, 0 ≤ j ≤ M_y-1 W_1(𝐲|𝐱_i,j),

while the optimal decoder for the weak user (the bin index decoder) is given by

ĩ(𝐳) = arg max_0 ≤ i ≤ M_z-1 W_2(𝐳|𝒞_i),

where

W_2(𝐳|𝒞_i) ≜ (1/M_y) ∑_𝐱 ∈ 𝒞_i W_2(𝐳|𝐱) = (1/M_y) ∑_j=0^M_y-1 W_2(𝐳|𝐱_i,j).
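The following Python sketch illustrates the hierarchical constant-composition ensemble described above: cloud centers drawn uniformly from 𝒯(P_U) and satellites drawn uniformly from 𝒯(P_X|U|𝐮_i). Sampling a type class uniformly amounts to randomly permuting a fixed-composition sequence; the integer rounding of compositions is our own simplification for small n, not part of the paper.

import numpy as np

def composition_seq(probs, m):
    """A fixed sequence of length m with (rounded) composition probs."""
    counts = np.floor(np.asarray(probs) * m).astype(int)
    counts[0] += m - counts.sum()            # absorb rounding slack
    return np.repeat(np.arange(len(probs)), counts)

def hierarchical_codebook(P_U, P_XgU, n, M_z, M_y, rng):
    base_u = composition_seq(P_U, n)
    clouds, code = [], []
    for _ in range(M_z):
        u = rng.permutation(base_u)          # uniform over T(P_U)
        sats = []
        for _ in range(M_y):
            x = np.empty(n, dtype=int)
            for a in range(len(P_U)):        # fill positions where u == a
                pos = np.flatnonzero(u == a)
                x[pos] = rng.permutation(composition_seq(P_XgU[a], len(pos)))
            sats.append(x)                   # uniform over T(P_XgU | u)
        clouds.append(u)
        code.append(sats)
    return clouds, code

rng = np.random.default_rng(1)
P_XgU = np.array([[0.75, 0.25], [0.25, 0.75]])
clouds, code = hierarchical_codebook([0.5, 0.5], P_XgU, 16, 4, 8, rng)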
Let 𝐘 ∈ 𝒴^n and 𝐙 ∈ 𝒵^n be the channel outputs resulting from the transmission of 𝐗_i,j. Define the average error probabilities of decoders (<ref>) and (<ref>) as

P̄_s(R_y, R_z) = 1/(M_y M_z) ∑_i=0^M_z-1 ∑_j=0^M_y-1 Pr{[î(𝐘), ĵ(𝐘)] ≠ (i,j) | 𝐗_i,j},

and

P̄_w(R_y, R_z) = 1/(M_y M_z) ∑_i=0^M_z-1 ∑_j=0^M_y-1 Pr{ĩ(𝐙) ≠ i | 𝐗_i,j},

where in both definitions, Pr{·} designates probabilities associated with the randomness of the codebook, as well as that of the channel outputs given its input. The corresponding random coding error exponents are defined as

E_s(R_y, R_z) = lim_n→∞ [−ln P̄_s(R_y, R_z) / n], and E_w(R_y, R_z) = lim_n→∞ [−ln P̄_w(R_y, R_z) / n],

provided that the limits exist. Our main objective is to obtain single-letter expressions for E_s(R_y, R_z) and E_w(R_y, R_z). As for the universal decoders, consider first the weak user. We wish to find a function F(𝐳, 𝐮_i, 𝒞_i), that is independent of the (unknown) parameters of the channel W_2, such that the following universal decoder for the weak user

ĩ_u(𝐳) = arg max_0 ≤ i ≤ M_z-1 F(𝐳, 𝐮_i, 𝒞_i)

achieves an average error probability whose exponent is E_w(R_y, R_z). By the same token, we wish to find a universal decoder for the strong user, of the form

[î_u(𝐲), ĵ_u(𝐲)] = arg max_0 ≤ i ≤ M_z-1, 0 ≤ j ≤ M_y-1 G(𝐲, 𝐮_i, 𝐱_i,j),

where the function G is independent of W_1, yet the decoder [î_u(𝐲), ĵ_u(𝐲)] achieves E_s(R_y, R_z).

§ MAIN RESULTS

§.§ Exact Random Coding Error Exponents

Let Q_UXY and Q_UXZ denote two generic joint probability distributions of the random vectors (U,X,Y) and (U,X,Z), whose (U,X)-marginals are both identical to P_UX. Define

E_y(Q_UXY, R_y, R_z) = min{[I_Q(U;Y) + [I_Q(X;Y|U) − R_y]_+ − R_z]_+, [I_Q(X;Y|U) − R_y]_+},

and

E_z(Q_UXZ, R_y, R_z) = [I_Q(U;Z) + [I_Q(X;Z|U) − R_y]_+ − R_z]_+.

Our first main result is the following.

Theorem 1. Under the assumptions of Section 2, the limits (<ref>) and (<ref>) exist and are given by the following single-letter expressions:

E_s(R_y, R_z) = min_Q_Y|UX {D(Q_Y|UX || W_Y|X | P_UX) + E_y(Q_UXY, R_y, R_z)},
E_w(R_y, R_z) = min_Q_Z|UX {D(Q_Z|UX || W_Z|X | P_UX) + E_z(Q_UXZ, R_y, R_z)}.

We prove the result concerning the strong user in Section 4 and the result concerning the weak user in Section 5. Notice that both error exponents depend on both coding rates, in contrast to the error exponents given in the previous works <cit.> and <cit.>. Several remarks are now in order.

∙ An immediate byproduct of Theorem 1 is finding the set of rate pairs (R_y, R_z) for which both E_s(R_y, R_z) > 0 and E_w(R_y, R_z) > 0. It is not difficult to show that this set is given by:

ℛ = {(R_y, R_z) | R_y < I(X;Y|U), R_y + R_z < I(X;Y), R_z < I(U;Z)},

evaluated with the distribution P_UX × W_Y|X × W_Z|X. The convex hull of the closure of the union over all code distributions {P_UX} gives the capacity region. We may also consider an individual attainable region for each user, i.e., the set of rate pairs for which the probability of error vanishes for one of the users, but without taking into account the other user. Later on, individual attainable regions will become relevant when we consider the phase diagrams. It is not difficult to show that the attainable region for the weak user, to be denoted by ℛ_w, is given by

ℛ_w = {(R_y, R_z) | R_y + R_z < I(X;Z)} ∪ {(R_y, R_z) | R_z < I(U;Z)},

evaluated with the distribution P_UX × W_Z|X, while the attainable region for the strong user, to be denoted by ℛ_s, is given by

ℛ_s = {(R_y, R_z) | R_y + R_z < I(X;Y)} ∩ {(R_y, R_z) | R_y < I(X;Y|U)},

evaluated with the distribution P_UX × W_Y|X. Notice that the attainable region of the weak user is not bounded, i.e., reliable bin index decoding may still be guaranteed for any satellite rate R_y, as long as R_z < I(U;Z).
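As an illustration of the inner exponent expressions of Theorem 1, the following Python sketch evaluates E_y(Q, R_y, R_z) and E_z(Q, R_y, R_z) for a given joint distribution Q (a 3-dimensional array Q[u, x, y] or Q[u, x, z]); rates are in nats. The outer minimization over the auxiliary channel is not shown; a grid or numerical search over Q_Y|UX with the (U,X)-marginal fixed at P_UX would be layered on top. This is our own illustrative code, not taken from the paper.

import numpy as np

def _H(p):
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return -np.sum(p * np.log(p))            # entropy in nats

def inner_exponents(Q, R_y, R_z):
    """E_y and E_z of Theorem 1 for a joint pmf Q[u, x, y] (or Q[u, x, z])."""
    Q_u = Q.sum(axis=(1, 2))
    Q_uy = Q.sum(axis=1)
    Q_ux = Q.sum(axis=2)
    I_UY = _H(Q.sum(axis=(0, 1))) + _H(Q_u) - _H(Q_uy)        # I_Q(U;Y)
    I_XY_U = _H(Q_uy) + _H(Q_ux) - _H(Q_u) - _H(Q)            # I_Q(X;Y|U)
    a = max(I_XY_U - R_y, 0.0)
    E_y = min(max(I_UY + a - R_z, 0.0), a)
    E_z = max(I_UY + a - R_z, 0.0)
    return E_y, E_z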
∙ The computation of the error exponents involves minimizations over auxiliary channels Q_Y|UX and Q_Z|UX. For large input and output alphabets, we are motivated to look for alternative expressions for the error exponents, whose optimization does not depend on the alphabet sizes, even at the expense of some loss in the exponential tightness. We will discuss such an alternative form in the sequel.

∙ Both error exponents depend on the input distribution. While in the single-user regime, we may maximize the final expression over the input distribution in order to maximize the error exponent, this is no longer the case for the ABC. Even in the simplest case of a binary symmetric BC, we see that the best code for the strong user is the worst one for the weak user, and vice versa. To see why this is true, let P_U = (1/2, 1/2) and let P_X|U be a BSC with a crossover probability 1/2. In this case, the hierarchy of the codebook degenerates, i.e., the codebook has a constant composition, which is best for the strong user. In the other extreme, P_X|U is a BSC with a crossover probability 0. The error probability of the strong user is almost one, but the error exponent of the weak user is the largest and independent of R_y. Hence, the choice of the input distribution trades off between the error exponents of the two users.

∙ As can be seen from the minimum in eq. (<ref>), there are two different kinds of error events for the strong user. Let Q^* denote the minimizer in (<ref>). Now, if for some (R_y, R_z), the inequality

[I_Q^*(U;Y) + [I_Q^*(X;Y|U) − R_y]_+ − R_z]_+ > [I_Q^*(X;Y|U) − R_y]_+

holds, then the dominant error event for the strong user is caused by competing codewords from the true cloud; otherwise, the dominant error event is caused by competitive clouds.

∙ In fact, the cardinality |𝒰| is a free parameter in our problem. As such, we may let |𝒰| → ∞, and it is definitely not obvious that a finite |𝒰| is optimal. This is because we cannot see how to apply the usual cardinality bounding techniques based on the support lemma <cit.>. It must be clear that even if the optimal |𝒰| is finite, it may not be the same as the bound given in the converse theorem of the capacity region of the ABC (|𝒰| ≤ |𝒳| + 2) <cit.>.

§.§ Universal Decoders

As mentioned in the Introduction, universal MMI decoders for both receivers were proposed in <cit.>, where for the weak user, this decoder was defined by:

ĩ_u(𝐳) = arg max_0 ≤ i ≤ M_z-1 Î_𝐮_i𝐳(U;Z).

The error exponent of such a decoder is inferior to the error exponent of the optimal (ML) decoder, because, for one thing, it makes no use of {𝒞_i}, but only of the cloud centers. The universal decoder (<ref>) achieves the following error exponent <cit.>

E_KS(R_z) = min_Q_Z|UX {D(Q_Z|UX || W_Z|X | P_UX) + [I_Q(U;Z) − R_z]_+},

and by comparing it numerically to (<ref>) in the case of the binary symmetric BC (see Subsection 3.4), it is evident that E_w(R_y, R_z) can be strictly higher than E_KS(R_z), due to the additional term in (<ref>). Hence, one may wonder whether a different universal decoder exists, whose error exponent is as large as E_w(R_y, R_z). It turns out that the answer to this question is affirmative, and indeed, this universal decoder relies entirely on 𝐳, the cloud centers {𝐮_i}, and the sub-codes {𝒞_i}. In Section 5, we prove the following theorem.

Theorem 2.
Define the function
F(𝐳, 𝐮_i, 𝒞_i) = max_{0 ≤ j ≤ M_y−1} { Î_𝐮_i𝐳(U;Z) + [ Î_𝐮_i𝐱_ij𝐳(X;Z|U) − R_y ]_+ }.
The universal decoder
ĩ_u(𝐳) = arg max_{0 ≤ i ≤ M_z−1} F(𝐳, 𝐮_i, 𝒞_i)
achieves E^*_z(R_y, R_z).

It turns out that there is also another universal decoder (with the same error exponent), whose structure is much more similar to the ML decoder of (<ref>), in the sense that its metric is based on summation over 𝒞_i, except that here, the unknown likelihood function is replaced by the exponentiated empirical mutual information. In the Appendix we prove the following theorem.

Theorem 3. The universal decoder
ĩ'_u(𝐳) = arg max_{0 ≤ i ≤ M_z−1} { ∑_{j=0}^{M_y−1} e^{n Î_𝐮_i𝐱_ij𝐳(UX;Z)} }
achieves E^*_z(R_y, R_z).

We next proceed to the strong user and present a universal decoder. It turns out that the MMI-like metric of the universal bin index decoder, as given in Theorem 2 (but with 𝐳 replaced by 𝐲), works well also for the strong user. The main difference between them is rooted in the way they use the metric. While the weak user first maximizes it within each cloud, and only then finds the cloud with the maximal value, the strong user maximizes it over both indices simultaneously. More precisely, we claim the following, which is proved in Section 4.

Theorem 4. Define the function
G(𝐲, 𝐮_i, 𝐱_ij) = Î_𝐮_i𝐲(U;Y) + [ Î_𝐮_i𝐱_ij𝐲(X;Y|U) − R_y ]_+.
The universal decoder
[ĩ_u(𝐲), ĵ_u(𝐲)] = arg max_{0 ≤ i ≤ M_z−1, 0 ≤ j ≤ M_y−1} G(𝐲, 𝐮_i, 𝐱_ij)
achieves E^*_y(R_y, R_z).

At this point, it is interesting to compare [ĩ_u(𝐲), ĵ_u(𝐲)] to the universal decoder of the strong user in <cit.>,
[î_MMI(𝐲), ĵ_MMI(𝐲)] = arg max_{0 ≤ i ≤ M_z−1, 0 ≤ j ≤ M_y−1} Î_𝐮_i𝐱_ij𝐲(UX;Y),
whose random coding error exponent is given by <cit.>
E_MMI,y(R_y, R_z) = min_{Q_Y|UX} { D(Q_Y|UX ‖ W_Y|X | P_UX) + E_y,MMI(Q_UXY, R_y, R_z) },
where
E_y,MMI(Q_UXY, R_y, R_z) = min{ [I_Q(UX;Y) − (R_y + R_z)]_+, [I_Q(X;Y|U) − R_y]_+ }.
By the identity I(UX;Y) = I(U;Y) + I(X;Y|U), it is easy to see that E_MMI,y(R_y, R_z) = E^*_y(R_y, R_z), proving that (<ref>) has the same error exponent as that of (<ref>), a fact that was not asserted in <cit.>.

§.§ Gallager-Style Lower Bounds
As mentioned before, the calculations of (<ref>) and (<ref>) involve minimizations over auxiliary channels, which become painful when the input and output alphabets are large. For this reason, we look for other forms of error exponent formulas, where the number of parameters to be optimized does not grow with the alphabet sizes; the price of this might be some loss in the tightness of the bounds, i.e., we obtain lower bounds on the random coding error exponents. Even in the single-user case, the random coding error exponent involves a minimization over an auxiliary channel, where Csiszár and Körner <cit.> show that the exact error exponent is lower bounded by the following expression
E_G(R) = max_{ρ∈[0,1]} { − log ∑_y [ ∑_x P(x) W^{1/(1+ρ)}(y|x) ]^{1+ρ} − ρR },
where the subscript 'G' stands for "Gallager", who was the first to derive and analyze the error exponent in this form <cit.>. It is important to note that for the optimal code distribution, (<ref>) is not only a lower bound, but the exact random coding error exponent <cit.>. It turns out that the exact random coding error exponents of the two users in the ABC can be lower bounded by the same methods as in <cit.>. In Section 6, we prove the following theorem.

Theorem 5.
Define the functions
Φ(u,y,s) = ∑_x P(x|u) [W_1(y|x)]^{1/(1+s)},
Ψ(u,z,s) = ∑_x P(x|u) [W_2(z|x)]^{1/(1+s)}.
The exact random coding error exponent of the strong user is lower bounded by
E^*_y(R_y, R_z) ≥ min{ E_y,1(R_y), E_y,2(R_y, R_z) },
where
E_y,1(R_y) = max_{ρ∈[0,1]} { − ∑_u P(u) log( ∑_y Φ^{1+ρ}(u,y,ρ) ) − ρR_y },
E_y,2(R_y, R_z) = max_{μ∈[0,1]} max_{λ∈[0,μ]} { − log[ ∑_y ( ∑_u P(u) Φ^{(1+λ)/(1+μ)}(u,y,λ) )^{1+μ} ] − λR_y − μR_z }.
In addition, the random coding error exponent of the weak user is lower bounded by
E^*_z(R_y, R_z) ≥ max_{μ∈[0,1]} max_{λ∈[0,μ]} { − log[ ∑_z ( ∑_u P(u) Ψ^{(1+λ)/(1+μ)}(u,z,λ) )^{1+μ} ] − λR_y − μR_z }.

These lower bounds involve maximizations over one or two parameters only, in contrast to the original error exponents, and so, they are much easier to evaluate. In Section 7, we study them and show how they behave in different regions of the plane of rates. In contrast to the single-user case, the lower bounds of both users depend on the code distribution, but we are no longer able to optimize both of them simultaneously, for the reason mentioned above in Subsection 3.1.

§.§ Numerical Results and Phase Diagrams
We next provide some numerical results, comparing our exponents to those of <cit.> and <cit.>. Let W_1 and W_2 be two binary symmetric channels (BSCs) with crossover parameters p_y and p_z, respectively (p_z > p_y). Let 𝒰 be binary as well and let P_U be uniformly distributed over {0,1}. Also, let P_X|U be a BSC with crossover parameter β ∈ [0,1]. The capacity region of our model is given by:
R_z ≤ ln 2 − h(β * p_z),
R_y ≤ h(β * p_y) − h(p_y),
where β * p = β(1−p) + (1−β)p and h(x) is the binary entropy function.

§.§.§ Gallager-Style Lower Bounds
Using Theorem 5, we find that for the strong user,
E^*_y(R_y, R_z) ≥ min{ E_y,1(R_y), E_y,2(R_y, R_z) },
where
E_y,1(R_y) = max_{ρ∈[0,1]} { −log{ [(1−β)(1−p_y)^{1/(1+ρ)} + β·p_y^{1/(1+ρ)}]^{1+ρ} + [(1−β)·p_y^{1/(1+ρ)} + β·(1−p_y)^{1/(1+ρ)}]^{1+ρ} } − ρR_y },
E_y,2(R_y, R_z) = max_{μ∈[0,1]} max_{λ∈[0,μ]} { −ln 2 − (1+μ)·log{ (1/2)·[(1−β)(1−p_y)^{1/(1+λ)} + β·p_y^{1/(1+λ)}]^{(1+λ)/(1+μ)} + (1/2)·[(1−β)·p_y^{1/(1+λ)} + β·(1−p_y)^{1/(1+λ)}]^{(1+λ)/(1+μ)} } − λR_y − μR_z }.
For the weak user,
E^*_z(R_y, R_z) ≥ max_{μ∈[0,1]} max_{λ∈[0,μ]} { −ln 2 − (1+μ)·log{ (1/2)·[(1−β)(1−p_z)^{1/(1+λ)} + β·p_z^{1/(1+λ)}]^{(1+λ)/(1+μ)} + (1/2)·[(1−β)·p_z^{1/(1+λ)} + β·(1−p_z)^{1/(1+λ)}]^{(1+λ)/(1+μ)} } − λR_y − μR_z }.

We present the lower bounds by plotting families of curves, one for each exponent, as a function of one rate, while the other rate is kept fixed. Let us choose the channel probabilities to be p_y = 0.05 and p_z = 0.1, and β = 0.25. In Fig. 1, we plot lower bounds to E^*_y(R_y, R_z) as a function of R_y, as given by (<ref>), where R_z takes five different values. As long as R_z < 0.09, the dominant error event is caused by wrong codewords from the true cloud. In this case, the error exponent is independent of the number of clouds and is given by the dark blue curve. As R_z increases further, we find that above some critical rate, the error exponent begins to depend on the number of clouds, since the dominant error event is due to wrong codewords from competitive clouds. When the rate of the weak user is high, i.e., when the exponential number of clouds exceeds the capacity of the channel to the strong user (R_z > 0.49 ≈ ln 2 − h(0.05)), reliable communication is no longer possible. In Fig. 2, we plot lower bounds to E^*_z(R_y, R_z) as a function of R_z, as given by (<ref>), where R_y takes five different values. At R_y = 0, we should obtain the error exponent of a single user.
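As a sanity check on this reduction, the following minimal sketch (ours, with assumed function names, not from the paper) maximizes the weak user's Gallager-style lower bound over a grid of (λ, μ) for the binary example; by symmetry, Ψ(u,z,λ) depends only on whether z agrees with u, since both P_X|U and W_2 are binary symmetric.

```python
import numpy as np

def weak_user_lower_bound(R_y, R_z, p_z=0.1, beta=0.25, grid=201):
    """Grid-maximize the weak user's Gallager-style bound (in nats)."""
    best = -np.inf
    for mu in np.linspace(0.0, 1.0, grid):
        for lam in np.linspace(0.0, mu, grid):
            a = 1.0 / (1.0 + lam)            # exponent inside Psi
            e = (1.0 + lam) / (1.0 + mu)     # exponent applied to Psi
            psi_same = (1 - beta) * (1 - p_z) ** a + beta * p_z ** a
            psi_diff = (1 - beta) * p_z ** a + beta * (1 - p_z) ** a
            inner = 0.5 * psi_same ** e + 0.5 * psi_diff ** e  # average over u
            val = -np.log(2.0 * inner ** (1.0 + mu)) - lam * R_y - mu * R_z
            best = max(best, val)
    return best

# At R_y = R_z = 0 the maximum is attained at lam = mu = 1 and recovers the
# single-user value -ln(0.8) ~= 0.22314 quoted next.
print(weak_user_lower_bound(0.0, 0.0))
```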
Indeed, the numerical value at zero rates is 0.22314, and the bound at R_y = 0 vanishes at R_z ≅ 0.36 ≈ ln 2 − h(0.1), which is the capacity of the channel to the weak user. For R_y > 0.32, the lower bound becomes independent of R_y and is given by the red curve. In this case, we get a lower bound to the error exponent of the equivalent binary symmetric channel from the cloud center U to the channel output of the weak user Z.

§.§.§ Exact Exponents
As for the exact random coding error exponents, given by Theorem 1, the optimization problems require minimization over the auxiliary channels Q_Y|UX and Q_Z|UX. Let us compare the Gallager-style lower bounds to the exact exponents. In Fig. 3, we see two pairs of curves of the exact exponents and their lower bounds, where R_z = 0.05 and β takes two different values. The exact exponents are strictly better than the Gallager-style exponents. Similar results are obtained for the weak user as well (not shown here). It is important to note that in some regions of the R_y-R_z plane, the lower bounds are equal to the exact random coding error exponents.

§.§.§ Comparison with Previous Works
As far as we know, no other work on universal decoding for the ABC exists, besides that of <cit.>. Although the error exponent of the strong user given there is optimal w.r.t. the ML decoder, this is not the case for the weak user. The universal decoder of <cit.> for the weak user uses only the cloud centers and is independent of R_y, while the new universal decoder of Theorem 2 makes use of the entire codebook, which is the main reason for the resulting improvement. The difference between the error exponents is larger for lower values of R_y. As before, let p_z = 0.1 and β = 0.25. Fig. 4 demonstrates the difference between the error exponents of the two universal decoders in the extreme case of R_y = 0. To the best of our knowledge, the most up-to-date work on exponential lower bounds to the reliability functions of the ABC is <cit.>, where random coding error exponents were derived using two different techniques. Each of those derivations includes at least one step that may not be exponentially tight. Also, in <cit.>, the random codebooks are assumed to be drawn i.i.d. We expect our proposed exact random coding error exponents to improve on <cit.> for two reasons: first, our analysis is exponentially tight, and second, our ensemble is uniform across types. Such random codes are known <cit.> to be better than i.i.d. ensembles. Our comparison here focuses on the error exponent of the weak user only. Again, let p_z = 0.1, β = 0.25 and R_y = 0.4. Fig. 5 compares the two error exponents, and shows that the new exponent is better.

§.§.§ Phase Diagrams
In the single-user case, it is known that the error exponent behaves differently in different ranges of rates, i.e., it is affine at low rates and curvy at high rates. By the same token, for the ABC, the plane of rates can be divided into several different regions, where in each one of them, the error exponents behave differently. This partition of the plane of rate pairs is, of course, more involved than in the single-user case. We refer to it as a phase diagram, a term borrowed from physics.
In order to study the various types of behavior of the lower bound of Theorem 5, let us invoke the following alternative and equivalent lower bound for the random coding error exponent of the weak userE_(R_y, R_z) ≥max_μ∈ [0,1] max_s ∈ [0,1] { -log[ ∑_z(∑_u P(u)Ψ^1+s μ/1+μ(u,z, sμ))^1+μ] -s μ R_y-μ R_z}.Since the maximization region is now the unit square, this form is more convenient to analyze than that of (<ref>).Fig. 6 displays a partition of the plane R_y-R_z to different regions for the Gallager-style lower bound of the weak user, where β=0.1, and p_z = 0.1. Although not shown here, the phase diagrams of the exact exponents behave similarly.The study in Section 7 provides a characterization of the different regionsfrom the viewpoint of the type of dependence of the error exponent upon the rates and the maximizers s^* and μ^* (see Table 1). § UNIVERSAL DECODING FOR THE STRONG USER§.§ Analysis for a General Decoder Let us first derive the exact random coding error exponent for a strong user that uses the following generic decoder[î (), ĵ ()] = *arg max_0 ≤ i ≤ M_z-1, 0 ≤ j ≤ M_y-1 f(Q_U_iX_ijY),where from now on, Q_UXY will designates the joint empirical distribution induced by the three sequences ,and , i.e., Q_UXY = P̂_.The average error probability P̅_e(R_y,R_z,n), associated with (<ref>) is P̅_e(R_y,R_z,n)△=1/M_y M_z∑_i=0^M_z-1∑_j=0^M_y-1 Pr{{⋃_k ≠ j{ f(Q_U_iX_ikY) ≥ f(Q_U_iX_ijY) | 𝐗_ij sent}}⋃{⋃_l ≠ i⋃_k{ f(Q_U_lX_lkY) ≥ f(Q_U_iX_ijY) | 𝐗_ij sent}}}where Pr{·} pertains to the randomness of the codebook as well as that of the channel output given its input. Without loss of generality, we assume throughout, that the transmitted codeword is 𝐗_00 = _00.We define𝒜△=⋃_k=1^M_y-1𝒜_k△=⋃_k=1^M_y-1{f(Q_U_0X_0kY) ≥ f(Q_U_0X_00Y) }and ℬ△=⋃_l=1^M_z-1ℬ_l△=⋃_l=1^M_z-1⋃_k=0^M_y-1ℬ_lk△=⋃_l=1^M_z-1⋃_k=0^M_y-1{ f(Q_U_lX_lkY) ≥ f(Q_U_0X_00Y) } .Define the real number s as s△= f(Q_U_0X_00Y).The pairwise average error probability, conditioned on the center of the competitive cloud, is given byPr(ℬ_lk | 𝐔_l = ')△=Pr{ f(Q_U_lX_lkY) ≥ f(Q_U_0X_00Y) | 𝐔_l = ' }= ∑_{':f(Q_U'X'Y)≥ s} P ( ' | ' ) =∑_{ Q_X'|U'Y∈𝒮(Q_U'Y) :  f(Q_U'X'Y)≥ s }∑_∈𝒯(Q_X'|U'Y|',) P (| ' ) =∑_{ Q_X'|U'Y∈𝒮(Q_U'Y) :  f(Q_U'X'Y)≥ s } P ( ' | ' ) ·|𝒯(Q_X'|U'Y|',)| ≐∑_{ Q_X'|U'Y∈𝒮(Q_U'Y) :  f(Q_U'X'Y)≥ s }exp{-n ·I_ Q(X;Y|U) }≐max_{ Q_X'|U'Y∈𝒮(Q_U'Y) :  f(Q_U'X'Y)≥ s}exp{-n ·I_ Q(X;Y|U)}= exp{-n ·min_{ Q_X'|U'Y∈𝒮(Q_U'Y) :  f(Q_U'X'Y)≥ s}I_ Q(X;Y|U)}△=exp{-n · E_0(s, Q_U'Y )} ,where 𝒮(Q_UY) denotes the set of conditional distributions {Q_X|UY} that are consistent with P_UX. For a given 𝐔_l = ', the events {ℬ_lk}_k are all pairwise independent since we have assumed that the various codewords are pairwise conditional independent given the cloud center. Using the exponential tightness of the truncated union bound <cit.>, we getPr{⋃_k=0^M_y-1ℬ_lk| 𝐔_l = ' } ≐min{ 1, ∑_k=0^M_y-1Pr(ℬ_lk | 𝐔_l = ')}= min{ 1, M_y·Pr(ℬ_l,0 | 𝐔_l = ')}≐min{ 1, e^nR_y·exp[ -n · E_0(s, Q_U'Y )]}△=exp{-n · E_1(s, Q_U'Y)},whereE_1(s, Q_U'Y ) =min_Q_X'|U'Y∈𝒮(Q_U'Y){[ I_ Q(X;Y|U) - R_y]_+:f(Q_U'X'Y)≥ s}.Next, we obtain the probability of ℬ_l by calculating the expectation w.r.t. 
the randomness of 𝐔_l:Pr{ℬ_l} =∑_' ∈𝒯(P_U)P_U(')·Pr{⋃_k=0^M_y-1ℬ_lk| 𝐔_l = ' }≐∑_' ∈𝒯(P_U)P_U(')·exp{-n · E_1(s, Q_U'Y )}= ∑_{ Q_U'|Y∈𝒮(Q_Y) }∑_∈𝒯(Q_U'|Y|)P_U()·exp{-n · E_1(s, Q_ŨY )}= ∑_{ Q_U'|Y∈𝒮(Q_Y) }|𝒯(Q_U'|Y|)|/|𝒯(')|·exp{-n · E_1(s, Q_U'Y )}≐∑_{ Q_U'|Y∈𝒮(Q_Y) }exp{-n ·[ I_ Q(U;Y)+E_1(s, Q_U'Y )]}≐exp{-n ·min_{ Q_U'|Y∈𝒮(Q_Y) }[I_ Q(U;Y)+ E_1(s, Q_U'Y ) ]}△=exp{-n · E_2(s, Q_Y )},where 𝒮(Q_Y) is the set of all {Q_U|Y} such that ∑_yQ_Y(y)Q_U|Y(u|y)=P_U(u) for every u ∈𝒰.Next, we turn to calculate the probabilities of the events 𝒜_k. One can easily check that the entire derivation of eqs. (<ref>)-(<ref>) holds in this case as well, except that now we condition on 𝐔_0 = _0, such that the codewords are drawn from P(· | _0). We get Pr(𝒜_k | 𝐔_0 = _0)△=Pr{ f(Q_U_0X_0kY) ≥ f(Q_U_0X_00Y) | 𝐔_0 = _0}≐exp{-n · E_0(s, Q_U_0Y )} .Notice that, for a given 𝐔_0 = _0, 𝐗_00 = _00 and 𝐘 =, the events {𝒜_k} (errors caused by codewords from the correct cloud) and {ℬ_l} (errors caused by codewords from competitive clouds) are all pairwise independent. Thus, after taking the expectation w.r.t. the joint distribution of (𝐔_0,𝐗_00, 𝐘), we haveP̅_e(R_y,R_z,n) =𝔼[Pr{{⋃_k=1^M_y-1𝒜_k}⋃{⋃_l=1^M_z-1ℬ_l}| 𝐔_0 = _0, 𝐗_00 = _00, 𝐘 = }] ≐𝔼[ min{ 1, ∑_k=1^M_y-1Pr(𝒜_k | 𝐔_0 = _0, 𝐗_00 = _00, 𝐘 = )+ ∑_l=1^M_z-1Pr(ℬ_l | 𝐔_0 = _0, 𝐗_00 = _00, 𝐘 = )}]≐𝔼[ min{1, e^nR_yexp{-n · E_0(S, Q_U_0Y )} +e^nR_zexp{-n · E_2(S, Q_Y )}}]≐𝔼[ min{1, exp{ -n ·min{[ E_0(S, Q_U_0Y ) -R_y] ,                        [ E_2(S, Q_Y ) -R_z] }}}] = 𝔼[ exp( -n ·min{[ E_0(S, Q_U_0Y ) -R_y]_+ , [ E_2(S, Q_Y ) -R_z]_+})] ≐exp{-n ·min_ Q_Y|U_0X_00[D(Q_Y|U_0X_00||W_Y|X_00| P_U_0X_00)                   + E_3( f(Q_U_0X_00Y), Q_U_0Y, R_y, R_z)] },where we have definedE_3(S, Q_U_0Y, R_y, R_z )△=min{[ E_0(S, Q_U_0Y ) -R_y]_+ , [ E_2(S, Q_Y ) -R_z]_+}.§.§ A Converse-Like[A converse result is usually w.r.t. both encoding and decoding. In our case, here and in Subsection 5.1, the converse results are w.r.t. the decoding only.] Result for the Strong UserWe have the following:Lemma 1. For every empirical distribution Q_U_0X_00Y,E_3( f(Q_U_0X_00Y), Q_U_0Y , R_y, R_z) ≤min{[I_Q(U_0;Y)+[ I_Q(X_00;Y|U_0) - R_y]_+ - R_z]_+, [ I_Q(X_00;Y|U_0) - R_y]_+}.Proof. We start by recalling that the function E_3 is defined asE_3(f(Q_U_0X_00Y), Q_U_0Y, R_y, R_z )△=min{ [ E_0(f(Q_U_0X_00Y), Q_U_0Y ) -R_y]_+ , [ E_2( f(Q_U_0X_00Y) , Q_Y ) -R_z]_+},and we separately upper bound each one of the terms. We can upper bound them by choosing any specific distribution, instead of minimizing over them. Let us start with the left term:[ E_0(f(Q_U_0X_00Y), Q_U_0Y ) -R_y]_+= min_{ Q_X|U_0Y∈𝒮(Q_U_0Y):  f(Q_U_0XY) ≥ f(Q_U_0X_00Y)}[ I_Q(X;Y|U_0) - R_y]_+≤[ I_Q(X_00;Y|U_0) - R_y]_+.For the right term inside the minimum of (<ref>), we have the following [ E_2(f(Q_U_0X_00Y), Q_Y ) -R_z]_+= min_{ Q_U'|Y∈𝒮(Q_Y) }[I_ Q(U';Y)+ E_1( f(Q_U_0X_00Y), Q_U'Y )-R_z]_+≤[I_ Q(U_0;Y)+ E_1(f(Q_U_0X_00Y), Q_U_0Y )-R_z]_+= [I_ Q(U_0;Y)+min_{ Q_X|U_0Y∈𝒮(Q_U_0Y) :   f(Q_U_0XY) ≥ f(Q_U_0X_00Y)}[ I_Q(X;Y|U_0) - R_y]_+ -R_z]_+≤[I_ Q(U_0;Y)+[ I_Q(X_00;Y|U_0) - R_y]_+-R_z]_+ .Combining both upper bounds, we see that (<ref>) holds, thus completing the proof. §.§ An Optimal Universal DecoderLet us now selectf(Q_UXY) = I_Q(U;Y)+ [ I_Q(X;Y|U) - R_y]_+.We show that (<ref>) achieves the maximum value of E_3( f(Q_U_0X_00Y), Q_U_0Y , R_y, R_z), as given by Lemma 1, and therefore, this decoder has the same error exponent as the one of the optimal (ML) decoder. 
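In practice, this decoding metric is computed from the joint empirical distribution (type) of (𝐮_i, 𝐱_ij, 𝐲). The following is a minimal sketch of such a computation, assuming binary alphabets; the helper names and the small demo are ours and not part of the paper.

```python
import numpy as np

def entropy(p):
    """Entropy (in nats) of a pmf given as an array of any shape."""
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

def metric_f(u, x, y, R_y, A=2):
    """f(Q) = I_Q(U;Y) + [I_Q(X;Y|U) - R_y]_+ from the joint type of (u,x,y)."""
    n = len(y)
    Q = np.zeros((A, A, A))
    for t in range(n):
        Q[u[t], x[t], y[t]] += 1.0 / n
    Q_uy, Q_ux = Q.sum(axis=1), Q.sum(axis=2)
    Q_u, Q_y = Q.sum(axis=(1, 2)), Q.sum(axis=(0, 1))
    I_UY = entropy(Q_u) + entropy(Q_y) - entropy(Q_uy)
    # I(X;Y|U) = H(X,U) + H(Y,U) - H(U) - H(X,Y,U)
    I_XY_given_U = entropy(Q_ux) + entropy(Q_uy) - entropy(Q_u) - entropy(Q)
    return I_UY + max(I_XY_given_U - R_y, 0.0)

# Demo on sequences drawn from the binary example (beta = 0.25, p_y = 0.05).
rng = np.random.default_rng(1)
u = rng.integers(0, 2, size=1000)
x = u ^ (rng.random(1000) < 0.25)   # P_{X|U}: BSC(0.25)
y = x ^ (rng.random(1000) < 0.05)   # W_1: BSC(0.05)
print(metric_f(u, x, y, R_y=0.1))
```

The strong user evaluates this quantity (with 𝐲) jointly over all pairs (i, j), while the weak user's decoder of Theorem 2 first maximizes the same quantity (with 𝐳) within each cloud.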
As before, we start with the left term inside the minimum of (<ref>), and get[ E_0(f(Q_U_0X_00Y), Q_U_0Y ) -R_y]_+= min_{ Q_X|U_0Y∈𝒮(Q_U_0Y)}{[ I_Q(X;Y|U_0) - R_y]_+:  f(Q_U_0XY) ≥ f(Q_U_0X_00Y)}= min_{ Q_X|U_0Y∈𝒮(Q_U_0Y)}{[ I_Q(X;Y|U_0) - R_y]_+: I_Q(U_0;Y)+ [ I_Q(X;Y|U_0) - R_y]_+≥ I_Q(U_0;Y)+ [ I_Q(X_00;Y|U_0) - R_y]_+}= min_{ Q_X|U_0Y∈𝒮(Q_U_0Y)}{[ I_Q(X;Y|U_0) - R_y]_+:[ I_Q(X;Y|U_0) - R_y]_+≥[ I_Q(X_00;Y|U_0) - R_y]_+}= [ I_Q(X_00;Y|U_0) - R_y]_+.For the right term inside the minimum of (<ref>), [ E_2(f(Q_U_0X_00Y), Q_Y ) -R_z]_+= min_{ Q_UX|Y∈𝒮(Q_Y) :   f(Q_UXY) ≥ f(Q_U_0X_00Y)}[I_ Q(U;Y)+ [ I_Q(X;Y|U) - R_y]_+-R_z]_+= min_{ Q_UX|Y∈𝒮(Q_Y) }{[I_ Q(U;Y)+ [ I_Q(X;Y|U) - R_y]_+-R_z]_+ : I_Q(U;Y)+ [ I_Q(X;Y|U) - R_y]_+≥ I_Q(U_0;Y)+ [ I_Q(X_00;Y|U_0) - R_y]_+}= [I_ Q(U_0;Y)+ [ I_Q(X_00;Y|U_0) - R_y]_+-R_z]_+.Finally, compare the minimum between (<ref>) and (<ref>) to the right hand side of (<ref>).§ UNIVERSAL BIN INDEX DECODING FOR THE WEAK USER§.§ Analysis for a General Decoding Metric and a Converse-Like ResultLet us first derive the exact random coding error exponent of the following bin index decoder,î() = *arg max_0 ≤ i ≤ M_z-1 F(,_i,𝒞_i),whereF(,_i,𝒞_i)△=1/M_y∑_j=0^M_y-1 e^nf(Q_U_iX_ijZ) ,and assume that f is upper bounded by a real number Δ. Note that (<ref>) includes the optimal ML decoder (<ref>) as a special case. To present the formula of E^*_z(R_y, R_z), the error exponent of (<ref>), we first need a few definitions. For a given generic joint distribution Q_UXZ, let I_Q(X;Z|U) denote the conditional mutual information between X and Z given U. For a given marginal Q_UZ, let 𝒮(Q_UZ) denote the set of conditional distributions {Q_X|UZ} such that ∑_z Q_UZ(u,z)Q_X|UZ(x|u,z) = P_UX(u,x) for every (u,x) ∈𝒰×𝒳, where P_UX = P_U× P_X|U. We first defineE_1(s, Q_UZ)=min_Q_X|UZ∈𝒮(Q_UZ){ [ I_Q (X;Z|U) - R_y]_+:f(Q_UXZ) +[R_y- I_Q (X;Z|U)]_+≥ s},where s is an arbitrary real. Next, for a given marginal Q_Z, defineE_2(s, Q_Z)=min_Q_U|Z∈𝒮(Q_Z)[I_Q (U;Z) +E_1(s, Q_UZ)],where the minimization is across all {Q_U|Z} such that ∑_zQ_Z(z)Q_U|Z(u|z)=P_U(u) for every u ∈𝒰. Finally, for a given Q_U_0Z, lets_0(Q_U_0Z) = R_y + max_{ Q_X|U_0Z∈𝒮(Q_U_0Z): I_Q (X;Z|U_0) ≤ R_y}[ f(Q_U_0XZ) - I_Q (X;Z|U_0) ],ands_1(Q_U_0X_00Z) = max{ s_0(Q_U_0Z), f(Q_U_0X_00Z)}.Now, the error exponent of the decoder (<ref>) is given in the following lemma. Lemma 2. Under the assumptions of Section 2, E^*_z(R_y, R_z) = min_Q_Z|U_0X_00{ D(Q_Z|U_0X_00||W_Z|X_00| P_U_0X_00) + [E_2( s_1(Q_U_0X_00Z), Q_Z) - R_z]_+},where (U_0,X_00) is a replica of (U,X), i.e., P_U_0X_00 = P_UX. Proof. The average probability of error, associated with (<ref>), is given byP_e^* =𝔼[ Pr{⋃_i=1^M_z-1{ F(𝐙,𝐔_i,𝒞_i) ≥F(𝐙,𝐔_0,𝒞_0)}}] ≐𝔼[ min{1, M_z·Pr{ F(𝐙,𝐔_1,𝒞_1) ≥F(𝐙,𝐔_0,𝒞_0)}}],where the expectation is w.r.t. the randomness of 𝐔_0, 𝒞_0 and 𝐙, where 𝐙 is the channel output in response to the input 𝐗_00 (the transmitted codeword without loss of generality). The passage from (<ref>) to (<ref>) is due to the exponential tightness of the truncated union bound.Here, for a given , Pr{ F(,𝐔_1,𝒞_1) ≥F(,_0,𝒞_0)} is calculated w.r.t. the randomness of 𝐔_1 and 𝒞_1 = {𝐗_1,0,...,𝐗_1,(M_y-1)}, but for a given _0 and 𝒞_0. 
Let N_1(Q_U_1X'Z) denote the number of codewords _1,j∈𝒞_1, such that the joint empirical distribution of_1,j with (_1, ) is Q_U_1X'Z, that isN_1(Q_U_1X'Z) = ∑_j=0^M_y-1ℐ{ (_1, _1,j, ) ∈𝒯(Q_U_1X'Z) }.Definings△=1/nln [∑_j=0^M_y-1e^nf(Q_U_0X_0jZ)],we have,Pr{ F(,_1,𝒞_1) ≥F(,_0,𝒞_0)} = Pr{ M_y· F(,_1,𝒞_1) ≥e^ns}= Pr{∑_j=0^M_y-1e^nf(Q_U_1X_1jZ)≥e^ns}= Pr{∑_Q_X'|U_1Z∈𝒮(Q_U_1Z) N_1(Q_U_1X'Z)e^nf(Q_U_1X'Z)≥e^ns}≐Pr{max_Q_X'|U_1Z∈𝒮(Q_U_1Z)N_1(Q_U_1X'Z)e^nf(Q_U_1X'Z)≥e^ns}= Pr{⋃_Q_X'|U_1Z∈𝒮(Q_U_1Z){N_1(Q_U_1X'Z)e^nf(Q_U_1X'Z)≥e^ns}}≐∑_Q_X'|U_1Z∈𝒮(Q_U_1Z)Pr{ N_1(Q_U_1X'Z)e^nf(Q_U_1X'Z)≥e^ns}≐max_Q_X'|U_1Z∈𝒮(Q_U_1Z)Pr{ N_1(Q_U_1X'Z)e^nf(Q_U_1X'Z)≥e^ns}.Now, for a given Q_U_1X'Z, designating the joint empirical distribution of a randomly chosen ' (given _1) together with (_1, ), the binomial random variable N_1(Q_U_1X'Z) has e^nR_y trials and probability of success which is of the exponential order of e^-n I_Q(X;Z|U). Thus, a standard large deviations analysis (see, e.g., <cit.>) yieldsPr{ N_1(Q_U_1X'Z)≥e^n [ s-f(Q_U_1X'Z) ] }≐ e^-nE_0(Q_U_1X'Z) ,whereE_0(Q_U_1X'Z) = {[[ I_Q(X;Z|U) - R_y]_+ f(Q_U_1X'Z) ≥ s - [R_y- I_Q(X;Z|U) ]_+;∞ f(Q_U_1X'Z)<s- [R_y- I_Q(X;Z|U) ]_+. ]. Therefore, max_Q_X'|U_1Z∈𝒮(Q_U_1Z)Pr{ N_1(Q_U_1X'Z)≥e^n [s-f(Q_U_1X'Z) ] } decays according to E_1(s, Q_U_1Z ) = min_Q_X'|U_1Z∈𝒮(Q_U_1Z) E_0(Q_U_1X'Z),which is given by (<ref>). The conditional pairwise error probability, given 𝐔_1=_1, is of the exponential order of e^-n E_1(s, Q_U_1Z ). Averaging w.r.t. the randomness of 𝐔_1, we get the exponential order of e^-n E_2(s, Q_Z ), where E_2(s, Q_Z ) is defined as in (<ref>). To see why this is true, consider the following:∑__1∈𝒯 (P_U)P_U(_1) ·Pr{ F(,_1,𝒞_1) ≥F(,_0,𝒞_0)}≐∑_ Q_U_1|Z∈𝒮(Q_Z) ∑__1∈𝒯(Q_U_1|Z|)P_U(_1) · e^-n · E_1(s, Q_U_1Z )≐∑_Q_U_1|Z∈𝒮(Q_Z)e^-n · E_1(s, Q_U_1Z )· e^-n · I_Q(U;Z)≐max_Q_U_1|Z∈𝒮(Q_Z)e^-n ·[I_Q(U;Z) +E_1(s, Q_U_1Z ) ]=e^-n · E_2(s, Q_Z ).Finally, we have thatP_e^* ≐𝔼[ min{ 1,M_z· e^-n · E_2(S, Q_Z )}] =𝔼{ e^-n [ E_2(S, Q_z ) - R_z]_+},where the expectation is w.r.t. the randomness ofS=1/nln [∑_j=0^M_y-1e^nf(Q_U_0X_0jZ)],the randomness of Q_Z, the empirical distribution of 𝐙, and 𝐔_0, the real cloud center.This expectation will be taken in two steps, the first is over the randomness of {𝐗_0,1,...,𝐗_0,(M_y-1)}, while 𝐗_00=_00, 𝐔_0=_0 and 𝐙= are held fixed, whereas in the second step, the expectation is over the randomness of 𝐗_00, 𝐔_0 and 𝐙.Let _00, _0 andbe given and let ϵ > 0 be arbitrarily small. Then,P_e^*( _00, _0, ) △=𝔼{e^-n [ E_2(S, Q_Z ) - R_z]_+|𝐗_00=_00, 𝐔_0=_0, 𝐙=}≤∑_iPr{iϵ≤ S < (i+1)ϵ|𝐗_00=_00, 𝐔_0=_0, 𝐙=}                     ×exp{ -n [ E_2(iϵ, Q_Z ) - R_z]_+} ,where i ranges from 1/ϵ f(Q_U_0X_00Z) to (R_y+ Δ )/ϵ. Now,e^nS =e^n f(Q_U_0X_00Z) + ∑_j=1^M_y-1e^nf(Q_U_0X_0jZ) = e^nf(Q_U_0X_00Z)+ ∑_Q_X'|U_0Z∈𝒮(Q_U_0Z) N_0(Q_U_0X'Z) e^nf(Q_U_0X'Z),where N_0(Q_U_0X'Z) is the number of codewords in 𝒞_0∖{_00}, whose joint empirical distribution with (_0,) is Q_U_0X'Z. On the one hand, we havePr{∑_Q_X'|U_0Z∈𝒮(Q_U_0Z) N_0(Q_U_0X'Z) e^nf(Q_U_0X'Z)≥ e^nt}≐ e^-n · E_1(t, Q_U_0Z ) ,and on the other hand, Pr{∑_Q_X'|U_0Z∈𝒮(Q_U_0Z) N_0(Q_U_0X'Z) e^nf(Q_U_0X'Z)≤ e^nt}≐Pr{⋂_ Q_X'|U_0Z∈𝒮(Q_U_0Z) {N_0(Q_U_0X'Z)≤e^n[t- f(Q_U_0X'Z) ]}}.This probability behaves exponentially like an indicator function of the condition that for every Q_X'|U_0Z∈𝒮(Q_U_0Z), either I_Q(X;Z|U_0) ≥ R_y or R_y - I_Q(X;Z|U_0) ≤ t-f(Q_U_0X'Z) <cit.>. 
I.e.,Pr{∑_Q_X'|U_0Z∈𝒮(Q_U_0Z) N_0(Q_U_0X'Z) e^nf(Q_U_0X'Z)≤ e^nt}≐ℐ{R_y≤min_ Q_X'|U_0Z∈𝒮(Q_U_0Z) {I_Q(X;Z|U_0)+ [t - f(Q_U_0X'Z) ]_+}}.Let us now find what is the minimum value of t for which the value of this indicator function is unity. The condition is equivalent tomin_ Q_X'|U_0Z∈𝒮(Q_U_0Z) max_0 ≤ a ≤ 1{ I_Q(X;Z|U_0)+ a[t- f(Q_U_0X'Z) ] }≥ R_y ,or∀Q_X'|U_0Z∈𝒮(Q_U_0Z)∃ a ∈ [0,1]: I_Q(X;Z|U_0)+ a[ t- f(Q_U_0X'Z) ]≥ R_y ,or∀ Q_X'|U_0Z∈𝒮(Q_U_0Z)∃ a ∈ [0,1]: t ≥ f(Q_U_0X'Z) + 1/a( R_y - I_Q(X;Z|U_0) ),or, equivalently, t≥max_ Q_X'|U_0Z∈𝒮(Q_U_0Z) min_0 ≤ a ≤ 1[f(Q_U_0X'Z)+ 1/a( R_y -I_Q(X;Z|U_0) ) ]= max_ Q_X'|U_0Z∈𝒮(Q_U_0Z) [ f(Q_U_0X'Z)+ {[R_y - I_Q(X;Z|U_0),R_y≥ I_Q(X;Z|U_0);-∞ ,R_y<I_Q(X;Z|U_0) ].] = R_y + max_{ Q_X'|U_0Z∈𝒮(Q_U_0Z): I_Q(X;Z|U_0) ≤ R_y}[ f(Q_U_0X'Z) - I_Q(X;Z|U_0) ]△=s_0(Q_U_0Z ).Thus, in summary, we have Pr{ e^nt≤∑_Q_X'|U_0Z∈𝒮(Q_U_0Z) N_0(Q_U_0X'Z) e^nf(Q_U_0X'Z)≤ e^n(t+ϵ)}≐{[ 0 t <s_0(Q_U_0Z ) - ϵ;e^-n · E_1(t, Q_U_0Z ) t ≥ s_0( Q_U_0Z ) ].Therefore, we get the expected error probabilityP_e^*(_00, _0,) ≤∑_iPr{e^niϵ≤∑_Q_X'|U_0Z∈𝒮(Q_U_0Z) N_0(Q_U_0X'Z) e^nf(Q_U_0X'Z)≤e^n(i+1)ϵ}                   ×exp{-n [ E_2(max{ iϵ, f(Q_U_0X_00Z)}, Q_Z) - R_z]_+}≐∑_i ≥ s_0( Q_U_0Z) / ϵexp{-n E_1(iϵ, Q_U_0Z )}                   ×exp{-n [ E_2(max{ iϵ, f(Q_U_0X_00Z)}, Q_Z) - R_z]_+} .Since the dominant contribution to the sum over i is due to the term i = s_0(Q_U_0Z) / ϵ (by the non-decreasing monotonicity of the functions E_1(·, Q_U_0Z ) and E_2(·, Q_Z )), we obtain P_e^*( _00, _0,) ≐exp{-n [ E_2(max{ s_0(Q_U_0Z ), f(Q_U_0X_00Z)}, Q_Z) - R_z]_+}△=exp{-n [ E_2( s_1(Q_U_0X_00Z) , Q_Z) - R_z]_+}.Now, after taking the expectation w.r.t. the joint distribution of (𝐔_0,𝐗_00, 𝐙), we get the exact random coding error exponent (<ref>), and the proof of Lemma 2 is complete.Next, we introduce the following converse-like result for the weak user. Lemma 3. For every empirical distribution Q_U_0X_00Z,E_2(s_1(Q_U_0X_00Z), Q_Z)≤ I_Q(U_0;Z)+[ I_Q(X_00;Z|U_0) - R_y]_+.Proof. By (<ref>),E_2(s_1(Q_U_0X_00Z), Q_Z) = min_Q_U|Z∈𝒮(Q_Z) {I_Q(U;Z) +E_1(s_1(Q_U_0X_00Z), Q_UZ )}≤ I_Q(U_0;Z) +E_1(s_1(Q_U_0X_00Z), Q_U_0Z ) ,whereE_1(s_1(Q_U_0X_00Z), Q_U_0Z) = min_ Q_X'|U_0Z∈𝒮(Q_U_0Z):   f(Q_U_0X'Z) +[R_y- I_Q(X';Z|U_0) ]_+≥ s_1(Q_U_0X_00Z)[ I_Q(X';Z|U_0) - R_y]_+ . Now, since s_1(Q_U_0X_00Z) is given by the maximum s_1(Q_U_0X_00Z) =max{ s_0(Q_U_0Z), f(Q_U_0X_00Z)} ,we treat each case separately. First, if s_1(Q_U_0X_00Z) =f(Q_U_0X_00Z), E_1( f(Q_U_0X_00Z), Q_U_0Z) = min_ Q_X'|U_0Z∈𝒮(Q_U_0Z):   f(Q_U_0X'Z) +[R_y- I_Q(X';Z|U_0) ]_+≥ f(Q_U_0X_00Z)[ I_Q(X';Z|U_0) - R_y]_+≤[ I_Q(X_00;Z|U_0) - R_y]_+ ,since the constraint is satisfied for Q_X_00|U_0Z∈𝒮(Q_U_0Z).On the other hand, if s_1(Q_U_0X_00Z) =s_0(Q_U_0Z), which is given by (<ref>), we haves_0(Q_U_0Z)= R_y +f(Q_U_0X̃Z) - I_Q(X̃;Z|U_0) ,where we have denoted the maximizer of (<ref>) by Q_X̃|U_0Z, for which I_Q(X̃;Z|U_0) ≤ R_y must be satisfied.Next, we upper bound the minimum defining E_1(s_0(Q_U_0Z), Q_U_0Z) by using the same empirical distribution which is the maximizer of the right hand side of the constraint, for which the constraint becomes an exact equality:E_1(s_0(Q_U_0Z), Q_U_0Z) = min_ Q_X'|U_0Z∈𝒮(Q_U_0Z):   f(Q_U_0X'Z) +[R_y- I_Q(X';Z|U_0) ]_+≥ s_0(Q_U_0Z)[ I_Q(X';Z|U_0) - R_y]_+= min_ Q_X'|U_0Z∈𝒮(Q_U_0Z):   f(Q_U_0X'Z) +[R_y- I_Q(X';Z|U_0) ]_+≥ f(Q_U_0X̃Z) + R_y- I_Q(X̃;Z|U_0)[ I_Q(X';Z|U_0) - R_y]_+≤[ I_Q(X̃;Z|U) - R_y]_+ =0,where the last equality is due to the constraint I_Q(X̃;Z|U) ≤ R_y. 
Combining the last two upper bounds, we get E_1(s_1(Q_U_0X_00Z), Q_U_0Z) ≤[ I_Q(X_00;Z|U_0) - R_y]_+,and thereforeE_2(s_1(Q_U_0X_00Z), Q_Z) ≤I_Q(U_0;Z)+[ I_Q(X_00;Z|U_0) - R_y]_+,completing the proof of Lemma 3. §.§ Analysis for a General Suboptimal Decoding Metric and a Direct PartLet us now derive the exact random coding error exponent of the following suboptimal bin index decoder:ĩ() = *arg max_0 ≤ i ≤ M_z-1{max_0 ≤ j ≤ M_y-1 f(Q_U_iX_ijZ) },and assume, as before, that the function f is upper bounded by Δ.To present the formula of Ẽ_z(R_y, R_z), the error exponent of (<ref>), we first need a few new definitions. We first defineẼ_1(t, Q_UZ)=min_Q_X|UZ∈𝒮(Q_UZ){ [ I_Q (X;Z|U) - R_y]_+: f(Q_UXZ) ≥ t},where t is an arbitrary real number. Next, for a given Q_Z, defineẼ_2(t, Q_Z)=min_Q_U|Z∈𝒮(Q_Z) [I_Q (U;Z) +Ẽ_1(t, Q_UZ)].Finally, lett_0(Q_U_0Z) = max_{ Q_X|U_0Z∈𝒮(Q_U_0Z): I_Q (X;Z|U_0) ≤ R_y} f(Q_U_0XZ) ,andt_1(Q_U_0X_00Z) = max{ t_0(Q_U_0Z), f(Q_U_0X_00Z)}.The error exponent of (<ref>) is given in the following lemma. Lemma 4. Under the assumptions of Section 2, Ẽ_z(R_y, R_z) = min_Q_Z|U_0X_00{ D(Q_Z|U_0X_00||W_Z|X_00| P_U_0X_00) + [ Ẽ_2(t_1(Q_U_0X_00Z), Q_Z) - R_z]_+},where (U_0,X_00) is a replica of (U,X), i.e., P_U_0X_00 = P_UX. Proof. The average probability of error, associated with (<ref>), is given byP_e^*≐𝔼[ min{ 1, M_z·𝔼( min[ 1, M_y·Pr{ f(Q_U_1X_10Z)≥max_∈𝒞_0f(Q_U_0XZ) }])}] ,where the inner expectation is w.r.t. the randomness of 𝐔_1, the outer expectation is w.r.t. the randomness of 𝐔_0, 𝒞_0 and 𝐙, the latter being the channel output in response to 𝐗_00. To see why this is true, observe that the average error probability, P̅_e(R_y,R_z,n), associated with (<ref>), is defined as P̅_e(R_y,R_z,n)△=1/M_y M_z∑_i=0^M_z-1∑_j=0^M_y-1Pr{⋃_l ≠ i⋃_k{ f(Q_U_lX_lkZ)≥max_∈𝒞_if(Q_U_iXZ)| 𝐗_ij sent}}where Pr{·} pertains to the randomness of the codebook as well as that of the channel output given its input.We define the following unions of events𝒢△=⋃_l=1^M_z-1𝒢_l△=⋃_l=1^M_z-1⋃_k=0^M_y-1𝒢_lk△=⋃_l=1^M_z-1⋃_k=0^M_y-1{ f(Q_U_lX_lkZ)≥max_∈𝒞_0 f(Q_U_0XZ)} .Define t△=max_∈𝒞_0 f(Q_U_0XZ).The probability of 𝒢_lk, conditioned on 𝐔_l, is given byPr(𝒢_lk | 𝐔_l = ')△=Pr{ f(Q_U_lX_lkZ)≥max_∈𝒞_0 f(Q_U_0XZ)| 𝐔_l = ' }= ∑_{': f(Q_U'X'Z) ≥ t } P ( ' | ' ) ≐max_{ Q_X'|U'Z∈𝒮(Q_U'Z) :  f(Q_U'X'Z)≥ t}exp{-n · I_ Q(X';Z|U') }= exp{-n ·min_{ Q_X'|U'Z∈𝒮(Q_U'Z) :  f(Q_U'X'Z)≥ t} I_ Q(X';Z|U') }△=exp{-n ·Ẽ_0(t, Q_U'Z )},where the passage from (<ref>) to (<ref>) is due to (<ref>)-(<ref>). For a given 𝐔_l = ', the events {𝒢_lk}_k are all pairwise independent since we have assumed that the various codewords are pairwise conditional independent given the cloud center. Using the exponential tightness of the truncated union bound, we getPr{⋃_k=0^M_y-1𝒢_lk| 𝐔_l = ' } ≐min{ 1, ∑_k=0^M_y-1Pr(𝒢_lk | 𝐔_l = ')}= min{ 1, M_y·Pr(𝒢_l,0 | 𝐔_l = ')}≐min{ 1, e^nR_y·exp{-n ·Ẽ_0(t, Q_U'Z )}}△=exp{-n ·Ẽ_1(t, Q_U'Z)},whereẼ_1(t, Q_U'Z ) =min_Q_X'|U'Z∈𝒮(Q_U'Z){[ I_ Q(X';Z|U') - R_y]_+:f(Q_U'X'Z)≥ t}.Next, we obtain the probability of 𝒢_l by calculating the expectation w.r.t. the randomness of 𝐔_l:Pr{𝒢_l} =∑_' ∈𝒯(P_U)P_U(')·Pr{⋃_k=0^M_y-1𝒢_lk| 𝐔_l = ' }≐∑_' ∈𝒯(P_U)P_U(')·exp{-n ·Ẽ_1(t, Q_U'Z )} ≐exp{-n ·min_{ Q_U'|Z∈𝒮(Q_Z) }[ I_ Q(U;Z)+ Ẽ_1(t, Q_U'Z ) ]}△=exp{-n ·Ẽ_2(t, Q_Z )} , where the passage from (<ref>) to (<ref>) is due to (<ref>)-(<ref>). Conditioning on 𝐔_0, 𝒞_0 and 𝐙, the events {𝒢_l} are all pairwise independent since the various cloud centers are all independent. 
We getPr{⋃_l=1^M_z-1𝒢_l| 𝐔_0, 𝒞_0, 𝐙} ≐min{ 1, ∑_l=1^M_z-1Pr(𝒢_l | 𝐔_0, 𝒞_0, 𝐙 ) }= min{ 1, (M_z-1) ·Pr(𝒢_1 | 𝐔_0, 𝒞_0, 𝐙 ) }≐min{ 1, e^nR_z·exp{-n ·Ẽ_2(t, Q_Z )}}= exp{-n ·[Ẽ_2(t, Q_Z )- R_z]_+} .Finally, we have thatP̃_e = 𝔼[ exp{-n ·[Ẽ_2(T, Q_Z )- R_z]_+}],where the expectation is taken w.r.t. the randomness of T= max_∈𝒞_0 f(Q_U_0XZ),and the randomness of Q_Z and U_0, the correct cloud center.This expectation will be taken in two steps, first, over the randomness of {𝐗_0,1,...,𝐗_0,(M_y-1)}, while 𝐗_00=_00, 𝐔_0=_0 and 𝐙= are held fixed, and then - over the randomness of 𝐗_00, 𝐔_0 and 𝐙. Let _00, _0 andbe given and let ϵ > 0 be arbitrarily small. Then,P̃_e( _00, _0, ) △=𝔼{e^-n [ Ẽ_2(T, Q_Z ) - R_z]_+|𝐗_00=_00, 𝐔_0=_0, 𝐙=}≤∑_iPr{iϵ≤T< (i+1)ϵ|𝐗_00=_00, 𝐔_0=_0, 𝐙=}                     ×exp{ -n [ Ẽ_2(iϵ, Q_Z ) - R_z]_+} ,where i ranges from 1/ϵ f(Q_U_0X_00Z) to Δ /ϵ. Now,T= max{f(Q_U_0X_00Z),max_1 ≤ j ≤ M_y-1f(Q_U_0X_0jZ) }.On the one hand, we have:Pr{max_1 ≤ j ≤ M_y-1f(Q_U_0X_0jZ)≥ t } = Pr{⋃_1 ≤ j ≤ M_y-1{f(Q_U_0X_0jZ)≥ t }}≐min{1,(M_y - 1)·Pr{f(Q_U_0X_01Z)≥t }}≐min{1,e^nR_y·exp{-n ·Ẽ_0(t, Q_U_0Z) }}= exp{ -n [ Ẽ_0(t, Q_U_0Z) - R_y ]_+}= exp{ -n ·Ẽ_1(t, Q_U_0Z) } .On the other hand,Pr{max_1 ≤ j ≤ M_y-1f(Q_U_0X_0jZ)< t } = Pr{⋂_1 ≤ j ≤ M_y-1 {f(Q_U_0X_0jZ)< t }}=[Pr{f(Q_U_0X_01Z)< t }] ^M_y - 1≐ [ 1- e^-n ·Ẽ_0(t, Q_U_0Z)]^e^nR_y=exp{e^nR_y·ln [ 1 -e^-n ·Ẽ_0(t, Q_U_0Z)]}≐exp{- e^n · [R_y -Ẽ_0(t, Q_U_0Z) ] }≐{[ 0,R_y >Ẽ_0(t, Q_U_0Z); 1, R_y <Ẽ_0(t, Q_U_0Z), ].which can also be written as: Pr{max_1 ≤ j ≤ M_y-1f(Q_U_0X_0jZ)< t }≐ℐ{R_y <Ẽ_0(t, Q_U_0Z)}.Let us now find the minimum t for which the value of this indicator function is unity. The condition is equivalent tomin_ Q_X'|U_0Z∈𝒮(Q_U_0Z) max_0 ≤ a < ∞{ I_Q(X';Z|U_0)+ a[t- f(Q_U_0X'Z) ] }≥ R_y ,or∀Q_X'|U_0Z∈𝒮(Q_U_0Z)  ∃ a ∈ [0, ∞ ):   I_Q(X';Z|U_0)+ a[ t- f(Q_U_0X'Z) ]≥ R_y ,or∀ Q_X'|U_0Z∈𝒮(Q_U_0Z)  ∃ a ∈ [0, ∞ ):  t ≥ f(Q_U_0X'Z) + 1/a( R_y - I_Q(X';Z|U_0) ),or, equivalently, t≥max_ Q_X'|U_0Z∈𝒮(Q_U_0Z) min_0 ≤ a < ∞[f(Q_U_0X'Z)+ 1/a( R_y -I_Q(X';Z|U_0) ) ]= max_ Q_X'|U_0Z∈𝒮(Q_U_0Z) [ f(Q_U_0X'Z)+ {[ 0 ,R_y≥ I_Q(X';Z|U_0);-∞ , R_y<I_Q(X';Z|U_0) ].] =max_{ Q_X'|U_0Z∈𝒮(Q_U_0Z): I_Q(X';Z|U_0) ≤ R_y}f(Q_U_0X'Z) △=t_0(Q_U_0Z ).Thus, in summary, we have Pr{ t≤max_1 ≤ j ≤ M_y-1f(Q_U_0X_0jZ)≤ t+ϵ}≐{[0 ,t <t_0(Q_U_0Z ) - ϵ;e^-n ·Ẽ_1(t, Q_U_0Z ) ,t ≥ t_0( Q_U_0Z ) ].Then, the expected error probability w.r.t. {𝐗_0,1,...,𝐗_0,(M_y-1)} yieldsP̃_e( _00, _0, )≐∑_iPr{ iϵ≤max_1 ≤ j ≤ M_y-1f(Q_U_0X_0jZ)< (i+1)ϵ}×exp{ -n [ Ẽ_2(max{f(Q_U_0X_00Z), iϵ}, Q_Z) - R_z]_+}≐∑_i ≥ t_0(Q_U_0Z) / ϵexp{-n Ẽ_1(iϵ, Q_U_0Z)}×exp{ -n [ Ẽ_2(max{f(Q_U_0X_00Z), iϵ}, Q_Z) - R_z]_+}.Since the dominant contribution to the sum over i is due to the term i = t_0(Q_U_0Z) / ϵ (by the non-decreasing monotonicity of the functions Ẽ_1(·, Q_U_0Z ) and Ẽ_2(·, Q_Z )), we obtain P̃_e( _00, _0,) ≐exp{-n [ Ẽ_2(max{ t_0(Q_U_0Z ), f(Q_U_0X_00Z)}, Q_Z) - R_z]_+}△=exp{-n [ Ẽ_2( t_1(Q_U_0X_00Z) , Q_Z) - R_z]_+}.Now, after taking the expectation w.r.t. the joint distribution of (𝐔_0,𝐗_00, 𝐙), we getẼ_z(R_y, R_z) = min_Q_Z|U_0X_00{ D(Q_Z|U_0X_00||W_Z|X_00| P_U_0X_00) + [ Ẽ_2(t_1(Q_U_0X_00Z), Q_Z) - R_z]_+},and the proof of Lemma 4 is complete.Let us now selectf(Q_UXZ) = I_Q(U;Z)+ [ I_Q(X;Z|U) - R_y]_+.We show that (<ref>) achieves the maximum of E_2(s_1(Q_U_0X_00Z), Q_Z), as given by Lemma 3, and therefore, this decoder has the same error exponent as that of the optimal decoder. 
First, the threshold t_0(Q_U_0Z) can be easily simplified ast_0(Q_U_0Z)=max_{ Q_X|U_0Z∈𝒮(Q_U_0Z): I_Q (X;Z|U_0) ≤ R_y}f(Q_U_0XZ)=max_{ Q_X|U_0Z∈𝒮(Q_U_0Z): I_Q (X;Z|U_0) ≤ R_y}{ I_Q(U_0;Z)+ [ I_Q(X;Z|U_0) - R_y]_+}=I_Q(U_0;Z)+max_{ Q_X|U_0Z∈𝒮(Q_U_0Z): I_Q (X;Z|U_0) ≤ R_y}[ I_Q(X;Z|U_0) - R_y]_+=I_Q(U_0;Z) .Now, t_1(Q_U_0X_00Z) is given by t_1(Q_U_0X_00Z)= max{ I_Q(U_0;Z), I_Q(U_0;Z)+ [ I_Q(X_00;Z|U_0) - R_y]_+}= I_Q(U_0;Z)+ [ I_Q(X_00;Z|U_0) - R_y]_+.In general, the constraint of the inner minimization problem defining Ẽ_2(t_1(Q_U_0X_00Z), Q_Z) is given byf(Q_UXZ)≥t_1(Q_U_0X_00Z),which can now be written asI_Q(U;Z)+ [ I_Q(X;Z|U) - R_y]_+≥ I_Q(U_0;Z)+ [ I_Q(X_00;Z|U_0) - R_y]_+, or simply by f(Q_UXZ)≥ f(Q_U_0X_00Z). Eventually, we have the followingẼ_2(t_1(Q_U_0X_00Z), Q_Z) = min_Q_UX|Z∈𝒮(Q_Z):  f(Q_UXZ)≥ f(Q_U_0X_00Z)f(Q_UXZ) = I_Q(U_0;Z)+ [ I_Q(X_00;Z|U_0) - R_y]_+,which is the same expression as on the right hand side of (<ref>). § GALLAGER-STYLE LOWER BOUNDSIn this section, we prove Theorem 5.§.§ Derivation of eq. (<ref>)We start by changing the clipping operator to a maximization problem and using convexity properties to change the order of the maximization and the minimization:Ê_y,1( R_y ) △=min_ V {D( VW |P )+ [ I(X;Y|U) - R_y]_+}=min_ V {D( VW |P )+max_ρ∈ [0,1] {ρ·[ I(X;Y|U) - R_y] }}=max_ρ∈ [0,1] { - ρ R_y+min_ V {D( VW |P )+ ρ·I(X;Y|U)}}.Next,min_ V {D( VW |P )+ ρ·I(X;Y|U)}= min_ V,Q {∑_x,uP(x,u)∑_y V(y|x,u)logV(y|x,u)/W(y|x)                                + ρ∑_x,u,y P(x,u)V(y|x,u) logV(y|x,u)/Q(y|u)}= min_ V,Q {∑_x,u,y P(x,u)V(y|x,u) [logV(y|x,u)/W(y|x)+ ρlogV(y|x,u)/Q(y|u)]}= min_ V,Q {∑_x,u,y P(x,u)V(y|x,u) log[ V^1+ρ(y|x,u)/W(y|x)Q^ρ(y|u)]}.First, we minimize over the auxiliary channel V. Holding the auxiliary channel Q fixed, and differentiating w.r.t. V(y|x,u), we find that the minimizing distribution is given byV^*(y|x,u) =W^1/1+ρ(y|x)Q^ρ/1+ρ(y|u) /∑_y' W^1/1+ρ(y'|x)Q^ρ/1+ρ(y'|u) .Substituting it back into (<ref>) and summing over y, we get thatmin_ Q {∑_x,u,y P(x,u)V^*(y|x,u) log[[V^*(y|x,u)]^1+ρ/W(y|x)Q^ρ(y|u)]}=min_ Q { -(1+ρ) ∑_x,u P(x,u) log[ ∑_y W^1/1+ρ(y|x)Q^ρ/1+ρ(y|u)]}=min_ Q { -(1+ρ) ∑_u P(u)∑_x P(x|u) log[ ∑_y W^1/1+ρ(y|x)Q^ρ/1+ρ(y|u)]}≥min_ Q { -(1+ρ) ∑_u P(u) log[ ∑_x P(x|u) ∑_y W^1/1+ρ(y|x)Q^ρ/1+ρ(y|u)]},where inequality (<ref>) is due to Jensen's inequality. Next, we minimize the lower bound over Q. Differentiating the last expression w.r.t. Q(y|u), we find that the minimizing distribution is given byQ^*(y|u) = [ Φ(u,y,ρ) ]^1+ρ/∑_y'[ Φ(u,y',ρ) ]^1+ρ. Substituting (<ref>) into (<ref>), we get-(1+ρ) ∑_u P(u) log[ ∑_x P(x|u) ∑_y W^1/1+ρ(y|x)[Q^*(y|u)]^ρ/1+ρ] = -(1+ρ) ∑_u P(u)log[ ∑_x P(x|u)∑_y W^1/1+ρ(y|x) [ Φ(u,y,ρ) ]^ρ/{∑_y'[ Φ(u,y',ρ) ]^1+ρ}^ρ/1+ρ] = -(1+ρ) ∑_u P(u) log[∑_y(Φ(u,y,ρ)·[ Φ(u,y,ρ) ]^ρ) /{∑_y'[ Φ(u,y',ρ) ]^1+ρ}^ρ/1+ρ] = -(1+ρ) ∑_u P(u) log[∑_y[ Φ(u,y,ρ) ]^1+ρ/{∑_y'[ Φ(u,y',ρ) ]^1+ρ}^ρ/1+ρ]= -(1+ρ) ∑_u P(u) log[{∑_y[ Φ(u,y,ρ) ]^1+ρ}^1/1+ρ] = -∑_u P(u) log{∑_y[ Φ(u,y,ρ) ]^1+ρ},which completes the proof of eq. (<ref>).§.§ Derivation of eq. 
(<ref>) Similarly as in (<ref>)-(<ref>),Ê_y,2( R_y, R_z )△=min_ V {D( VW |P )+ [ I(U;Y) + [ I(X;Y|U)-R_y]_+ - R_z]_+}=min_ V {D( VW |P )+max_μ∈ [0,1] {μ·[ I(U;Y) + max_ρ∈ [0,1] {ρ·[ I(X;Y|U) - R_y] } - R_z] }}=min_ V {D( VW |P )+max_μ∈ [0,1] max_ρ∈ [0,1] {μ·[ I(U;Y) - R_z]+ μρ·[ I(X;Y|U) - R_y] }}=min_ V {D( VW |P )+max_μ∈ [0,1] max_λ∈ [0,μ] {μ·[ I(U;Y) - R_z]+ λ·[ I(X;Y|U) - R_y] }}=max_μ∈ [0,1] max_λ∈ [0,μ] { - λ R_y - μ R_z+min_ V {D( VW |P )+μ·I(U;Y) + λ·I(X;Y|U)}}.Now, for the inner-most minimization,min_ V {D( VW |P )+μ·I(U;Y) + λ·I(X;Y|U) }= min_ V,Q,T {∑_x,uP(x,u)∑_y V(y|x,u)logV(y|x,u)/W(y|x)+ μ∑_x,u,y P(x,u)V(y|x,u) logQ(y|u)/T(y) + λ∑_x,u,y P(x,u)V(y|x,u) logV(y|x,u)/Q(y|u)}= min_ V,Q,T {∑_x,u,y P(x,u)V(y|x,u) [logV(y|x,u)/W(y|x)+ μlogQ(y|u)/T(y) + λlogV(y|x,u)/Q(y|u)]}= min_ V,Q,T {∑_x,u,y P(x,u)V(y|x,u) log[ V^1+λ(y|x,u)/W(y|x) T^μ(y) Q^λ - μ(y|u)]}.First, we minimize over V. Holding Q and T fixed, and differentiating w.r.t. V(y|x,u), we find that the minimizing V is given byV^*(y|x,u) =W^1/1+λ(y|x) T^μ/1+λ(y) Q^λ - μ/1+λ(y|u) /∑_y' W^1/1+λ(y'|x) T^μ/1+λ(y') Q^λ - μ/1+λ(y'|u) .Substituting (<ref>) into (<ref>) and summing over y, we getmin_ Q,T {∑_x,u,y P(x,u)V^*(y|x,u) log[[V^*(y|x,u)]^1+λ/W(y|x) T^μ(y) Q^λ - μ(y|u)]}=min_ Q,T { -(1+ λ) ∑_x,u P(x,u) log[ ∑_y W^1/1+λ(y|x) T^μ/1+λ(y) Q^λ - μ/1+λ(y|u)]}=min_ Q,T { -(1+ λ) ∑_u P(u)∑_x P(x|u) log[ ∑_y W^1/1+λ(y|x) T^μ/1+λ(y) Q^λ - μ/1+λ(y|u)]}≥min_ Q,T { -(1+ λ) ∑_u P(u) log[ ∑_x P(x|u) ∑_y W^1/1+λ(y|x) T^μ/1+λ(y) Q^λ - μ/1+λ(y|u)]},where (<ref>) is due to Jensen's inequality. Next, we minimize the lower bound over Q, while holding T fixed. Differentiating the last expression w.r.t. Q(y|u), we find that the minimizing Q is given byQ^*(y|u) = [ T^μ/1+λ(y)∑_xP(x|u)W^1/1+λ(y|x) ]^1+ λ/1 + μ/∑_y'[ T^μ/1+λ(y') ∑_x'P(x'|u)W^1/1+λ(y'|x') ]^1+ λ/1 + μ . Substituting into (<ref>), we havemin_ T {-(1+ λ) ∑_u P(u) log[ ∑_x P(x|u) ∑_y W^1/1+λ(y|x) T^μ/1+λ(y) [Q^*(y|u)]^λ - μ/1+λ] }=min_ T- (1+ λ) ∑_u P(u) log[ ∑_x P(x|u).                        . ×∑_y W^1/1+λ(y|x) T^μ/1+λ(y) [ T^μ/1+λ(y)Φ(u,y,λ) ]^1+ λ/1 + μ·λ - μ/1+λ/{∑_y'[ T^μ/1+λ(y') Φ(u,y',λ) ]^1+ λ/1 + μ} ^λ - μ/1+λ] = min_ T -(1+ λ) ∑_u P(u)log[∑_y[T^μ/1+λ(y)Φ(u,y,λ)] ·[T^μ/1+λ(y)Φ(u,y,λ)]^λ - μ/1+μ/{∑_y'[ T^μ/1+λ(y') Φ(u,y',λ) ]^1+ λ/1 + μ} ^λ - μ/1+λ]= min_ T -(1+ λ) ∑_u P(u) log[∑_y[T^μ/1+λ(y)Φ(u,y,λ) ]^ 1 + λ/1+μ/{∑_y'[ T^μ/1+λ(y') Φ(u,y',λ) ]^1+ λ/1 + μ} ^λ - μ/1+λ] = min_ T -(1+ λ) ∑_u P(u) log[{∑_y[ T^μ/1+λ(y) Φ(u,y,λ) ]^1+ λ/1 + μ} ^ 1 + μ/1+λ] = min_ T -(1+ μ ) ∑_u P(u) log[∑_y[ T^μ/1+λ(y) Φ(u,y,λ) ]^1+ λ/1 + μ]= min_ T -(1+ μ ) ∑_u P(u) log[∑_y T^μ/1+μ(y)[ Φ(u,y,λ) ]^1+ λ/1 + μ]≥min_ T -(1+ μ )log[ ∑_u P(u) ∑_y T^μ/1+μ(y)[Φ(u,y,λ) ]^1+ λ/1 + μ] .Next, we minimize over T. Differentiating w.r.t. T(y), we getT^*(y) = {∑_u P(u)[ Φ(u,y,λ) ]^1+ λ/1 + μ}^1 + μ/∑_y'{∑_u' P(u')[ Φ(u',y',λ) ]^1+ λ/1 + μ}^1 + μ . Substituting into (<ref>), we finally get-(1+ μ )log[ ∑_u P(u) ∑_y [T^*(y)]^μ/1+μ[Φ(u,y,λ) ]^1+ λ/1 + μ]=-(1+ μ )log[ ∑_u P(u) ∑_y[Φ(u,y,λ) ]^1+ λ/1 + μ.                                             .×{∑_ũ P(ũ)[ Φ(ũ,y,λ) ]^1+ λ/1 + μ}^μ/{∑_y'{∑_u' P(u')[ Φ(u',y',λ) ]^1+ λ/1 + μ}^1 + μ} ^μ/1+μ] =-(1+ μ )log[ ∑_y( {∑_u P(u)[ Φ(u,y,λ) ]^1+ λ/1 + μ}·{∑_ũ P(ũ)[Φ(ũ,y,λ) ]^1+ λ/1 + μ}^μ) /{∑_y'{∑_u' P(u')[ Φ(u',y',λ) ]^1+ λ/1 + μ}^1 + μ} ^μ/1+μ] =-(1+ μ )log[ ∑_y{∑_u P(u)[ Φ(u,y,λ) ]^1+ λ/1 + μ}^1 +μ/{∑_y'{∑_u' P(u')[Φ(u',y',λ) ]^1+ λ/1 + μ}^1 + μ} ^μ/1+μ] =-(1+ μ )log{∑_y{∑_u P(u)[Φ(u,y,λ) ]^1+ λ/1 + μ}^1 + μ} ^1/1+μ= -log∑_y{∑_u P(u)[Φ(u,y,λ) ]^1+ λ/1 + μ}^1 + μ. 
Hence, (<ref>) is now proved, as well as the lower bound given in (<ref>).§ ANALYZING THE GALLAGER-STYLE LOWER BOUNDS §.§ A Study for E_y,1(R_y) As in the single user case, we expect to find a critical rate and a maximal rate. By maximal rate, that will be denoted by R_max,we mean sup{R_y: E_y,1(R_y)>0 }.By critical rate, to be denoted by R_, we mean the boundary between the range where E_y,1(R_y) is affine and the range where it is curvy. Let E_y,1(R_y) = max_ρ∈ [0,1] { E_0(ρ)- ρ R_y} ,where we have definedE_0(ρ) =- ∑_u P(u) log∑_y[ ∑_x P(x|u)W^1/1+ρ(y|x) ]^1+ρ.Setting the partial derivative of the bracketed part of (<ref>) equal to 0, we getR_y =∂ E_0(ρ)/∂ρ .Following the same considerations as in <cit.>, if some ρ∈ [0,1] satisfies (<ref>), then it must maximize (<ref>). It turns out that a solution to (<ref>) exists if.∂ E_0(ρ)/∂ρ |_ρ=1≤R_y≤.∂ E_0(ρ)/∂ρ |_ρ=0 .In this range, it is convenient to use (<ref>) to relate E_y,1(R_y) and R_y parametrically as functions of ρ. For the interval 0 ≤ρ≤ 1, this givesE_y,1(R_y)= E_0(ρ)- ρ·∂ E_0(ρ)/∂ρ,R_y =∂ E_0(ρ)/∂ρ .For R_y < ∂ E_0(ρ) / ∂ρ |_ρ = 1, the parametric equations are not valid. In this case, the maximum occurs at ρ = 1. Thus, E_y,1(R_y) is affine with slope -1:E_y,1(R_y)= E_0(1)-R_y,where E_0(1) =- ∑_u P(u) log∑_y[ ∑_x P(x|u) √( W(y|x) )]^2.Now, we can find R_max and R_, which are given by the right-most side and the left-most side of (<ref>), respectively. Differentiating E_0(ρ) w.r.t. ρ and substituting ρ = 0 gives R_max =.∂ E_0(ρ)/∂ρ |_ρ=0=I_P,W(X;Y|U),where I_P,W(X;Y|U) is the conditional mutual information induced by the channel W(y|x) and the code distribution P(u,x). Next, define F(u,y) = ∑_x P(x|u) √( W(y|x) ),and G(u,y) = ∑_x P(x|u) √( W(y|x) )log W(y|x) .After some algebra, we find thatR_ =.∂ E_0(ρ)/∂ρ |_ρ=1= - ∑_u P(u) ∑_y[ F^2(u,y) log F(u,y) - 1/2 F(u,y) G(u,y)]/∑_y'F^2(u,y').§.§ A Study for E_y,2(R_y, R_z)Let E_y,2(R_y, R_z)= max_μ∈ [0,1] max_λ∈ [0,μ] { -log∑_y(∑_u P(u)[ ∑_x P(x|u)W^1/1+λ(y|x) ]^1+λ/1+μ)^1+μ                                                              - λ R_y-μ R_z}= max_μ∈ [0,1] max_s ∈ [0,1] {-log∑_y(∑_u P(u)[ ∑_x P(x|u)W^1/1+ s μ(y|x) ]^1+ s μ/1+μ)^1+μ                                                               - s μ R_y-μ R_z} ,and defineE_1(s, μ)=-log∑_y{∑_u P(u)[ ∑_x P(x|u)W^1/1+s μ(y|x) ]^1+s μ/1+μ}^1+μ,such thatE_y,2(R_y, R_z) = max_μ∈ [0,1] max_s ∈ [0,1] { E_1(s, μ)- s μ R_y-μ R_z} .Setting the partial derivatives of the bracketed part of (<ref>) to zero, we get∂/∂ sE_1(s, μ ) = μ R_y ∂/∂μE_1(s, μ ) = s R_y + R_z,or, equivalently,R_y = 1/μ·∂/∂ sE_1(s, μ)R_z =∂/∂μE_1(s, μ)-s/μ·∂/∂ sE_1(s, μ).Now, if some (μ, s) ∈ [0,1]^2 satisfies (<ref>) and (<ref>), we may relate E_y,2(R_y, R_z), R_y and R_z parametrically as functions of s and μ. This givesE_y,2(R_y, R_z)=E_1(s, μ)-μ·∂/∂μ E_1(s, μ)R_y =1/μ·∂/∂ sE_1(s, μ)R_z =∂/∂μ E_1(s, μ)- s/μ·∂/∂ sE_1(s, μ) .For explicit expressions for the partial derivatives of E_1(s, μ) w.r.t. s and μ, we first define A(y,s,μ)= ∑_u P(u)[ ∑_x P(x|u)W^1/1+s μ(y|x) ]^1+s μ/1+μ B(u,y,s,μ)= ∑_x P(x|u)W^1/1+s μ(y|x) E(u,y,s,μ)= ∑_x P(x|u)W^1/1+s μ(y|x)log W(y|x) C(y,s,μ)= ∑_u P(u)[B(u,y,s,μ)]^1+s μ/1+μ( s-1/1+μ·log B(u,y,s,μ) - s/1+s μ·E(u,y,s,μ)/B(u,y,s,μ))D(y,s,μ)= ∑_u P(u)[B(u,y,s,μ)]^1+s μ/1+μ(log B(u,y,s,μ) - 1/1+s μ·E(u,y,s,μ)/B(u,y,s,μ)),and get that the partial derivative w.r.t. s is given by∂/∂ sE_1(s, μ) =- μ·∑_y A^μ(y,s,μ)D(y,s,μ) /∑_y' A^1+μ(y',s,μ) ,and the partial derivatives w.r.t. 
μ is given by∂/∂μE_1(s, μ) =-∑_y A^1+μ(y,s,μ) ·[ log A(y,s,μ) + C(y,s,μ)/A(y,s,μ)] /∑_y' A^1+μ(y',s,μ) .Consider the rate pair (R_y,R_z) for which both (<ref>) and (<ref>) hold with s=μ=1:R_y =. ∂/∂ sE_1(s, 1)|_s=1 R_y + R_z =.∂/∂μE_1(1, μ) |_μ=1,which is the corner point of the affine rate region. As an immediate consequence, due to the monotonicity of E_1, we get that for low rates, i.e., for rate pairs (R_y,R_z) that satisfy R_y≤. ∂/∂ sE_1(s, 1)|_s=1 ,andR_y + R_z≤. ∂/∂μE_1(1, μ)|_μ=1, the maximizers are s^*=μ^*=1, and E_y,2(R_y, R_z) is given byE_y,2(R_y, R_z) = E_1(1, 1)- ( R_y +R_z ),whereE_1(1, 1)=-log∑_y{∑_x P(x) √(W(y|x))}^2.According to (<ref>), we can find the maximal sum-rate in the affine region. Let F(y) = ∑_x P(x) √( W(y|x) ),and G(y) = ∑_x P(x) √( W(y|x) )log W(y|x) .After some algebra, we find that the maximal sum-rate is given byR_y + R_z≤. ∂/∂μE_1(1, μ)|_μ=1 = -∑_y[ F^2(y) log F(y) - 1/2 F(y) G(y)]/∑_y'F^2(y').The error exponent E_y,2(R_y, R_z) depends solely on R_z+R_y if and only if the maximizing s ∈ [0,1] is s^*=1. In this case,E_y,2(R_y, R_z) = max_μ∈ [0,1] {-log∑_y{∑_u P(u)[ ∑_x P(x|u)W^1/1+ μ(y|x) ]^1+ μ/1+μ}^1+μ-μ R_y-μ R_z}= max_μ∈ [0,1] {-log∑_y{∑_u P(u)∑_x P(x|u)W^1/1+ μ(y|x)}^1+μ-μ ( R_y+R_z ) }= max_μ∈ [0,1] {-log∑_y{∑_x P(x) W^1/1+ μ(y|x)}^1+μ-μ ( R_y+R_z ) },which means that E_y,2(R_y, R_z) = E_(R_y+ R_z, P_X), i.e., the ordinary random coding error exponent at rate R_y+R_z for an i.i.d. drawn code with distribution P_X. Next, we find the rate region for which E_y,2(R_y, R_z) depends solely on R_z+R_y, but it is not affine in the sum-rate.Consider the rate pairs (R_y,R_z) for which both (<ref>) and (<ref>) hold with s=1:μ R_y =. ∂/∂ sE_1(s, μ)|_s=1R_y + R_z = ∂/∂μE_1(1, μ).Let Γ_S1 denote the curve given by eqs. (<ref>)-(<ref>):Γ_S1 = {(R_y,R_z) | R_y =1/μ·. ∂/∂ sE_1(s, μ)|_s=1,R_y + R_z = ∂/∂μE_1(1, μ),0< μ≤ 1}.Now, in the set of (R_y,R_z), with . ∂/∂μE_1(1, μ)|_μ=1≤R_y + R_z≤. ∂/∂μE_1(1, μ)|_μ=0,and being underneath the curve Γ_S1, the maximizer is s^*=1, and E_y,2(R_y, R_z) is given by (<ref>). Notice that the left-hand-side and the right-hand-side of (<ref>) are expressions for the critical-rate and the maximal-rate for the channel W_1(y|x), respectively, and hence, the latter cannot be smaller than the former.Before moving forward, let us obtain a simple information-theoretic expression for the maximal sum-rate. According to the right-hand side of (<ref>), we only have to differentiate E_1(1, μ) w.r.t. μ and then substitute μ=0. We getR_y + R_z≤. ∂/∂μE_1(1, μ)|_μ=0 =I_P_X,W(X;Y) ,where I_P_X,W(X;Y) is the mutual information induced by the channel W_1(y|x) and the code distribution P(x). Let us now turn to the other extreme, where E_y,2(R_y, R_z) depends solely on R_z. This happens if and only if the maximizing s ∈ [0,1] is s^*=0. In this case,E_y,2(R_y, R_z) = max_μ∈ [0,1] {-log∑_y{∑_u P(u)[ ∑_x P(x|u)W(y|x) ]^1/1+μ}^1+μ -μ R_z}= max_μ∈ [0,1] {-log∑_y{∑_u P(u)V^1/1+μ(y|u)}^1+μ -μ R_z},which means that E_y,2(R_y, R_z) = E_(R_z, P_U), i.e., the ordinary random coding error exponent at rate R_z for an i.i.d. code with distribution P_U, where V is defined to be the equivalent channel from U to Y. The simple explanation for the fact that E_y,2(R_y, R_z) becomes independent of R_y, for high R_y, is that the satellite codewords behave like pure noise. Next, we find the region where E_y,2(R_y, R_z) depends solely on R_z.Consider the rate pairs (R_y,R_z) for which both (<ref>) and (<ref>) hold with s=0:μ R_y =. 
∂/∂ sE_1(s, μ)|_s=0 R_z = ∂/∂μE_1(0, μ),Let Γ_S0 denote the curve given by eqs. (<ref>)-(<ref>):Γ_S0 = {(R_y,R_z) | R_y =1/μ·. ∂/∂ sE_1(s, μ)|_s=0,R_z = ∂/∂μE_1(0, μ), 0< μ≤ 1}.In addition, we have the following corner point for μ = 1: (R̃_y,R̃_z) = ( . ∂/∂ sE_1(s, 1)|_s=0, .∂/∂μE_1(0, μ)|_μ = 1),and we use it to define the straight line connecting that corner point to the R_y-axis: Γ̃_S0 = {(R_y,R_z) | R_y =R̃_y, 0 ≤R_z≤R̃_z},which is the set of all (R_y,R_z) for which the maximizers are s^*=0 and μ^*=1. Let Γ̂_S0 be defined by Γ_S0∪Γ̃_S0. In fact, the curve Γ̂_S0 is the borderline between the region where E_y,2(R_y, R_z) depends on R_y (affine or curvy) and the region where E_y,2(R_y, R_z) is independent of R_y. The set of all (R_y,R_z) that are above the curve Γ̂_S0 defines the region where E_y,2(R_y, R_z) is independent of R_y. In addition, let us obtain a simple informational expression for the maximum of R_z. According to (<ref>), we only have to differentiate E_1(0, μ) w.r.t. μ and then substitute μ=0. We getR_z≤. ∂/∂μE_1(0, μ)|_μ=0 =I_P_U,V(U;Y) ,where I_P_U,V(U;Y) is the mutual information induced by the channel {V(y|u)} and {P(u)}. In the region R_z≤. ∂/∂μE_1(0, μ)|_μ=1 ,the maximizer is μ^* = 1, and E_y,2(R_y, R_z) is affine in R_z and is given byE_y,2(R_y, R_z) = E_1(0, 1)-R_z ,whereE_1(0, 1)=-log∑_y{∑_u P(u) √(V(y|u))}^2.The third region is the set of all (R_y,R_z) for which the maximizing s is in (0,1). In this case, we use (<ref>)-(<ref>), which hold for every s ∈ (0,1) and μ∈ [0,1] such that both (<ref>) and (<ref>) are satisfied. This region can be devided into two complementary regions. In the first one, the maximizer is μ^* = 1, and E_y,2(R_y, R_z) is affine in R_z and curvy in R_y, while in the second one, the maximizer μ^* is in (0,1), and E_y,2(R_y, R_z) is curvy in both R_z and R_y. The borderline between those two regions is given by the curveΓ_μ 1 = {(R_y,R_z) | R_y =∂/∂ sE_1(s, 1) ,sR_y + R_z =. ∂/∂μE_1(s, μ)|_μ=1,0 ≤ s ≤ 1}.For R_z ≥I_P_U,V(U;Y)R_y + R_z ≥I_P_X,W(X;Y) ,the maximizers are s^*=μ^*=0, and then E_y,2(R_y, R_z) = 0.§ APPENDIX §.§ Proof of Theorem 3 Regarding decoder (<ref>)-(<ref>), let us selectf(Q_UXZ) = I_Q(U;Z)+I_Q(X;Z|U) .We show that (<ref>) achieves the maximum of E_2(s_1(Q_U_0X_00Z), Q_Z), given by Lemma 3, and therefore, the error exponent of this decoder is as large as that of the optimal decoder. First, the threshold s_0(Q_U_0Z) can be easily simplified ass_0(Q_U_0Z)=R_y + max_{ Q_X|U_0Z∈𝒮(Q_U_0Z):   I_Q(X;Z|U_0) ≤ R_y}[ f(Q_U_0XZ) - I_Q(X;Z|U_0) ] =R_y + max_{ Q_X|U_0Z∈𝒮(Q_U_0Z):   I_Q(X;Z|U_0)≤ R_y}[ I_Q(U_0;Z)+I_Q(X;Z|U_0) - I_Q(X;Z|U_0) ]=R_y +I_Q(U_0;Z) ,such that s_1(Q_U_0X_00Z)= max{ R_y +I_Q(U_0;Z),I_Q(U_0;Z)+ I_Q(X_00;Z|U_0)}=I_Q(U_0;Z)+ max{ R_y ,I_Q(X_00;Z|U_0)}.In general, the constraint of the inner minimization problem defining E_2(s_1(Q_U_0X_00Z), Q_Z) is given byf(Q_UXZ) +[ R_y - I_Q(X;Z|U) ]_+≥ s_1(Q_U_0X_00Z),which can now be written asI_Q(U;Z)+ I_Q(X;Z|U) +[ R_y- I_Q(X;Z|U)]_+                              ≥ I_Q(U_0;Z)+ max{ R_y ,I_Q(X_00;Z|U_0)}.Substracting R_y from both sides givesI_Q(U;Z)+ I_Q(X;Z|U)-R_y+[ R_y- I_Q(X;Z|U)]_+                              ≥ I_Q(U_0;Z)+ max{ 0 ,I_Q(X_00;Z|U_0)-R_y},or, I_Q(U;Z) + [ I_Q(X;Z|U) - R_y]_+≥I_Q(U_0;Z)+[I_Q(X_00;Z|U_0)-R_y]_+.Defining D(Q_UXZ) △= I_Q(U;Z) + [ I_Q(X;Z|U) - R_y]_+, we haveE_2(s_1(Q_U_0X_00Z), Q_Z) = min_ Q_UX|Z∈𝒮(Q_Z):   D(Q_UXZ)≥ D(Q_U_0X_00Z) D(Q_UXZ)= I_Q(U_0;Z)+[I_Q(X_00;Z|U_0)-R_y]_+ ,which is the same as on the right hand side of (<ref>).AACOVER72 T. M. 
Cover, “Broadcast channels," IEEE Trans. on Inform. Theory,vol. 18, no. 1, pp. 2–14, January 1972.BERGMANS1 P. P. Bergmans, “Random coding theorem for broadcast channels with degraded components,"IEEE Trans. on Inform. Theory, vol. IT-19, pp. 197–-207, March 1973. BERGMANS2 P. P. Bergmans, “A simple converse for broadcast channels with additive white Gaussian noise,"IEEE Trans. on Inform. Theory, vol. IT-20, pp. 279-–280, March 1974.GALLAGER74 R. G. Gallager, “Capacity and coding for degraded broadcast channels,"Probl. Pered. Inform., vol. 10, no. 3, pp. 3–-14, July–-September 1974;translated in Probl. Inform. Transm., pp. 185-–193, July-–September 1974.KM77 J. Körner and K. Marton, “General broadcast channels with degraded message sets," IEEE Trans. on Inform. Theory, vol. IT-23, pp. 60–-64, January 1977. CK78 I. Csiszár and J. Körner, “Broadcast channels with confidential messages,"IEEE Trans. on Inform. Theory, vol. IT-24, pp. 339-–348, May 1978.ELGAMAL79 A. El Gamal, “The capacity of a class of broadcast channels,"IEEE Trans. on Inform. Theory, vol. IT-25, pp. 166-–169, March 1979.KS80 J. Körner and A. Sgarro, “Universally attainable error exponents for broadcast channels with degraded message sets," IEEE Trans. on Inform. Theory, vol. 26, no.6, pp. 670-–679, November 1980.KM11 Y. Kaspi and N. Merhav, “Error exponents for broadcast channels with degraded message sets," IEEE Trans. on Inform. Theory, vol. 57, no. 1, pp. 101–123, January 2011.MERHAV2014 N. Merhav, “Exact random coding error exponents of optimal bin index decoding," IEEE Trans. on Inform. Theory, vol. 60, no. 10, pp. 6024–6031, October 2014.CK11 I. Csiszár and J. Körner, Information Theory: Coding Theorems for Discrete Memoryless Systems, Cambridge University Press, 2011.GAL68 R. G. Gallager, Information Theory and Reliable Communication, New York, Wiley 1968.SHUL03 N. Shulman, Communication over an Unknown Channel via Common Broadcasting,Ph.D. dissertation, Department of Electrical Engineering - Systems, Tel Aviv University, July 2003. http://www.eng.tau.ac.il/∼shulman/papers/Nadav_—PhD.pdfMERHAV09N. Merhav,“Statistical physics and information theory,” Foundations and Trends in Communications and Information Theory, vol. 6, nos. 1–2, pp. 1–212, 2009.
Equivariance Through Parameter-Sharing

Siamak Ravanbakhsh, Jeff Schneider, Barnabás Póczos
School of Computer Science, Carnegie Mellon University, 5000 Forbes Ave., Pittsburgh, PA, USA 15217
Correspondence: Siamak Ravanbakhsh <mravanba@cs.cmu.edu>
Keywords: equivariance, parameter-sharing, deep learning, neural networks

We propose to study equivariance in deep neural networks through parameter symmetries. In particular, given a group 𝒢 that acts discretely on the input and output of a standard neural network layer φ_𝐰: ℝ^M → ℝ^N, we show that φ_𝐰 is equivariant with respect to 𝒢-action iff 𝒢 explains the symmetries of the network parameters 𝐰. Inspired by this observation, we then propose two parameter-sharing schemes to induce the desirable symmetry on 𝐰. Our procedure for tying the parameters achieves 𝒢-equivariance and, under some conditions on the action of 𝒢, it guarantees sensitivity to all other permutation groups outside 𝒢.

Given enough training data, a multi-layer perceptron would eventually learn the domain invariances in a classification task. Nevertheless, the success of convolutional and recurrent networks suggests that encoding the domain symmetries through shared parameters can significantly boost the generalization of deep neural networks. The same observation can be made in deep learning for semi-supervised and unsupervised learning in structured domains. This raises an important question that is addressed in this paper: what kind of priors on input/output structure can be encoded through parameter-sharing? This work is an attempt at answering this question when our priors are in the form of discrete domain symmetries. To formalize this type of prior, a family of transformations of the input and output of a neural layer is expressed as a group "action" on the input and output. The resulting neural network is invariant to this action if transformations of the input within that particular family do not change the output (rotation-invariance).
However, if the output is transformed in a predictable way as we transform the input, the neural layer is equivariant to the action of the group. Our goal is to show that parameter-sharing can be used to achieve equivariance to any discrete group action. Application of group theory in machine learning has been the topic of various works in the past <cit.>. In particular, many probabilistic inference techniques have been extended to graphical models with known symmetry groups <cit.>. Deep and hierarchical models have used a variety of techniques to study or obtain representations that isolate transformations from the "content" <cit.>. The simplest method of achieving equivariance is through data-augmentation <cit.>. Going beyond augmentation, several methods directly apply the group action, in one way or another, by transforming the data or its encodings using group members <cit.>. An alternative path to invariance is via harmonic analysis; in particular, a cascade of wavelet transforms is investigated in <cit.>. More recently, <cit.> study steerable filters <cit.> as a general means for achieving equivariance in deep networks. Invariance and equivariance through parameter-sharing is also discussed in several prior works <cit.>. The desirability of using parameter-sharing for this purpose is mainly due to its simplicity and computational efficiency. However, it also suggests possible directions for discovering domain symmetries through regularization schemes.

Following the previous work on the study of symmetry in deep networks, we rely on group theory and group actions to formulate the invariances and equivariances of a function. Due to the discrete nature of parameter-sharing, our treatment here is limited to permutation groups. The action of a permutation group G can model discrete transformations of a set of variables, such as translation and 90° rotation of pixels around any center in an image. If the output of a function transforms with a G-action as we transform its input with a different G-action, the function is equivariant with respect to the action of G. For example, in a convolution layer, as we translate the input, the feature-maps are also translated. If the output does not transform at all, the function is invariant to the action of G. Therefore, invariance is a special case of equivariance. In this example, different translations correspond to the actions of different members of G.

The novelty of this work is its focus on "model symmetry" as a gateway to equivariance. This gives us new theoretical guarantees for a "strict" notion of equivariance in neural networks. The core idea is simple: consider a colored bipartite graph Ω representing a neural network layer, where edges of the same color represent tied parameters. This neural network layer, as a function, is equivariant to the actions of a given group G (and nothing more) iff the action of G is the symmetry group of Ω – there is a simple bijection between parameter symmetries and equivariances of the corresponding neural network. The problem then boils down to designing colored bipartite graphs with given symmetries, which constitutes a major part of this paper. <ref> demonstrates this idea. [Throughout this paper, since we deal with finite sets, we use circular shift and circular convolution instead of shift and convolution. The two can be made identical with zero-padding of the input.]

For the necessary background on group theory see the Appendix. In the following, <ref> formalizes equivariance w.r.t. discrete group action.
<ref> relates the model symmetries of a neural layer to its equivariance. <ref> then builds on this observation to introduce two procedures for parameter-sharing that achieve a desirable equivariance. Here, we also see how group and graph convolution, as well as deep-sets, become special instances of our parameter-sharing procedure, which provides new insight and improved design in the case of group convolution. When the input and output of the layer have a one-to-one mapping, we see that the design problem reduces to a well-known problem in combinatorics.

§ GROUP ACTION AND EQUIVARIANCE

Let x = [x_1,…,x_N] ∈ ℝ^N denote a set of variables and G = {g} be a finite group. The discrete action of G on x is in the form of permutations of the indices in N = {1,…,N}. This group is a subgroup of the symmetric group S_N, the group of all N! permutations of N objects. We use 𝐍 = [1,…,N] to denote the ordered counterpart to N; the G-action on this vector, g𝐍 ≐ [g1,…,gN], is a simple permutation. Using x_𝐍 to denote x, the discrete action of g ∈ G on x ∈ ℝ^N is given by gx ≐ x_{g𝐍}. The G-action on N induces a permutation group G_N that is not necessarily isomorphic to G itself. G_N ≤ S_N captures the structure of G when it acts on N. We use g_N to denote the image of g ∈ G in G_N. The G-action is faithful iff the two groups are isomorphic, G ≅ G_N – that is, the G-action preserves the structure of G. In this case, each g ∈ G maps to a distinct permutation, g_N ≠ g'_N ∀ g ≠ g' ∈ G. Given any G-action on N we can efficiently obtain G_N; see the Appendix.

[Cyclic Group] Consider the cyclic group G = ℤ_6 and define its action on x ∈ ℝ^3 by defining it on the index set N = {1,2,3} as gn ≐ (g + n) mod 3, ∀ g ∈ ℤ_6. This action is not faithful. For example, the actions of g = 1 and g = 4 result in the same permutation of the variables in x: a single step of circular shift. With the above action, the resulting permutation group G_N is isomorphic to ℤ_3 < ℤ_6. Now consider the same group G = ℤ_6 with a different action on N: gn ≐ (g − n) mod 3, ∀ g ∈ ℤ_6, where we replaced (+) with (−). Let G'_N be the resulting permutation group. Here again G'_N ≅ ℤ_3. Although isomorphic, G'_N ≠ G_N, as they are different permutation groups of N.

Consider the function ϕ: ℝ^N → ℝ^M and let G_N and G_M be the actions of G on the input/output index sets N and M. The joint permutation group G_{N,M} is a sub-direct product (or pairing) of G_N and G_M,

G_{N,M} = G_N ⊙ G_M ≐ {(g_N, g_M) | g ∈ G}.

We are now ready to define equivariance and invariance. ϕ(·) is G_{N,M}-equivariant iff

g_M ϕ(x) = ϕ(g_N x) ∀ x ∈ ℝ^N, (g_N, g_M) ∈ G_{N,M}.

Moreover, if G_M = {e} is trivial, we have ϕ(g_N x) = ϕ(x) ∀ x ∈ ℝ^N, g ∈ G, and ϕ(·) is G_N-invariant. g_N and g_M can also be represented using permutation matrices 𝐆_N ∈ {0,1}^{N×N} and 𝐆_M ∈ {0,1}^{M×M}. The equivariance relation of <ref> then becomes

𝐆_M ϕ(x) = ϕ(𝐆_N x) ∀ x ∈ ℝ^N, (𝐆_N, 𝐆_M) ∈ G_{N,M}.

The following observation shows how the subgroup relationship affects equivariance and invariance. If the function ϕ: ℝ^N → ℝ^M is G_{N,M}-equivariant, then it is also H_{N,M}-equivariant for any permutation group H_{N,M} < G_{N,M}.

[Reverse Convolution] Consider the cyclic group G = ℤ_6 and, for g ∈ G, define the action on N = {1,2,3} to be gn ≐ (g + n) mod 3. Also let its action on M = {1,…,6} be gm ≐ (g − m) mod 6. In other words, the G-action on N performs a circular shift to the right and its action on M shifts variables to the left. Examples of the permutation matrix representation for two members of G_N and G_M are

2_N = ([ 0 1 0; 0 0 1; 1 0 0 ])   2_M = ([ 0 0 1 0 0 0; 0 1 0 0 0 0; 1 0 0 0 0 0; 0 0 0 0 0 1; 0 0 0 0 1 0; 0 0 0 1 0 0 ])

corresponding to shifts on vectors of different lengths. Now consider the function ϕ_W: ℝ^N → ℝ^M,

ϕ_W(x) ≐ W^⊤ x,   W = ([ 0 a b 0 a b; a b 0 a b 0; b 0 a b 0 a ]) ∀ a,b ∈ ℝ.

Using permutation matrices one could check the equivariance condition <ref> for this function.
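Such a check is easy to script. The following sketch (our illustration, not code from the paper; it assumes numpy and fixes one consistent indexing convention for the two shift actions) verifies g_M ϕ_W(x) = ϕ_W(g_N x) for every g ∈ ℤ_6:

```python
import numpy as np

a, b = 1.7, -0.4                       # two distinct tied parameters
W = np.array([[0, a, b, 0, a, b],
              [a, b, 0, a, b, 0],
              [b, 0, a, b, 0, a]])     # the 3x6 matrix of the example

def phi(x):                            # the layer phi_W(x) = W^T x
    return W.T @ x

def act_N(g, x):                       # Z_6 acting on N = {0,1,2} (mod 3)
    return np.array([x[(n + g) % 3] for n in range(3)])

def act_M(g, y):                       # Z_6 acting on M = {0,...,5} (mod 6)
    return np.array([y[(m - g) % 6] for m in range(6)])

x = np.random.default_rng(0).normal(size=3)
for g in range(6):                     # equivariance: g_M phi(x) = phi(g_N x)
    assert np.allclose(act_M(g, phi(x)), phi(act_N(g, x)))
print("phi_W is Z_6-equivariant")
```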
We can show that ϕ_W is equivariant to G_{N,M}. Consider 2 ∈ ℤ_6 and its images 2_N ∈ G_N and 2_M ∈ G_M. The l.h.s. of <ref>, applied to ϕ_W(x) = W^⊤x, is determined by

2_M W^⊤ = ([ 0 0 1 0 0 0; 0 1 0 0 0 0; 1 0 0 0 0 0; 0 0 0 0 0 1; 0 0 0 0 1 0; 0 0 0 1 0 0 ]) ([ 0 a b; a b 0; b 0 a; 0 a b; a b 0; b 0 a ]) = ([ b 0 a; 0 a b; a b 0; b 0 a; 0 a b; a b 0 ]),

which is equal to the matrix of the r.h.s.,

W^⊤ 2_N = ([ 0 a b; a b 0; b 0 a; 0 a b; a b 0; b 0 a ]) ([ 0 1 0; 0 0 1; 1 0 0 ]) = ([ b 0 a; 0 a b; a b 0; b 0 a; 0 a b; a b 0 ]),

so that 2_M ϕ_W(x) = ϕ_W(2_N x) for any x. One could verify this equality for all g ∈ ℤ_6. Now consider the group H_{N,M} < G_{N,M}, where H_N = G_N and the members of H_M = {0_M, 2_M, 4_M} perform left circular shifts of length 0, 2 and 4. It is easy to see that H_{N,M} ≅ ℤ_3. Moreover, since H_{N,M} < G_{N,M}, ϕ_W(·) above is H_{N,M}-equivariant as well. However, one prefers to characterize the equivariance properties of ϕ_W using G_{N,M} rather than H_{N,M}.

The observation above suggests that G_{N,M}-equivariance by itself is not restrictive enough. As an extreme case, a constant function ϕ(x) = 1 is equivariant to any permutation group G_{N,M} ≤ S_N × S_M. In this case, equivariance of ϕ with respect to a particular G_{N,M} is not very informative to us. To remedy this, we define a more strict notion of equivariance: we say a function ϕ: ℝ^N → ℝ^M is uniquely G_{N,M}-equivariant iff it is G_{N,M}-equivariant and it is "not" H_{N,M}-equivariant for any H_{N,M} > G_{N,M}.

§ SYMMETRY GROUPS OF A NETWORK

Given a group G and its discrete action through G_{N,M}, we are interested in defining parameter-sharing schemes for a parametric class of functions that guarantee their unique G_{N,M}-equivariance. We start by looking at a single neural layer and relate its unique G_{N,M}-equivariance to the symmetries of a colored multi-edged bipartite graph that defines parameter-sharing. We then show that the idea extends to multiple layers.

A colored multi-edged bipartite graph Ω = (N, M, α) is a triple, where N and M are its two sets of nodes, and α: N × M → 2^{1,…,C} is the edge function that assigns multiple edge-colors from the set {1,…,C} to each edge. Non-existing edges receive no color. We are interested in the symmetries of this structure. The set of permutations (π_N, π_M) ∈ S_N × S_M of nodes (within each part of the bipartite graph) that preserve all edge-colors defines the automorphism group Aut(Ω) ≤ S_N × S_M – that is, ∀ (n,m) ∈ N × M,

(π_N, π_M) ∈ Aut(Ω) ⇔ α(n,m) = α(π_N n, π_M m).

Alternatively, to facilitate the notation, we define the same structure (a colored multi-edged bipartite graph) as a set of binary relations between N and M – that is, Ω = (N, M, {R_c}_{1 ≤ c ≤ C}), where each relation is associated with one color,

R_c = {(n,m) ∈ N × M | c ∈ α(n,m)}.

This definition of the structure gives an alternative expression for Aut(Ω):

(π_N, π_M) ∈ Aut(Ω) ⇔ ( (n,m) ∈ R_c ⇔ (π_N n, π_M m) ∈ R_c ) ∀ c, n, m.

The significance of this structure is that it defines a parameter-sharing scheme in a neural layer, where the same edge-colors correspond to the same parameters. Consider the function ϕ = [ϕ_1,…,ϕ_M]: ℝ^N → ℝ^M,

ϕ_m(x; θ, Ω) ≐ σ( ∑_n ∑_{c ∈ α(n,m)} θ_c x_n ) ∀ m,

where σ: ℝ → ℝ is a strictly monotonic nonlinearity and θ = [θ_1,…,θ_c,…,θ_C] is the parameter vector for this layer. The following key theorem relates the equivariances of ϕ(·; θ, Ω) to the symmetries of Ω.

For any θ ∈ ℝ^C s.t. θ_c ≠ θ_{c'} ∀ c ≠ c', the function ϕ(·; θ, Ω) is uniquely Aut(Ω)-equivariant.

For any G_{N,M} ≤ Aut(Ω), the function ϕ(·; θ, Ω) is G_{N,M}-equivariant.

The implication is that to achieve unique equivariance for a given group action, we need to define the parameter-sharing using a structure Ω with symmetry group Aut(Ω) = G_{N,M}.
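This theorem can be probed empirically for small N and M: enumerate all pairs (π_N, π_M) ∈ S_N × S_M and keep those under which ϕ behaves equivariantly on random inputs; with distinct parameters, the surviving pairs should form exactly Aut(Ω). A brute-force sketch of this idea (ours; it assumes numpy, and the randomized test is only a necessary condition in general, though it is exact for linear layers on generic inputs):

```python
import itertools
import numpy as np

def equivariance_pairs(phi, N, M, trials=20, seed=0):
    """Return all (pi_N, pi_M) in S_N x S_M under which phi looks
    equivariant on `trials` random inputs."""
    rng = np.random.default_rng(seed)
    xs = rng.normal(size=(trials, N))
    pairs = []
    for pn in itertools.permutations(range(N)):
        for pm in itertools.permutations(range(M)):
            if all(np.allclose(phi(x)[list(pm)], phi(x[list(pn)]))
                   for x in xs):
                pairs.append((pn, pm))
    return pairs
```

Applied to ϕ_W from the reverse-convolution example (N = 3, M = 6, a ≠ b; the search space 3!·6! = 4320 pairs is tiny), this should return exactly six pairs, matching the unique equivariance asserted by the theorem.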
[Reverse Convolution] Revisiting Example <ref>, we can show that the condition of Theorem <ref> holds. In this case Aut(Ω) = G_{N,M}, and the parameter-sharing of the matrix W is visualized below, where we use two different line styles for the parameters a, b ∈ θ.

[Figure: parameter-sharing diagram for the reverse-convolution layer]

In this figure, the circular shifts of the variables at the output and input level (to the left and right, respectively) do not change the edge-colors. For example, in both cases node 1's connections to nodes 3 and 6 using dashed lines are preserved. Six repetitions of this action produce different permutations corresponding to the six members of G_{N,M}. Therefore G_{N,M} ≤ Aut(Ω) and, according to Corollary <ref>, ϕ(·) is G_{N,M}-equivariant. Moreover, using Theorem <ref> of the next section, we can prove that these six permutations are the "only" edge-color-preserving ones for this structure, resulting in unique equivariance.

Matrix Form. To write <ref> in matrix form, if there are multiple edges between two nodes n, m, we need to merge them. In general, by assigning one distinct color to each set in the range of α: N × M → 2^{1,…,C}, we can w.l.o.g. reduce multiple edges to a single edge. In other words, we can rewrite ϕ using W ∈ ℝ^{M×N}:

ϕ(x; θ, Ω) = σ(W x),   W_{m,n} = ∑_{c ∈ α(n,m)} θ_c.

Using this notation, and due to the strict monotonicity of the nonlinearity σ(·), Theorem <ref> simply states that, for all (g_N, g_M) ∈ Aut(Ω), x ∈ ℝ^N and W given by <ref>,

𝐆_M W = W 𝐆_N.

[Permutation-Equivariant Layer] Consider all permutations of the indices N, and let M = N.

[Figure: parameter-sharing structure of the permutation-equivariant layer for N = M = 4]

We want to define a neural layer such that any permutation of the input, g_N ∈ G_N = S_N, results in the same permutation of the output, g_M = g_N. Consider the colored bipartite graph above, for the special case N = M = 4. It is easy to show that the color-preserving permutations of this structure are Aut(Ω) = S_N ⊙ S_N = {(π_N, π_N) | π_N ∈ S_N} ≅ S_N: on one hand, for (π_N, π_M) ∈ S_N × S_M, having π_M = π_N clearly preserves the colors. On the other hand, if π_M ≠ π_N, there exists u ∈ N (also in M) such that π_N u ≠ π_M u. Therefore (π_N, π_M) does not preserve the relation {(n,n) | n ∈ N} corresponding to the dashed edges, and therefore (π_N, π_M) ∉ Aut(Ω). This proves Aut(Ω) = S_N ⊙ S_N. The function <ref> for this Ω is

ϕ(x; θ = [θ_1, θ_2], Ω) = σ((θ_1 𝐈 + θ_2 𝟏𝟏^⊤) x).

<cit.> derive the same permutation-equivariant layer by proving the commutativity in <ref>, while here it follows from Corollary <ref>.

Multiple Layers. For deep networks, the equivariance of the composition ϕ_2 ∘ ϕ_1 to the G-action follows from that of the individual layers ϕ_1: ℝ^N → ℝ^M and ϕ_2: ℝ^M → ℝ^O. Assuming ϕ_1 is G_{N,M}-equivariant and ϕ_2 is G_{M,O}-equivariant, where the G-action on M is shared between the two layers, it follows that ϕ_2 ∘ ϕ_1 is G_{N,O}-equivariant, where G_{N,O} = G_N ⊙ G_O. This is because, ∀ g ∈ G and x ∈ ℝ^N,

ϕ_2(ϕ_1(g_N x)) = ϕ_2(g_M ϕ_1(x)) = g_O ϕ_2(ϕ_1(x)).

§ STRUCTURE DESIGN

Consider the definition of the neural layer <ref> that employs parameter-sharing according to Ω. Given the G-action on N and M, we are interested in designing structures Ω such that Aut(Ω) = G_{N,M}. According to Theorem <ref>, it then follows that ϕ is uniquely G_{N,M}-equivariant. Here, we give sufficient conditions and the design recipe to achieve this. For this we briefly review some group properties that are used in later developments.

transitivity We say that the G-action on N is transitive iff ∀ n_1, n_2 ∈ N there exists at least one action g ∈ G (or g_N ∈ G_N) such that g n_1 = n_2.

regularity The group action is free or semi-regular iff ∀ n_1, n_2 ∈ N there is at most one g ∈ G such that g n_1 = n_2, and the action is regular iff it is both transitive and free – for any pair n_1, n_2 ∈ N there is exactly one g ∈ G such that g n_1 = n_2. Any free action is also faithful. We use a similar terminology for G_N.
That is, we call G_N semi-regular iff ∀ n_1, n_2 ∈ N at most one g_N ∈ G_N moves n_1 to n_2, and G_N is regular iff this number is exactly one.

orbit The orbit of n ∈ N is the set of all members to which it can be moved, Gn = {gn | g ∈ G}. The orbits of the n ∈ N form an equivalence relation. [n ∼ n' ⇔ ∃ g s.t. n = gn' ⇔ n ∈ Gn' ⇔ n' ∈ Gn.] This equivalence relation partitions N into orbits, N = ⋃_{1 ≤ p ≤ P} Gn_p, where n_p is an arbitrary representative of the partition Gn_p ⊆ N. Note that the G-action on N is always transitive on its orbits – that is, for any n, n' ∈ Gn_p there is at least one g ∈ G such that n = gn'. Therefore, for a semi-regular G-action, the action of G on each orbit Gn_p, 1 ≤ p ≤ P, is regular.

[Mirror Symmetry] Consider G = ℤ_2 = {e = 0, 1} (1 + 1 = 0) acting on x, where the only non-trivial action is defined as flipping the input: 1𝐍 = [N, N−1, …, 1]. G is faithful in its action on N; however, G_N is not transitive – N cannot be moved to N−1. If N is even, then the G-action is semi-regular: otherwise (N odd) the element in the middle, n = ⌈N/2⌉, is moved to itself by two different actions e, 1 ∈ G. Furthermore, if N is even, the G-action has N/2 orbits, and ℤ_2 acts on these orbits regularly. If N is odd, the G-action has ⌈N/2⌉ orbits; however, its action on the orbit of the middle element ⌈N/2⌉ is not regular.
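Orbits of a finite action are easy to compute by applying every group element to each index; the following sketch (ours; plain Python) recovers the orbit structure of the mirror-symmetry example:

```python
def orbits(action, group, n_indices):
    """Partition {0,...,n_indices-1} into orbits of the given action."""
    remaining, parts = set(range(n_indices)), []
    while remaining:
        n = min(remaining)                    # orbit representative
        orb = {action(g, n) for g in group}   # G n = {g n | g in G}
        parts.append(sorted(orb))
        remaining -= orb
    return parts

N = 5                                          # odd: middle element is fixed
flip = lambda g, n: n if g == 0 else N - 1 - n # Z_2 = {0, 1} mirror action
print(orbits(flip, group=[0, 1], n_indices=N)) # -> [[0, 4], [1, 3], [2]]
```

For odd N the singleton orbit of the middle element is exactly where the action fails to be regular.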
In the following, <ref> proposes a procedure for parameter-sharing in a fully connected layer. Although simple, this design is dense and does not guarantee "unique" equivariance. <ref> proposes an alternative design with sparse connections that in some settings ensures unique equivariance. <ref> investigates the effect of having multiple input and output channels in the neural layer, and <ref> studies the special case of N = M, where input and output indices have a one-to-one mapping.

§.§ Dense Design

Consider a complete bipartite graph with N and M as its two parts and edges (n,m) ∈ N × M. The action of G_{N,M} partitions the edges into orbits {G_{N,M}(n_p, m_q)}_{n_p, m_q}, where (n_p, m_q) is a representative edge from an orbit. Painting each orbit with a different color gives

Ω = (N, M, {R_{p,q} = G_{N,M}(n_p, m_q)}).

Therefore two edges (n,m) and (n',m') have the same color iff an action in G_{N,M} moves one edge to the other.

G_{N,M} ≤ Aut(Ω) for the Ω of <ref>.

ϕ(·; θ, Ω), for the structure Ω of <ref>, is equivariant to G_{N,M}.

[Nested Subsets and Wreath Product] The permutation-equivariant layer that we saw in Example <ref> is useful for defining neural layers for set structures. If our data structure is in the form of nested subsets, then we require equivariance to permutations of variables within each set as well as permutations of the subsets. Here, we show how to use our dense design for this purpose. We use a special indexing for the input to better identify the exchangeability of variables. We assume D subsets, each of which has d variables: x = [x_{1,1},…, x_{1,d}, x_{2,1},…, x_{D,d}]. The group of interest is the wreath product S_d ≀ S_D. This type of group product can be used to build hierarchical and nested structures with different types of symmetries at each level; nesting subsets corresponds to the most basic form of such hierarchical constructions. We use (n,n') to index input variables and (m,m') for output variables. The following figure shows the resulting parameter-sharing for an example with D = 2, d = 3.

[Figure: parameter-sharing diagram for the nested-subsets layer with D = 2, d = 3]

How did we arrive at this structure Ω? Recall that our objective is to define parameter-sharing so that ϕ_Ω: ℝ^{dD} → ℝ^{dD} is equivariant to the action of G = S_d ≀ S_D – permutations within sets at two levels. This group action identifies three partitions of the edges (seen in the figure): I) ((n,n'), (n,n')) ∀ n, n' connects each variable to its counterpart (dashed orange); II) ((n,n'), (n,m')) ∀ n, n' ≠ m' connects each variable to the other variables within the same subset; III) ((n,n'), (m,m')) ∀ n ≠ m is the set of edges from one subset to another. According to Corollary <ref>, this parameter-sharing guarantees equivariance.

This fully connected design is useful when the group G_{N,M} is large, for example when dealing with S_N. However, for smaller groups it can be very inefficient in practice, as we can sometimes achieve equivariance through a sparse structure Ω. As an example, consider the 2D circular convolution layer. It is easy to show that, according to this design, the convolution filter will be the same size as the input image. While this achieves the desirable equivariance, it is inefficient and does not generalize as well as a convolution layer with small filters. Moreover, the dense design does not guarantee "unique" equivariance. We next show that, under some conditions on G_{N,M}, the sparse design can produce this stronger guarantee.

§.§ Sparse Design

Our sparse construction uses orbits and symmetric generating sets:

* Let us denote the orbits of the G-action on N and M by {Gn_p | 1 ≤ p ≤ P} and {Gm_q | 1 ≤ q ≤ Q} respectively, where P and Q are the total numbers of orbits and n_p, m_q are (arbitrary) representative members of the orbits Gn_p, Gm_q respectively. Note that, in contrast to the previous section, here we are considering the orbits of variables rather than of edges.

* The set A ⊆ G is called a generating set of G (⟨A⟩ = G) iff every member of G can be expressed as a combination of members of A. If the generating set is closed under inverse, a ∈ A ⇒ a^{-1} ∈ A, we call it a symmetric generating set.

Define the structure Ω as

Ω = (N, M, {R_{p,q,a}}_{1 ≤ p ≤ P, 1 ≤ q ≤ Q, a ∈ A}),   R_{p,q,a} = {(g_N a n_p, g_M m_q) | (g_N, g_M) ∈ G_{N,M}}.

In words, we have one color for each combination of orbits (p, q) and members of the generating set a ∈ A. The following theorem relates the symmetry group of this structure to G_{N,M}.

G_{N,M} ≤ Aut(Ω) for the Ω of <ref>. Moreover, if G_N and G_M are both semi-regular, then G_{N,M} = Aut(Ω).

Note that this result holds for any choice of a symmetric generating set A in defining Ω. Therefore, in designing sparse layers, one seeks a minimal A.

The function ϕ(·; θ, Ω), using the structure Ω of <ref>, is G_{N,M}-equivariant. If G_N and G_M are semi-regular, this function is "uniquely" G_{N,M}-equivariant.

Now, assuming the G-action is semi-regular on both N and M, using the (arbitrarily chosen) representatives {n_p}_{1 ≤ p ≤ P} and {m_q}_{1 ≤ q ≤ Q} for the orbits in N and M, we can rewrite the expression <ref> of the structured neural layer for the structure above. Here, the components of ϕ = [ϕ_1,…,ϕ_M] are enumerated by 1 ≤ q ≤ Q, g ∈ G:

ϕ_{g m_q}(x; θ) = σ( ∑_{1 ≤ p ≤ P} ∑_{a ∈ A} θ_{q,p,a} x_{g a n_p} ),

where θ ∈ ℝ^{P × Q × |A|} is the set of unique parameters, and each element ϕ_{g m_q} depends on the subset of parameters {θ_{q,p,a}}_{p,a} identified by q and a subset of inputs {x_{g a n_p}}_{p,a} identified by g.

[Dihedral Group of <ref>] In the example of <ref>, the number of orbits of the G-action on N is P = 2, and for M it is Q = 1. The symmetric generating set is the generating set used in the Cayley diagram, with the addition of the inverse shift (the inverse of the blue arrow). We then used <ref> to build the structure of <ref> (right).

[Reverse Convolution] The parameter-sharing structure of the reverse convolution in Examples <ref> and <ref> is produced by our sparse design. In these examples, both G_N and G_M are regular, so the proposed parameter-sharing provides unique equivariance; a small computational sketch of the sparse layer in the regular cyclic case follows.
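In the regular cyclic case the sparse layer reduces to a circular correlation with one parameter per generator. A minimal sketch (ours; numpy assumed), including the equivariance check:

```python
import numpy as np

def sparse_cyclic_layer(x, theta, gens):
    """Sparse-design layer for the regular Z_N action g.n = (n + g) mod N:
    one orbit (P = Q = 1), representative n_p = 0, symmetric generators
    `gens`; phi_g(x) = sigma(sum_a theta_a * x[(g + a) % N])."""
    N = len(x)
    pre = [sum(t * x[(g + a) % N] for t, a in zip(theta, gens))
           for g in range(N)]
    return np.tanh(np.array(pre))      # tanh: a strictly monotonic sigma

x = np.arange(8.0)
theta, gens = [0.5, -0.25], [1, 7]     # A = {+1, -1 mod 8} is symmetric
y = sparse_cyclic_layer(x, theta, gens)

s = 3                                  # shifting the input shifts the output
xs = np.array([x[(n + s) % 8] for n in range(8)])
ys = np.array([y[(m + s) % 8] for m in range(8)])
assert np.allclose(sparse_cyclic_layer(xs, theta, gens), ys)
```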
§.§ Multiple Channels

In this section, we extend our results to multiple input and output channels. Up to this point, we considered a neural network layer ϕ: ℝ^N → ℝ^M. Here, we want to see how to achieve G_{N,M}-equivariance for ϕ: ℝ^{N×K} → ℝ^{M×K'}, where K and K' are the numbers of input and output channels. First, we extend the action of G on N to N^K = [N, …, N] (K times), and likewise on M to M^{K'}, to accommodate multiple channels. For this, we simply repeat the G-action on each component: the G-action on multiple input channels is equivalent to the sub-direct product G_N ⊙ … ⊙ G_N (K times) ≅ G_N. The same applies to M. This repetition multiplies the orbits of G, one per channel, so that instead of having P and Q orbits on the input N and output M, we have K × P and K' × Q orbits on the input N^K and output M^{K'}. This increases the number of parameters by a factor of K × K'. The important implication is that orbits and multiple channels are treated identically by both the dense and the sparse design.

[Group Convolution] The idea of group convolution is studied by <cit.>; see also <cit.>. The following claim relates the function of this type of layer to our sparse design.

Under the following conditions the neural layer <ref>, using our sparse design <ref>, performs group convolution: I) there is a bijection between the output index set and the group (M ≅ G); II) the G-action on M is transitive.

This also identifies the limitations of group convolution, even in the setting where N = M: when G_N is semi-regular but not transitive (P > 1), group convolution is not guaranteed to be uniquely equivariant, while the sparse parameter-sharing of <ref> provides this guarantee. For demonstration, consider the following example on equivariance to mirror symmetry.

[Figure: two sparse parameter-sharing designs for the ℤ_2 mirror action]

This figure shows the bipartite structure for G = ℤ_2 = {0,1} and A = {1}. The G-action is a horizontal flip of the input and the output. On the right the action is transitive, while on the left the G-action has two orbits. Orbits are identified by the line style and the color of the circles. In a neural layer with this parameter-sharing, when we flip the input variables (around the mirror line) the output is also flipped. The representative of each orbit on N and M is identified with a star. Note that each combination of orbits p and q has a parameter of its own, identified with different edge styles. While this construction guarantees "unique" G-equivariance, if we instead use the same parameters across orbits (as suggested by the original group convolution) we get the parameter-sharing of the middle figure below.

[Figure: parameter-sharing with parameters tied across orbits, and the resulting extra symmetry]

In this case, the resulting neural layer has the desired equivariance (right). However, it is equivariant to the action of a larger group H_{N,M} ≅ ℤ_2 × ℤ_2 > ℤ_2, in which the 1 of the second ℤ_2 factor exchanges variables across the orbits on N (left in the figure above).

§.§ N = M

In semi-supervised and unsupervised applications, we often need to produce a single output y_n for each input x_n, ∀ n ∈ N – that is, M = N. We can ensure this by having a relation R_{c*} = {(n,n) | n ∈ N} in Ω that guarantees that any (π_N, π_M) ∈ Aut(Ω) applies the same permutation to N and M = N – i.e., π_M = π_N. The resulting structure Ω = (N, N, {R_c}_{1 ≤ c ≤ C} ∪ {R_{c*}}) can also be interpreted as a colored multi-edged directed graph (digraph).
This is because we can collapse the two parts by identifying n ∈ N with n ∈ M. Therefore, the symmetry group of the original bipartite structure is isomorphic to the symmetry group of a colored multi-edged digraph on N. Achieving unique G-equivariance then reduces to answering the following question: when can we express a permutation group G_N ≤ S_N as the symmetry group Aut(Ω) of a colored multi-edged digraph with N nodes? This problem is well studied under the class of concrete representation problems <cit.>. Permutation groups G_N that can be expressed in this way are called 2-closed groups <cit.>. The recipe for achieving G_N ≤ Aut(Ω) is similar to our dense construction of <ref>. [In a fully connected digraph, the edges that belong to the same orbit of the G-action on N × N receive the same color.] The 2-closure of a group G_N is then the greatest permutation group ≤ S_N with the same orbits on N × N as G_N. It is known, for example, that semi-regular permutation groups are 2-closed, i.e., equal to their own 2-closure. This result also follows as a corollary of our Theorem <ref> for the sparse design of <ref>.

[Equivariance to ×90° Rotations] The figure below compares the digraph representations of Ω produced using (left) our sparse design and (right) our dense design.

[Figure: digraph representations of Ω for the ℤ_4 rotation action, sparse (left) vs. dense (right)]

Multiples of ±90° rotation are produced as the action of the cyclic group ℤ_4 on eight input/output variables – that is, N = M = {1,…,8}. The ℤ_4-action is semi-regular with two orbits; these orbits are the inner and outer sets of four nodes. The representative of each orbit in our sparse design is indicated by a filled circle. The generating set is A = {1,3}: rotation by 90° and its inverse, rotation by 270°. Each edge in each of these figures has a corresponding edge in the opposite direction, within a different relation. To avoid overcrowding the figure, we have dropped this edge from the drawing, unless both edges belong to the same relation.

[Graph Convolution] Consider the setting where we use the (normalized) adjacency matrix 𝐁 ∈ {0,1}^{N×N} (or Laplacian) of a graph Λ to identify the parameter-sharing in a neural network layer. For a single input/output channel, this is often in the form 𝐀x, where x ∈ ℝ^N and 𝐀 = θ_1 𝐁 + θ_2 𝐈 has different parameters for diagonal and off-diagonal values <cit.>; for multiple channels see <ref>. The following corollary of Theorem <ref> identifies the equivariance of x ↦ 𝐀x.

Given the digraph Λ and its binary adjacency matrix 𝐁 ∈ {0,1}^{N×N}, the function x ↦ (θ_1 𝐁 + θ_2 𝐈)x is uniquely equivariant to the symmetry group of Λ.

Since two different graphs on N nodes can have identical symmetries, one implication of this corollary is that graph convolution has identical equivariances for graphs with the same symmetry group.
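A small numerical illustration of this corollary (our sketch, assuming numpy): for a 4-cycle, the layer x ↦ σ((θ_1𝐁 + θ_2𝐈)x) commutes with the rotation automorphism of the graph.

```python
import numpy as np

# Adjacency matrix of the 4-cycle graph 0-1-2-3-0.
B = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

theta1, theta2 = 0.8, -0.3
A = theta1 * B + theta2 * np.eye(4)

def layer(x):
    return np.tanh(A @ x)

# The rotation n -> n+1 (mod 4) is a graph automorphism: P B P^T = B.
P = np.roll(np.eye(4), 1, axis=1)
assert np.allclose(P @ B @ P.T, B)

x = np.random.default_rng(1).normal(size=4)
assert np.allclose(P @ layer(x), layer(P @ x))   # equivariance
```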
§ CONCLUSION

This work is a step towards designing neural network layers with given equivariance and invariance properties. Our approach was to relate the equivariance properties of the neural layer to the symmetries of the parameter matrix. We then proposed two parameter-sharing schemes that achieve equivariance w.r.t. any discrete group action. Moreover, under some conditions, we guarantee sensitivity w.r.t. other group actions. This is important because even a trivial constant function is invariant to all transformations; it is therefore essential to be able to draw the line between equivariance/invariance and sensitivity in a function. To our knowledge, our work presents the first results of its kind on guarantees regarding both sensitivity and equivariance with respect to group actions.

§ ACKNOWLEDGMENT

This research is supported in part by DOE grant DESC0011114 and NSF grant IIS1563887.

§ PROOFS

of Observation <ref>: g_M ϕ(x) = ϕ(g_N x) ∀ (g_N, g_M) ∈ G_{N,M} ⇒ h_M ϕ(x) = ϕ(h_N x) ∀ (h_N, h_M) ∈ H_{N,M} ⊂ G_{N,M}.

of Theorem <ref>: For unique Aut(Ω)-equivariance we need proofs in two directions. First we show that

(π_N, π_M) ∈ Aut(Ω) ⇒ ϕ(x; θ, Ω) = π_M^{-1} ϕ(π_N x; θ, Ω),

which in turn shows that π_M ϕ(x; θ, Ω) = ϕ(π_N x; θ, Ω). Starting from π_M^{-1} ϕ(π_N x; θ, Ω) on the r.h.s. of <ref> and considering an index m of ϕ = [ϕ_1,…,ϕ_M], we have

ϕ_{π_M^{-1} m}(π_N x; θ, Ω) = σ( ∑_{n ∈ N} ∑_{c ∈ α(n, π_M^{-1} m)} θ_c x_{π_N n} ) = σ( ∑_{n ∈ N} ∑_{c ∈ α(π_N^{-1} n, π_M^{-1} m)} θ_c x_n ) = σ( ∑_{n ∈ N} ∑_{c ∈ α(n, m)} θ_c x_n ) = ϕ_m(x; θ, Ω),

where, in arriving at the last equality, we used the fact that (π_N, π_M) ∈ Aut(Ω) ⇒ α(n,m) = α(π_N^{-1} n, π_M^{-1} m).

In the opposite direction, we need to show that π_M ϕ(x; θ, Ω) = ϕ(π_N x; θ, Ω) ∀ x ∈ ℝ^N, θ ∈ ℝ^C only if (π_N, π_M) ∈ Aut(Ω):

ϕ(x; θ, Ω) = π_M^{-1} ϕ(π_N x; θ, Ω) ∀ x ∈ ℝ^N, θ ∈ ℝ^C
⇒ ϕ_m(x; θ, Ω) = ϕ_{π_M^{-1} m}(π_N x; θ, Ω) ∀ m, x ∈ ℝ^N, θ ∈ ℝ^C
⇒ ∑_{n ∈ N} ∑_{c ∈ α(n,m)} θ_c x_n = ∑_{n ∈ N} ∑_{c ∈ α(n, π_M^{-1} m)} θ_c x_{π_N n} ∀ m, x, θ
⇒ ∑_{n ∈ N} ∑_{c ∈ α(n,m)} θ_c x_n = ∑_{n ∈ N} ∑_{c ∈ α(π_N^{-1} n, π_M^{-1} m)} θ_c x_n ∀ m, x, θ,

where the strict monotonicity of σ: ℝ → ℝ was used. We need to show that this final equality ∀ m, x, θ implies α(π_N^{-1} n, π_M^{-1} m) = α(n, m), which in turn, according to <ref>, means (π_N, π_M) ∈ Aut(Ω). We prove this by contradiction: assume α(π_N^{-1} n*, π_M^{-1} m*) ≠ α(n*, m*) for some n*, m*. We can w.l.o.g. assume ∃ c* ∈ α(n*, m*) s.t. c* ∉ α(π_N^{-1} n*, π_M^{-1} m*) (the reverse direction is similar). We exhibit an assignment of x ∈ ℝ^N and θ ∈ ℝ^C that contradicts the equality above. For this, define x such that x_n = δ(n, n*), nonzero only at index n*, and assign θ_c = δ(c, c*). Then, at m = m*, the r.h.s. of the equality is 0 while the l.h.s. is θ_{c*} x_{n*} ≠ 0. Therefore α(π_N^{-1} n, π_M^{-1} m) = α(n, m) ∀ n, m, which by the definition of Aut(Ω) means (π_N, π_M) ∈ Aut(Ω).

of Proposition <ref>: To prove G_{N,M} ≤ Aut(Ω) we simply show that every (g_N, g_M) ∈ G_{N,M} preserves the relations of Ω. From <ref>,

(g_N, g_M) ∈ Aut(Ω) ⇐ ( (n,m) ∈ R_{p,q} ⇔ (g_N n, g_M m) ∈ R_{p,q} ) ∀ (p,q), n, m.

The r.h.s. holds for all (g_N, g_M) ∈ G_{N,M} because, in constructing the relations R_{p,q} of the dense design, we used edge-orbits:

(n,m) ∈ R_{p,q} ⇔ (g_N n, g_M m) ∈ R_{p,q} ∀ (p,q), n, m.

Therefore (g_N, g_M) ∈ G_{N,M} ⇒ (g_N, g_M) ∈ Aut(Ω).

of Theorem <ref>: We first show that any permutation (g_N, g_M) ∈ G_{N,M} is also in Aut(Ω). The major part of the proof is then to show that, when G_N and G_M are semi-regular, |Aut(Ω)| ≤ |G_{N,M}|. The combination of the two proves Aut(Ω) = G_{N,M}.

I) To prove that (g_N, g_M) ∈ G_{N,M} ⇒ (g_N, g_M) ∈ Aut(Ω), we simply apply (g_N, g_M) to an arbitrary edge (n,m) in a relation of Ω. According to <ref>,

R_{p,q,a} = {(g_N a n_p, g_M m_q) | (g_N, g_M) ∈ G_{N,M}}.

Application of (g_N, g_M) to (g'_N a n_p, g'_M m_q) gives ((g g')_N a n_p, (g g')_M m_q) ∈ R_{p,q,a}. From <ref>, it follows that G_{N,M} ≤ Aut(Ω).

II) For this part, we use the orbit-stabilizer theorem. The orbit of a pair (n,m) ∈ R_{p,q,a} w.r.t. G_{N,M} is defined as G_{N,M}(n,m) = {(g_N n, g_M m) | (g_N, g_M) ∈ G_{N,M}}. The stabilizer G_{N,M}^{(n,m)} of (n,m) ∈ R_{p,q,a} is G_{N,M}^{(n,m)} = {(g_N, g_M) ∈ G_{N,M} | (g_N n, g_M m) = (n,m)}, the group of all actions that fix (n,m). The orbit-stabilizer theorem states that |G_{N,M}| = |G_{N,M}^{(n,m)}| × |G_{N,M}(n,m)|. In our argument, we apply this theorem to bound |Aut(Ω)| using |Aut(Ω)^{(n,m)}| and |Aut(Ω)(n,m)|. The orbit size |Aut(Ω)(n,m)| of a pair (n,m) is bounded by the size of its relation |R_{p,q,a}|, for some p, q, a. This is because, according to <ref>, π ∈ Aut(Ω) ⇒ ((n,m) ∈ R_{p,q,a} ⇒ π(n,m) ∈ R_{p,q,a}). From <ref>, |R_{p,q,a}| ≤ |G_{N,M}|, and therefore |Aut(Ω)(n,m)| ≤ |G_{N,M}|. Now it only remains to show that, if G_N and G_M act regularly on the orbits (i.e., are semi-regular), the stabilizer is trivial: Aut(Ω)^{(n,m)} = {e}.
In that case the size of Aut(Ω) is bounded by the size of the orbit, |Aut(Ω)| = |Aut(Ω)(n,m)| ≤ |G_{N,M}|, which combined with the result of part (I) gives G_{N,M} = Aut(Ω).

Since, by our assumption, G acts regularly on each orbit Gn_p, ∀ p, the definition R_{p,q,a} = {(g_N a n_p, g_M m_q) | (g_N, g_M) ∈ G_{N,M}} implies (see the definition of regularity) that for each n ∈ Gn_p, a ∈ A and m_q we can identify a single g' ∈ G such that (n, m) = (g'_N a n_p, g'_M m_q) ∈ R_{p,q,a}. This means that the edges (pairs) adjacent to each node n ∈ Gn_p all have distinct colors. The same argument, using the regularity of the G-action on each Gm_q, shows that the edges adjacent to each m ∈ Gm_q all have distinct colors. Therefore, if we fix a pair (n,m), all of its neighboring edges (adjacent to n or m) are unambiguously fixed. The same goes for the neighbors of the newly fixed nodes, and so on. If we can show that the bipartite graph representing Ω is connected, then fixing one pair guarantees that all pairs in all relations of Ω are fixed, and therefore (n,m) has a trivial stabilizer. Two properties guarantee the connectedness of Ω:

* Since A = A^{-1} is a generating set of G, the bipartite subgraph consisting of the node subsets Gn_p and Gm_q is connected. To show this, it is enough to show that we can reach any node n_z, starting from an arbitrary representative n_p and zigzagging through the bipartite structure. Since n_z, n_p ∈ Gn_p, there exists g_z ∈ G s.t. n_z = g_z n_p. Since ⟨A⟩ = G, we can write g_z = a_1 … a_L. The path that starts from n_p and takes the connections corresponding to R_{p,q,a_L}, R_{p,q,a^{-1}_{L-1}}, R_{p,q,a_{L-2}}, …, R_{p,q,a^{-1}_1} takes us along a zigzag path from n_p to n_z.

* Since we have a relation R_{p,q,a} for every pair p, q, all the induced bipartite subgraphs on Gn_p–Gm_q are connected.

This proves that the whole bipartite graph is connected and unambiguously fixed once we fix any pair (n,m). Therefore (n,m) has a trivial stabilizer, proving that Aut(Ω) = G_{N,M}.

of Corollary <ref>: Follows directly from Theorems <ref> and <ref>.

of Claim <ref>: To see this, note that G acts on M = G regularly, with the natural (group) action g·m. Set the representative of the resulting single orbit to m_q = e. Then <ref> becomes ϕ = [ϕ_g]_{g ∈ G} with components

ϕ_g(x; θ) = σ( ∑_{1 ≤ p ≤ P} ∑_{a ∈ A} θ_{a,p} x_{g a n_p} ).

If we further tie the parameters across the orbits so that θ_{a,p} = θ_{a,p'} ∀ p, p', the expression above is equivalent to the formulation of <cit.> for single input/output channels (see <ref> for multiple channels).

of Corollary <ref>: First we show this assuming a single channel, K = 1; for multiple channels see <ref>. Consider the bipartite structure constructed from Λ: Ω = (N, N, {{(n,n) | n ∈ N}, {(n,n') | (n,n') an edge of Λ}}). Applying the result of Theorem <ref>, the function x ↦ 𝐀x is uniquely Aut(Ω)-equivariant. Because of the relation R_{c*} = {(n,n) | n ∈ N} in Ω, the same bipartite structure can be interpreted as a digraph, here with a single color, since Ω has only one relation in addition to R_{c*}. Since this relation defines Λ, Aut(Ω) = Aut(Λ), which means x ↦ 𝐀x is uniquely Aut(Λ)-equivariant.

§ BACKGROUND ON PERMUTATION GROUPS

Let x = [x_1,…,x_N] ∈ ℝ^N be a vector of N variables taking values in the same domain. A group G is a set equipped with a binary operation, with the following properties: I) G is closed under its binary operation; II) the group operation is associative – (g_1 g_2) g_3 = g_1 (g_2 g_3) ∀ g_1, g_2, g_3 ∈ G; III) there exists an identity e ∈ G such that g e = e g = g; IV) every element g ∈ G has an inverse g^{-1} ∈ G, such that g g^{-1} = g^{-1} g = e.
A subset H ⊆ G is a subgroup of G (H ≤ G) iff H, equipped with the binary operation of G, forms a group. Moreover, if H is a proper subset of G, then H is a proper subgroup of G, H < G. Two groups are isomorphic, G ≅ F, if there exists a bijection β: G → F such that g_1 g_2 = g_3 ⇔ β(g_1) β(g_2) = β(g_3) ∀ g_1, g_2, g_3 ∈ G. If this last relation holds for a surjective mapping (not necessarily one-to-one), then β is a homomorphic mapping and F is a homomorphic image of G.

Cayley Diagram. The set A ⊆ G is called a generating set of G (⟨A⟩ = G) iff every member of G can be expressed as a combination of members of A. If the generating set is closed under inverse, a ∈ A ⇒ a^{-1} ∈ A, we call it a symmetric generating set. A is a minimal generating set if it has the fewest members among the generating sets of G. Note that minimal generating sets are generally not unique. The size of the minimal generating set of a group G is important here because the number of parameters in our parameter-sharing scheme grows linearly with |A|. A group G is often visualized by its Cayley diagram: a colored digraph in which the node set is G and each directed edge (g, a g), ∀ g ∈ G, a ∈ A, is colored by a ∈ A. <ref> (lower left) shows the Cayley diagram of D_5.

§.§ Discrete Group Action

We are interested in the way a group "acts" on the input and output of a deep network. A function γ: G × N → N is a left action of the group G on N iff I) γ(e, n) = n and II) γ(g_1, γ(g_2, n)) = γ(g_1 g_2, n). [All the following definitions and results may be extended to "right" group actions by substituting g ↔ g^{-1} ∀ g ∈ G.] For our purposes, we limit this action to actions on the indices N of x = [x_n]. We often use g n as a shorthand for γ(g, n), and also use g S to denote {g n | n ∈ S}. The action of g on the vector/sequence 𝐍 is defined similarly, g𝐍 ≐ [g1,…,gN]. Considering this, the G-action on x = [x_1,…,x_N] is g x ≐ [x_{g1},…,x_{gN}].

From the properties of a group and its action it follows that γ(g, ·): N → N is a bijection with γ^{-1}(g, n) = γ(g^{-1}, n) ∀ n ∈ N, g ∈ G. Since N is a finite set, this bijection for each g ∈ G is a permutation of N – that is, [γ(g,1),…,γ(g,N)] is a permutation of 𝐍. Let G_N = {γ(g, ·) | g ∈ G} (with function composition as the binary group operation) denote the group of permutations of N induced by the g ∈ G. This group is a subgroup of the symmetric group S_N, the group of all N! permutations of N. G_N captures the structure of G when it acts on the set N, and it is indeed a homomorphic image of G. We use g_N to denote γ(g, ·), the image of g ∈ G in G_N.

§.§.§ Properties of Group Action

The G-action is faithful iff the two groups are isomorphic, G ≅ G_N. In this case all actions of g ∈ G are distinct permutations – that is, g_N ≠ g'_N ∀ g ≠ g' ∈ G. Given any G-action on N, we can obtain the group that is isomorphic to G_N as follows. The importance of faithfulness of a G-action is that it preserves the structure of G, and if an action is not faithful, we might as well focus on the G_N-action. Given any unfaithful G-action γ: G × N → N, let K_γ be the normal subgroup of G that corresponds to the identity permutation – K_γ = {g ∈ G | γ(g, n) = n ∀ n ∈ N}. One obtains the group that acts faithfully on N as the quotient group G / K_γ.

We now define some group properties that are important in guaranteeing "strict" equivariance with respect to the G-action. The G-action on N is transitive iff ∀ n_1, n_2 ∈ N there exists at least one action g ∈ G such that g n_1 = n_2. The group action is free or semi-regular iff ∀ n_1, n_2 ∈ N there is at most one g ∈ G such that g n_1 = n_2, and the action is regular iff it is both transitive and free – for any pair n_1, n_2 ∈ N, there is exactly one g ∈ G such that g n_1 = n_2.
Any free action is also faithful.

§.§.§ Orbits

Given a G-action on N, the orbit of n ∈ N is the set of all members to which it can be moved, Gn = {g n | g ∈ G}. The orbits of the n ∈ N form an equivalence relation, where n ∼ n' ⇔ ∃ g s.t. n = g n' ⇔ n ∈ Gn' ⇔ n' ∈ Gn. This equivalence relation partitions N into orbits, N = ⋃_{1 ≤ p ≤ P} Gn_p, where n_p is an arbitrary representative of the partition Gn_p ⊆ N. Note that the G-action on N is always transitive on its orbits – that is, for any n, n' ∈ Gn_p there is at least one g ∈ G such that n = g n'. Therefore, for a semi-regular G-action, the action of G on each orbit Gn_p, 1 ≤ p ≤ P, is regular. As we saw, the number of distinct parameters in our parameter-sharing scheme grows with the number of orbits.

Cycle Notation. To explicitly show the action of g ∈ G on the set N, we sometimes use the cycle notation of a permutation. Any permutation π ∈ S_N is decomposable into a product of disjoint cycles. A cycle of length d, (b_1,…,b_d), sends b_i → b_{(i mod d)+1}. Here b_i ∈ N, and a cycle acts on a subset of N. For example, the action of (1,3,2) on [1,…,6] is [3,1,2,4,5,6]. We can write the permutation g with g[1,…,6] = [3,1,2,5,4,6] as the product of disjoint cycles {(1,3,2),(6),(4,5)} = {(1,3,2),(4,5)}.
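Computing this decomposition is straightforward; a minimal sketch (ours, 0-indexed):

```python
def cycles(perm):
    """Decompose a permutation of {0,...,N-1}, given as a list where
    perm[n] is the image of n, into its nontrivial disjoint cycles."""
    seen, out = set(), []
    for start in range(len(perm)):
        if start in seen:
            continue
        cyc, n = [], start
        while n not in seen:
            seen.add(n)
            cyc.append(n)
            n = perm[n]
        if len(cyc) > 1:            # drop fixed points such as (6)
            out.append(tuple(cyc))
    return out

# g[1,...,6] = [3,1,2,5,4,6] from the text corresponds, 0-indexed, to:
print(cycles([2, 0, 1, 4, 3, 5]))   # -> [(0, 2, 1), (3, 4)]
```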
http://arxiv.org/abs/1702.08389v2
{ "authors": [ "Siamak Ravanbakhsh", "Jeff Schneider", "Barnabas Poczos" ], "categories": [ "stat.ML", "cs.NE" ], "primary_category": "stat.ML", "published": "20170227172229", "title": "Equivariance Through Parameter-Sharing" }
Upper bounds on the smallest size of a saturating set in projective planes and spaces of even dimension

[The research of D. Bartoli, M. Giulietti, S. Marcugini, and F. Pambianco was supported in part by the Ministry for Education, University and Research of Italy (MIUR) (Project "Geometrie di Galois e strutture di incidenza") and by the Italian National Group for Algebraic and Geometric Structures and their Applications (GNSAGA - INDAM). The research of A. A. Davydov was carried out at the IITP RAS at the expense of the Russian Foundation for Sciences (project 14-50-00150).]

Daniele Bartoli
Dipartimento di Matematica e Informatica, Università degli Studi di Perugia, Perugia, 06123, Italy
E-mail address: daniele.bartoli@unipg.it

Alexander A. Davydov
Institute for Information Transmission Problems (Kharkevich institute), Russian Academy of Sciences, GSP-4, Moscow, 127994, Russian Federation
E-mail address: adav@iitp.ru

Massimo Giulietti, Stefano Marcugini, Fernanda Pambianco
Dipartimento di Matematica e Informatica, Università degli Studi di Perugia, Perugia, 06123, Italy
E-mail address: massimo.giulietti, stefano.marcugini, fernanda.pambianco@unipg.it

In a projective plane Π_q (not necessarily Desarguesian) of order q, a point subset S is saturating (or dense) if any point of Π_q ∖ S is collinear with two points in S. Modifying an approach of <cit.>, we prove the following upper bound on the smallest size s(2,q) of a saturating set in Π_q:

s(2,q) ≤ √((q+1)(3 ln q + ln ln q + ln(3/4))) + √(q/(3 ln q)) + 3.

The bound holds for all q, not necessarily large. By using inductive constructions, upper bounds on the smallest size of a saturating set in the projective space PG(N,q) with even dimension N are obtained. All the results are also stated in terms of linear covering codes.

§ INTRODUCTION

We denote by Π_q a projective plane (not necessarily Desarguesian) of order q and by PG(2,q) the projective plane over the Galois field with q elements. A point set S ⊂ Π_q is saturating if any point of Π_q ∖ S is collinear with two points in S. Saturating sets are considered, for example, in <cit.>; see also the references therein. It should be noted that saturating sets are also called "saturated sets" <cit.>, "spanning sets" <cit.>, "dense sets" <cit.>, and "1-saturating sets" <cit.>.

A particular kind of saturating set in a projective plane is a complete arc. An arc is a set of points no three of which are collinear. An arc is said to be complete if it cannot be extended to a larger arc; see <cit.> and the references therein.

The homogeneous coordinates of the points of a saturating set of size k in PG(2,q) form a parity check matrix of a q-ary linear code with length k, codimension 3, and covering radius 2. For an introduction to covering codes see <cit.>. An online bibliography on covering codes is given in <cit.>. The main problem in this context is to find small saturating sets (i.e.
short covering codes). Denote by s(2,q) the smallest size of a saturating set in Π_q. Let s_D(2,q) be the smallest size of a saturating set in the Desarguesian plane PG(2,q). Let t_2(2,q) be the smallest size of a complete arc in PG(2,q). Clearly, s_D(2,q) ≤ t_2(2,q). The trivial lower bound is

s(2,q), s_D(2,q), t_2(2,q) > √(2q) + 1.

Saturating sets in PG(2,q) obtained by algebraic constructions or computer search can be found in <cit.>. For PG(2,q) with q non-prime, the literature offers a few algebraic constructions of relatively small saturating sets providing, for instance, the following upper bounds:

s_D(2,q) < 3√q − 1 if q = (q')^2;
s_D(2,q) < 2√q + 2q^{1/4} + 2 if q = (q')^4;
s_D(2,q) < 2√q + 2q^{1/3} + 2q^{1/6} + 2 if q = (q')^6, q' prime, q' ≤ 73;
s_D(2,q) < 2q^{(m−1)/m} + q^{1/m} if q = (q')^m, m ≥ 2 <cit.>.

Saturating sets of size approximately Cq^{3/4}, with C a constant independent of q, have been explicitly described in several papers; see <cit.>. In <cit.>, algebraic constructions of saturating sets in PG(2,q) of size about 3q^{2/3} are proposed and the following bounds are obtained (here p is prime):

s_D(2,q) < 2q/p^t + (p^t − 1)^2/(p − 1) + 1 if q = p^m, m ≥ 2t;
s_D(2,q) < (2/p)(qp)^{2/3} + ((qp)^{2/3} − 2(qp)^{1/3} + 1)/(p − 1) + 1 if q = p^{3t−1};
s_D(2,q) < min_{v=1,…,2t+1} Φ(t,p,v) if q = p^{2t+1}, where Φ(t,p,v) = (v+1)p^{t+1} + (p^t − 1)^{2v}/((p−1)^v (p^{2t+1} − 1)^{v−1}) + 2.

For many triples (t,p,v), the constructions of (<ref>) provide relatively small saturating sets; see <cit.>.

In <cit.>, by computer search in a wide region of q, the following upper bounds on the smallest sizes of complete arcs in PG(2,q) are obtained:

s_D(2,q) ≤ t_2(2,q) < 0.998√(3q ln q) for 7 ≤ q ≤ 160001;
s_D(2,q) ≤ t_2(2,q) < 1.05√(3q ln q) for 160001 < q ≤ 301813.

For q ≤ 160001 greedy algorithms are used, while for 160001 < q ≤ 301813 the algorithm with fixed order of points (FOP) is applied.

In <cit.>, for PG(2,q), an iterative step-by-step construction of complete arcs, which adds a new point in each step, is considered. An example is the step-by-step greedy algorithm that in every step adds to the arc a point providing the maximal possible (for the given step) number of new covered points. For more than half of the steps of the iterative process, an estimate for the number of new covered points in every step is proved. A natural (and well-founded) conjecture is made that the estimate holds for the other steps too. Under this conjecture, the following upper bound on the smallest size of a complete arc in PG(2,q) is obtained (conjectural bound):

s_D(2,q) ≤ t_2(2,q) < √q √(3 ln q + ln ln q + ln 3) + √(q/(3 ln q)) + 3.

Note also that in <cit.> a truncated iterative step-by-step process is considered. The process stops when the number of uncovered points attains some a priori arbitrarily assigned value. Then this value is added to the number of steps executed before the stopping of the iterative process. The estimate (<ref>) is obtained when the value assigned to stop the process is √(q/(3 ln q)); this implies that the number of steps executed before the stopping of the step-by-step process is √q √(3 ln q + ln ln q + ln 3).

Surveys and results of probabilistic constructions for geometrical objects can be found in <cit.>; see also the references therein. In <cit.>, by using a modified probabilistic approach introduced in <cit.>, the following upper bound for an arbitrary (not necessarily Desarguesian) plane is proved:

s(2,q) < 3√2 √(q ln q) < 5√(q ln q).

In <cit.>, see also <cit.>, by probabilistic methods different from those in <cit.>, the upper bound

s(2,q) ≤ 2√((q+1) ln(q+1)) + 2 ≈ 2√(q ln q)

is obtained. In <cit.>, Z.
Nagy obtained the following bound:

s(2,q) ≤ (√3 + o(1))√(q ln q).

The proof of (<ref>) is given in <cit.> by two approaches, probabilistic and algorithmic. In both approaches, starting from some stage of the proof, it is assumed (by the context) that q is large enough.

The algorithmic approach of <cit.> considers an original step-by-step greedy algorithm and obtains estimates for the number of new covered points in every step of the algorithm. In order to obtain the bound, the iterative process stops after ⌈√(3q ln q)⌉ steps have been executed. It is proved in <cit.> that in this case the number of uncovered points is not greater than √q. Then half of the number of uncovered points is added to the number of executed steps. As a result of the algorithmic proof of <cit.>, the following form of the bound can be derived:

s(2,q) ≤ ⌈√(3q ln q)⌉ + ⌈(1/2)√q⌉ ≤ √(3q ln q) + (1/2)√q + 2, q large enough.

In some sense the algorithmic approach of <cit.> is close to the treatment of the bounds in <cit.>. However, in <cit.> the number of steps executed before the stopping of the iterative process depends on the a priori assigned number of uncovered points, while in <cit.> the iterative process always stops after ⌈√(3q ln q)⌉ steps have been executed. Of course, it must be noted that in <cit.> the bound is conjectural (as the estimates are not proved for all steps of the iterative greedy process), whereas in <cit.> the bound is proved. Note also that the problems considered in <cit.> and <cit.> are close but not the same (small complete arcs in <cit.> and small saturating sets in <cit.>).

In this paper, we modify the algorithmic approach of <cit.> so that the final formula holds for arbitrary q (not necessarily large) and, moreover, the value of the new bound is smaller than in (<ref>), see (<ref>)–(<ref>). Our main result is Theorem <ref>.

For the smallest size s(2,q) of a saturating set in a projective plane (not necessarily Desarguesian) of order q (not necessarily large), the following upper bound holds:

s(2,q) ≤ √((q+1)(3 ln q + ln ln q + ln(3/4))) + √(q/(3 ln q)) + 3.

Note that, modifying the algorithmic approach of <cit.>, we (similarly to <cit.>) stop the iterative process when the number of uncovered points attains an a priori assigned value, say ξ. If ξ = 1 we obtain a bound coinciding with (<ref>); if ξ = √q we obtain a bound coinciding with (<ref>), see Remark <ref>. Finally, if ξ = √(4q/(3 ln q)) we get the bound (<ref>). It is interesting that the main term √(3q ln q) is the same in the bounds (<ref>), (<ref>) for complete arcs and (<ref>)–(<ref>), (<ref>) for saturating sets.

Theorem <ref> can be expressed in terms of covering codes. The length function ℓ(R,r,q) denotes the smallest length of a q-ary linear code with covering radius R and codimension r; see <cit.>. Theorem <ref> can be read as follows.

The following upper bound on the length function holds:

ℓ(2,3,q) ≤ √((q+1)(3 ln q + ln ln q + ln(3/4))) + √(q/(3 ln q)) + 3.

Let PG(N,q) be the N-dimensional projective space over the Galois field of q elements. A point set S ⊂ PG(N,q) is saturating if any point of PG(N,q) ∖ S is collinear with two points in S. A particular kind of saturating set in a projective space is a complete cap. A cap is a set of points no three of which are collinear; a cap is said to be complete if it cannot be extended to a larger cap. Let [n, n−r]_q R denote a linear q-ary code of length n, codimension r, and covering radius R. The homogeneous coordinates of the points of a saturating set of size n in PG(r−1,q) form a parity check matrix of an [n, n−r]_q 2 code.
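The bounds above are closed-form expressions that are easy to compare numerically. The sketch below (ours; plain Python) evaluates the one-parameter bound of the truncated process for any ξ (proved in Section 2), its specialization ξ = √(4q/(3 ln q)) giving the main theorem, and the bound derived from Nagy's algorithmic proof:

```python
import math

def bound(q, xi):
    """Truncated-process bound: sqrt(2(q+1) ln(q^2/xi)) + xi/2 + 3."""
    return math.sqrt(2 * (q + 1) * math.log(q * q / xi)) + xi / 2 + 3

def upsilon(q):
    """The main theorem: bound() at xi = sqrt(4q / (3 ln q))."""
    return bound(q, math.sqrt(4 * q / (3 * math.log(q))))

def nagy(q):
    """The bound sqrt(3 q ln q) + sqrt(q)/2 + 2 from Nagy's proof."""
    return math.sqrt(3 * q * math.log(q)) + math.sqrt(q) / 2 + 2

for q in (919, 2003, 10**4, 10**6):
    print(q, round(upsilon(q), 1), round(nagy(q), 1), round(bound(q, 1), 1))
```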
Results on saturating sets in PG(N,q) and the corresponding covering codes can be found in <cit.> and the references therein. Let s(N,q) be the smallest size of a saturating set in PG(N,q), N ≥ 3. In terms of covering codes, we recall the equality

s(N,q) = ℓ(2, N+1, q).

The trivial lower bound for s(N,q) is

s(N,q) > √2 q^{(N−1)/2}.

Constructions of saturating sets (or the corresponding covering codes) whose size is close to this lower bound are only known for N odd; see <cit.> for a survey. In particular, in <cit.>, see also <cit.>, the following bound is obtained by algebraic constructions:

s(N,q) = ℓ(2, N+1, q) ≤ 2q^{(N−1)/2} + q^{(N−3)/2}, N = 2t−1 ≥ 3, N ≠ 7, 11, q ≥ 7, q ≠ 9,

where t = 2, 3, 5, and t ≥ 7. From (<ref>), by using the inductive constructions from <cit.>, we obtain upper bounds on the smallest size of a saturating set in the N-dimensional projective space PG(N,q) with N even; see Section <ref>. In many cases these bounds are better than the known ones.

The paper is organized as follows. In Section <ref>, we deal with upper bounds on the smallest size of a saturating set in a projective plane. In Section <ref>, bounds for saturating sets in the projective space PG(N,q) are obtained.

§ A MODIFICATION OF NAGY'S APPROACH FOR AN UPPER BOUND ON THE SMALLEST SIZE OF A SATURATING SET IN A PROJECTIVE PLANE

Assume that in Π_q a saturating set is constructed by a step-by-step algorithm adding one new point to the set in every step. Let i > 0 be an integer. Denote by K_i the running set obtained after the i-th step of the algorithm. A point P of Π_q ∖ K_i is covered by K_i if P lies on a t-secant of K_i with t ≥ 2. Let U_i be the subset of Π_q ∖ K_i consisting of the points not covered by K_i.

In <cit.> the following ingenious greedy algorithm is proposed. One takes a line ℓ skew to K_i such that the cardinality of the intersection |U_i ∩ ℓ| is minimal among all skew lines. Then one adds to K_i the point of ℓ providing the greatest number of new covered points (in comparison with the other points of ℓ). As a result we obtain the set K_{i+1} and the corresponding set U_{i+1}. The following proposition is proved in <cit.>.

<cit.> It holds that

|U_{i+1}| ≤ |U_i| · (1 − i(q−1)/(q(q+1))).

Clearly, |U_2| = q^2 always. Iteratively applying the relation (<ref>) to |U_2| = q^2, we obtain, for some k, the following:

|U_{k+1}| ≤ q^2 ∏_{i=2}^{k} (1 − i(q−1)/(q(q+1))).

We denote

f_q(k) = ∏_{i=2}^{k} (1 − i(q−1)/(q(q+1))).

Similarly to <cit.>, we consider a truncated iterative process: we stop the iterative process when |U_{k+1}| ≤ ξ, where ξ ≥ 1 is a value that we may assign arbitrarily so as to improve the estimates. By <cit.>, after the end of the iterative process we can add at most ⌈|U_{k+1}|/2⌉ points to the running subset K_{k+1} in order to get the final saturating set. The size s of the obtained set is

s ≤ k + 1 + ⌈ξ/2⌉ under the condition q^2 f_q(k) ≤ ξ.

Using the inequality 1 − x ≤ e^{−x} we obtain

f_q(k) < e^{−∑_{i=2}^{k} i(q−1)/(q^2+q)} = e^{−(k^2+k−2)(q−1)/(2(q^2+q))},

which implies

f_q(k) < e^{−(k^2+k−2)(q−1)/(2(q^2+q))} < e^{−k^2/(2q+2)},

provided that (k^2+k−2)(q−1)/q > k^2 or, equivalently, k^2/(k−2) < q−1, k < q−4.

Let ξ ≥ 1 be a fixed value independent of k. The value

k ≥ ⌈√(2(q+1)) √(ln(q^2/ξ))⌉

satisfies the inequality q^2 f_q(k) ≤ ξ.

By (<ref>), to provide q^2 f_q(k) ≤ ξ it is sufficient to find k such that e^{−k^2/(2q+2)} ≤ ξ/q^2.

In a plane Π_q it holds that

s(2,q) ≤ √(2(q+1)) √(ln(q^2/ξ)) + ξ/2 + 3, ξ ≥ 1,

where ξ is an arbitrarily chosen value. The assertion follows from (<ref>) and (<ref>).

We consider the function of ξ of the form

ϕ(ξ) = √(2(q+1)) √(ln(q^2/ξ)) + ξ/2 + 3.

Its derivative with respect to ξ is

ϕ'(ξ) = 1/2 − (1/ξ) √((q+1)/(2 ln(q^2/ξ))).

Putting ϕ'(ξ) = 0, it is easy to see that

ξ^2 = (q+1)/(ln q − (1/2) ln ξ).

We look for ξ in the form ξ = √((q+1)/(c ln q)).
By (<ref>),

c = 1 − ln(q+1)/(4 ln q) + (ln c + ln ln q)/(4 ln q).

For simplicity, we choose c ≈ 3/4 and put ξ = √(4q/(3 ln q)). Now, substituting ξ = √(4q/(3 ln q)) into (<ref>), we obtain Theorem <ref>.

(i) Let ξ = 1. From (<ref>) we have

s(2,q) ≤ 2√((q+1) ln q) + 3,

which practically coincides with the bound (<ref>) from <cit.>.

(ii) Let ξ = √q. From (<ref>) we obtain the estimate

s(2,q) ≤ √(3(q+1) ln q) + (1/2)√q + 3,

which practically coincides with Nagy's bound (<ref>). However, as noted below, the value ξ = √(4q/(3 ln q)) gives a better estimate than (<ref>).

We denote the difference

Δ(q) = √(3q ln q) + (1/2)√q + 2 − (√((q+1)(3 ln q + ln ln q + ln(3/4))) + √(q/(3 ln q)) + 3).

It can be shown (e.g. by consideration of the corresponding derivatives) that

Δ(q) > 0 for q ≥ 919,

and, moreover, Δ(q) and Δ(q)/√q are increasing functions of q. For an illustration, see Fig. <ref>, where the top dash-dotted black curve shows Δ(q), while the bottom solid red curve √(q/7) is given for comparison. Note also that

Δ(q)/√q ≈ √(3 ln q) + 1/2 − √(3 ln q + ln ln q) − 1/√(3 ln q),
Δ(q)/√(q ln q) ≈ √3 + 1/(2√(ln q)) − √(3 + (ln ln q)/(ln q)) − 1/(√3 ln q),

whence

lim_{q→∞} Δ(q)/√q = 1/2, lim_{q→∞} Δ(q)/√(q ln q) = 0.

§ UPPER BOUNDS ON THE SMALLEST SIZE OF A SATURATING SET IN THE PROJECTIVE SPACE PG(N,Q), N EVEN

In the following we use the results of <cit.>, which give the following inductive construction.

<cit.> <cit.> Let there exist an [n_q, n_q − 3]_q 2 code with n_q < q. Then, under the condition q + 1 ≥ 2n_q, there is an infinite family of [n, n−r]_q 2 codes with r = 2t−1 ≥ 5, r ≠ 9, 13,

n = n_q q^{t−2} + 2q^{t−3},

where t = 3, 4, 6, and t ≥ 8. For r = 9, 13, it holds that n = n_q q^{t−2} + 2q^{t−3} + q^{t−4} + q^{t−5}.

Now, due to the one-to-one correspondence between covering codes and saturating sets, we obtain the following corollary of Theorem <ref> and Proposition <ref>. We denote

Υ(q) = √((q+1)(3 ln q + ln ln q + ln(3/4))) + √(q/(3 ln q)) + 3.

For the smallest size s(N,q) of a saturating set in the projective space PG(N,q) and for the length function ℓ(2, N+1, q), the following upper bounds hold:

(i) s(N,q) = ℓ(2, N+1, q) ≤ Υ(q) · q^{(N−2)/2} + 2q^{(N−4)/2}, N = 2t−2 ≥ 4, N ≠ 8, 12,

where t = 3, 4, 6, and t ≥ 8, q ≥ 79.

(ii) s(N,q) = ℓ(2, N+1, q) ≤ Υ(q) · q^{(N−2)/2} + 2q^{(N−4)/2} + q^{(N−6)/2} + q^{(N−8)/2}, N = 8, 12.

By Theorem <ref>, in PG(2,q) there is a saturating set of size n_q ≤ Υ(q). From the corresponding [n_q, n_q − 3]_q 2 code, one can obtain [n, n−r]_q 2 codes with parameters as in Proposition <ref>. The condition q + 1 ≥ 2n_q holds for q ≥ 79.

Surveys of the known [n, n−r]_q 2 codes and saturating sets in PG(N,q) with N even can be found in <cit.>. In many cases the bounds (<ref>), (<ref>) are better than the known ones.

[Bartocci] U. Bartocci, Dense k-systems in Galois planes, Boll. Un. Mat. Ital. D (6), 2(1), (1983) 71–77.

[BDGMP_SatSetArxiv] D. Bartoli, A. A. Davydov, M. Giulietti, S. Marcugini, and F. Pambianco, On upper bounds on the smallest size of a saturating set in a projective plane, arXiv:1505.01426 [math.CO] (2015). https://arxiv.org/abs/1505.01426

[BDGMP_SatSet_Petersb] D. Bartoli, A. A. Davydov, M. Giulietti, S. Marcugini, and F. Pambianco, New upper bounds on the smallest size of a saturating set in a projective plane. In: Proc. 2016 XV International Symposium Problems of Redundancy in Information and Control Systems (REDUNDANCY), Russia, St.-Petersburg, September 2016, pp. 18–22. http://ieeexplore.ieee.org/document/7779320/

[BDFKMP-PIT2014] D. Bartoli, A. A. Davydov, G. Faina, A. A. Kreshchuk, S. Marcugini, and F. Pambianco, Upper bounds on the smallest size of a complete arc in PG(2,q) under a certain probabilistic conjecture, Problems Inform. Transmission 50 (2014), 320–339.

[BDFKMP_ComplArc_JG2016] D. Bartoli, A. A. Davydov, G. Faina, A. A. Kreshchuk, S. Marcugini, and F.
Pambianco, Upper bounds on the smallest size of a complete arc in a finite Desarguesian projective plane based on computer search, J. Geom. 107 (2016), 89–117.

[BFMP-JG2013] D. Bartoli, G. Faina, S. Marcugini, and F. Pambianco, On the minimum size of complete arcs and minimal saturating sets in projective planes, J. Geom. 104 (2013), 409–419.

[BFMP-JG2017] D. Bartoli, G. Faina, S. Marcugini, and F. Pambianco, A construction of small complete caps in projective spaces, J. Geom., to appear, DOI: 10.1007/s00022-016-0335-1

[BorSzTic] E. Boros, T. Szőnyi, and K. Tichler, On defining sets for projective planes, Discrete Math. 303 (2005), 17–31.

[Handbook-coverings] R. A. Brualdi, S. Litsyn, and V. S. Pless, Covering radius, in: V. S. Pless, W. C. Huffman, and R. A. Brualdi (Eds.), Handbook of Coding Theory, vol. 1, pp. 755–826, Elsevier, Amsterdam, The Netherlands, 1998.

[BrPlWi] R. A. Brualdi, V. S. Pless, and R. M. Wilson, Short codes with a given covering radius, IEEE Trans. Inform. Theory 35 (1989), 99–109.

[CHLS-bookCovCod] G. Cohen, I. Honkala, S. Litsyn, and A. Lobstein, Covering Codes, North-Holland, Amsterdam, The Netherlands, 1997.

[DavCovCodeSatSetIEEE1995] A. A. Davydov, Constructions and families of covering codes and saturated sets of points in projective geometry, IEEE Trans. Inform. Theory 41 (1995), 2071–2080.

[DavCovRad2] A. A. Davydov, Constructions and families of nonbinary linear codes with covering radius 2, IEEE Trans. Inform. Theory 45 (1999), 1679–1686.

[DGMP_CovCodeNonBin_Pamporovo] A. A. Davydov, M. Giulietti, S. Marcugini, and F. Pambianco, Linear covering codes over nonbinary finite fields. In: Proc. XI Int. Workshop on Algebraic and Combinatorial Coding Theory, ACCT2008, Pamporovo, Bulgaria, June 2008, pp. 70–75. http://www.moi.math.bas.bg/acct2008/b12.pdf

[DGMP_CovCodeR23_Petersb2008] A. A. Davydov, M. Giulietti, S. Marcugini, and F. Pambianco, Linear covering codes of radius 2 and 3. In: Proc. Workshop "Coding Theory Days in St. Petersburg", Saint-Petersburg, Russia, October 2008, pp. 12–17. ISBN 978-5-8088-0378-7 http://iitp.ru/upload/publications/1538/CoverPeter2008.pdf

[DGMP-AMC] A. A. Davydov, M. Giulietti, S. Marcugini, and F. Pambianco, Linear nonbinary covering codes and saturating sets in projective spaces, Adv. Math. Commun. 5 (2011), 119–147.

[DMP-JCTA2003] A. A. Davydov, S. Marcugini, and F. Pambianco, On saturating sets in projective spaces, J. Combin. Theory Ser. A 103 (2003), 1–15.

[DavOst-EJC] A. A. Davydov and P. R. J. Östergård, On saturating sets in small projective geometries, Europ. J. Combinatorics 21 (2000), 563–570.

[DavOst-IEEE2001] A. A. Davydov and P. R. J. Östergård, Linear codes with covering radius R = 2, 3 and codimension tR, IEEE Trans. Inform. Theory 47 (2001), 416–421.

[FainaGiul] G. Faina and M. Giulietti, On small dense arcs in Galois planes of square order, Discrete Math. 267 (2003), 113–125.

[GacsSzonyi] A. Gács and T. Szőnyi, Random constructions and density results, Des. Codes Cryptogr. 47 (2008), 267–287.

[Giul-plane] M. Giulietti, On small dense sets in Galois planes, Electronic J. Combin. 14 (2007), #75.

[Giul2013Survey] M. Giulietti, The geometry of covering codes: small complete caps and saturating sets in Galois spaces, in: S. R. Blackburn, R. Holloway, and M. Wildon (Eds.), Surveys in Combinatorics 2013, London Math. Soc. Lect. Note Series, vol. 409, pp. 51–90, Cambridge Univ. Press, 2013.

[GiulTor] M. Giulietti and F. Torres, On dense sets related to plane algebraic curves, Ars Combin. 72 (2004), 33–40.

[Janwa1990] H.
Janwa, Some optimal codes from algebraic geometry and their covering radii, Europ. J. Combin. 11 (1990), 249–266.
KV J. H. Kim and V. H. Vu, Small complete arcs in projective planes, Combinatorica 23 (2003), 311–363.
Kiss_Cayley G. Kiss, I. Kovács, K. Kutnar, J. Ruff, and P. Šparl, A note on a geometric construction of large Cayley graphs of given degree and diameter, Stud. Univ. Babes-Bolyai Math. 54 (2009), no. 3, 77–84.
Kovacs S. J. Kovács, Small saturated sets in finite projective planes, Rend. Mat. (Roma) 12 (1992), 157–164.
LobstBibl A. Lobstein, Covering radius, an online bibliography, http://perso.telecom-paristech.fr/~lobstein/bib-a-jour.pdf
MP_Austr2003 S. Marcugini and F. Pambianco, Minimal 1-saturating sets in PG(2,q), Australas. J. Combin. 28 (2003), 161–169.
Nagy Z. L. Nagy, Saturating sets in projective planes and hypergraph covers, arXiv:1701.01379 [math.CO] (2017). http://arxiv.org/abs/1701.01379
SzonyiPhD T. Szőnyi, Complete arcs in finite projective geometries. PhD thesis, Univ. L. Eötvös, Budapest, 1984.
SzonyiSurvey T. Szőnyi, Complete arcs in Galois planes: a survey, Quaderni del Seminario di Geometrie Combinatorie 94, Dipartimento di Matematica “G. Castelnuovo”, Università degli Studi di Roma “La Sapienza”, Roma, January 1989.
Ughi E. Ughi, Saturated configurations of points in projective Galois spaces, Europ. J. Combin. 8 (1987), 325–334.
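Numerical illustration (ours, not part of the original article): the Python sketch below evaluates the Nagy-type bound, the new bound Υ(q), and their difference Δ(q) for a few sample field orders; the function names and the chosen values of q are our own.

import math

def upsilon(q):
    # New upper bound on s(2,q): Upsilon(q) from the corollary above.
    return (math.sqrt((q + 1) * (3 * math.log(q) + math.log(math.log(q)) + math.log(3 / 4)))
            + math.sqrt(q / (3 * math.log(q))) + 3)

def nagy(q):
    # Nagy-type bound: sqrt(3 q ln q) + sqrt(q)/2 + 2.
    return math.sqrt(3 * q * math.log(q)) + math.sqrt(q) / 2 + 2

for q in [919, 2003, 10007, 10**6]:
    d = nagy(q) - upsilon(q)
    print(f"q={q:>8}: Nagy={nagy(q):10.2f}  Upsilon={upsilon(q):10.2f}  "
          f"Delta={d:8.2f}  Delta/sqrt(q)={d / math.sqrt(q):6.3f}")

# Delta(q) > 0 for q >= 919, as stated in the text, and Delta(q)/sqrt(q)
# increases slowly toward its limit 1/2.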
http://arxiv.org/abs/1702.07939v1
{ "authors": [ "Daniele Bartoli", "Alexander Davydov", "Massimo Giulietti", "Stefano Marcugini", "Fernanda Pambianco" ], "categories": [ "math.CO", "cs.IT", "math.IT", "51E21, 51E22, 94B05" ], "primary_category": "math.CO", "published": "20170225192641", "title": "Upper bounds on the smallest size of a saturating set in projective planes and spaces of even dimension" }
B. Fichtinger, bibiana.fichtinger@univie.ac.at Institute of Astrophysics, University of Vienna, Türkenschanzstrasse 17, 1180 Vienna, Austria Department of Physics and Astronomy, University of Iowa, 203 Van Allen Hall, Iowa City, IA 52242, USA Department of Astronomy, California Institute of Technology, 1200 E. California Blvd., Pasadena, CA 91125, USA Department of Geology and Geophysics, University of Hawaii, Honolulu, Hawaii, HI 96822, USA Center for Astrophysics and Space Astronomy, University of Colorado, Boulder, CO 80309-0389, USA

Observations of free-free continuum radio emission of four young main-sequence solar-type stars (EK Dra, π^1 UMa, χ^1 Ori, and κ^1 Cet) are studied to detect stellar winds or at least to place upper limits on their thermal radio emission, which is dominated by the ionized wind. The stars in our sample are members of The Sun in Time programme and cover ages of ∼ 0.1 - 0.65 Gyr on the main-sequence. They are similar in magnetic activity to the Sun and thus are excellent proxies for representing the young Sun. Upper limits on mass loss rates for this sample of stars are calculated using their observed radio emission. Our aim is to re-examine the faint young Sun paradox by assuming that the young Sun was more massive in its past, and hence to find a possible solution for this famous problem. The observations of our sample are performed with the Karl G. Jansky Very Large Array (VLA) with excellent sensitivity, using the C-band receiver from 4 - 8 GHz and the Ku-band from 12 - 18 GHz. Atacama Large Millimeter/Submillimeter Array (ALMA) observations are performed at 100 GHz. The Common Astronomy Software Application (CASA) package is used for the data preparation, reduction, calibration, and imaging. For the estimation of the mass loss limits, spherically symmetric winds and stationary, anisotropic, ionized winds are assumed. We compare our results to 1) mass loss rate estimates of theoretical rotational evolution models, and 2) results of the indirect technique of determining mass loss rates: Lyman-α absorption. We are able to derive the most stringent direct upper limits on mass loss so far from radio observations. Two objects, EK Dra and χ^1 Ori, are detected at 6 and 14 GHz down to an excellent noise level. These stars are very active, and additional radio emission identified as non-thermal emission was detected, but limits for the mass loss rates of these objects are still derived. The emission of χ^1 Ori does not come from the main target itself, but from its M-dwarf companion. The stars π^1 UMa and κ^1 Cet were not detected in either C-band or Ku-band. For these objects we give upper limits to their radio free-free emission and calculate upper limits to their mass loss rates. Finally, we reproduce the evolution of the Sun and derive an estimate for the solar mass at a younger age.

Radio emission and mass loss rate limits of four young solar-type stars
Bibiana Fichtinger 1 Manuel Güdel 1 Robert L. Mutel 2 Gregg Hallinan 3 Eric Gaidos 4 Stephen L. Skinner 5 Christene Lynch 2 Kenneth G. Gayley 2
Received date / Accepted date
===================================================================================================================================================

§ INTRODUCTION

Geological evidence suggests that the early Earth had a warmer climate in the first few 100 Myr of its evolution.
Such a mild and warm climate on the early Earth 4 Gyr ago was necessary and essential for the evolution and formation of life on our planet <cit.>. However, solar standard models predict a lower bolometric luminosity of the Sun at that time, being just 70% the present-day luminosity. The evolution of the Sun's luminosity had an important effect on the formation of the atmosphere for our Earth and for early Mars. Without an atmosphere on Earth, the average surface temperature would have been 235 K only. Additional present-day greenhouse gases would have raised the temperature to ∼ 253 K, which is still not enough to avoid the completely frozen surfaces on early Earth and Mars <cit.>. The discrepancy between the implications from the solar standard models and the geological evidence for a warmer climate on Earth is defined as the “faint young Sun paradox” (FYSP). Apart from a number of proposed solutions of the FYSP <cit.>, an astrophysical solution for this problem has been suggested. It assumes that the young main-sequence Sun was brighter than suggested by the standard model, which would be possible if it had been more massive than today and consequently suffered from an increased mass loss during its early main-sequence life through an enhanced solar wind <cit.>. Winds play an important role in stellar evolution for main-sequence stars like the Sun, especially for the stellar angular momentum. We know that stars spin down with age, because angular momentum is carried away by the magnetized, ionized winds. To understand the mechanism of the interaction between the stellar wind, stellar rotation, and the magnetic field for stars with various ages, information on how winds evolve with time is required. Furthermore, the evolution of stellar winds is important for the evolution of planetary atmospheres and their erosion <cit.>. Most of what we know about stellar winds comes from studies of the solar wind, although the mechanisms for generating, accelerating, and heating the solar wind are still poorly understood <cit.>. Today, the most common way to assess stellar winds and therefore, determine stellar mass loss rates, is to observe the Lyman-α excess of the neutral interstellar hydrogen in high-resolution Hubble Space Telescope spectra of stars, as introduced by <cit.>, <cit.>, and <cit.>. Due to charge exchange interactions between neutral interstellar hydrogen and the ionized wind, a "wall" of hot neutral hydrogen at the edge of the stellar astrosphere is built up. The detected material is not from the fully ionized wind itself, which has no HI, but is interstellar HI instead that is heated within the interaction region between the wind and the local interstellar medium. The amount of astrospheric HI absorption provides diagnostic information on the rate of mass loss of the wind, specifically the momentum in the wind. In <cit.> the correlation of the mass loss rates with coronal properties was studied. The coronal X-ray luminosity is a good indicator for the magnetic activity of a star and the scaling relationship between the mass loss rate per unit surface area and the X-ray surface flux is Ṁ∝ F_X^1.34± 0.18, which, combined with X-ray luminosity evolution versus time L_X∝ t^-1.5 <cit.>, suggests that the mass loss rate decreases with time for solar-like stars like Ṁ∝ t^-2.33±0.55 <cit.>. The correlation is, however, still not sufficient to solve the FYSP <cit.>. 
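To gauge the magnitudes this scaling implies, the quoted Ṁ ∝ t^-2.33 relation can be propagated backward from the present-day solar wind. The short Python sketch below is our own illustration; the normalization Ṁ = 2 × 10^-14 M_⊙ yr^-1 at 4.5 Gyr is the present-day solar value cited below.

# Illustrative backward extrapolation of the Mdot ~ t^-2.33 scaling,
# normalized to the present-day solar wind (2e-14 Msun/yr at t = 4.5 Gyr).
MDOT_SUN = 2e-14      # Msun/yr, present-day solar mass loss rate
T_SUN = 4.5           # Gyr, solar age
ALPHA = 2.33          # power-law index (quoted uncertainty +/- 0.55)

def mdot(t_gyr, alpha=ALPHA):
    # Wind mass loss rate in Msun/yr at age t_gyr under the power law.
    return MDOT_SUN * (t_gyr / T_SUN) ** (-alpha)

for t in [0.1, 0.3, 0.65, 1.0, 4.5]:
    print(f"t = {t:4.2f} Gyr : Mdot ~ {mdot(t):.1e} Msun/yr")

# At ~0.1 Gyr this gives ~1e-10 Msun/yr, i.e. the upper end of the
# 1e-12 .. 1e-10 Msun/yr range invoked by mass-loss solutions of the FYSP.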
However, as these authors concluded in their study, this correlation fails for the youngest and most active stars for which winds appear to be very weak in Lyman-α observations. <cit.> observed χ^1 Ori, but they were unable to provide any astrospheric detections in their study. They argued that a non-detection does not generally provide a meaningful upper limit to the stellar wind strength, because for non-detections the star could be surrounded by a fully ionized interstellar medium (ISM) <cit.>.Observing and detecting stellar winds similar to the solar case is important to improve our understanding of stellar evolution, such as the correlation between rotation and stellar magnetic activity which provides information on the dynamo and thus magnetic activity. Furthermore, the understanding of acceleration mechanisms of these winds could be improved and the measurements of wind properties of stars with different ages may provide essential information on stellar angular momentum loss. Radio observations of young, solar-type stars are used in our study to test if there was a strong mass loss in the young Sun. A study of the "radio Sun in time", complementing the "X-ray Sun in time" <cit.>, can explore the range and the long-term evolution of solar and stellar magnetic activity and wind mass loss. First detections of radio emission from low-mass main-sequence stars were reported by <cit.> and <cit.>. Limits to mass loss have already been established from radio observations of more massive A and F stars <cit.> and active, less massive M stars <cit.>. <cit.> observed early type O and B supergiants to make a detailed comparative study of the mass loss evaluated from Hα and radio continuum observations. <cit.> and <cit.> used the Very Large Array (VLA) to search for radio emission of the active, young, solar-type stars π^1 UMa, κ^1 Cet and β Com at 8.4 GHz. Their observations resulted in 3 σ detection limits of 20 - 30 μ Jy, which correspond to radio luminosities of ∼ 10^12.5 ergs^-1 Hz^-1 <cit.>. Early radio observations of EK Dra were recorded in <cit.>, where at minimum the 8.4 GHz flux was (34 ± 11)μ Jy, and at intermediate levels (77 ± 9)μ Jy.To derive an estimate or upper limit for the enhanced young solar wind, we observed radio emission of young, solar-like analogues at the main-sequence with the Karl G. Jansky Very Large Array (VLA) and the Atacama Large Millimeter/Submillimeter Array (ALMA). If we are able to detect free-free radio emission of such winds, their mass loss rates can be calculated. From climate predictions the initial (zero-age main-sequence, ZAMS) solar mass is required to be in the range of 1.03 - 1.07 M_⊙ if it were to solve the FYSP <cit.>, suggesting an enhanced early wind mass loss of the order of 10^-12 - 10^-10 M_⊙yr^-1. In comparison, the present-day solar wind mass loss amounts to 2 × 10^-14 M_⊙yr^-1 <cit.>.In this paper we focus on the four young solar analogues EK Dra,π^1 UMa, χ^1Ori, and κ^1 Cet using the upgraded sensitivity and resolution of the VLA. In Section <ref>, we briefly describe the observations including a description of our targets. Section <ref> contains the results of our detections and upper limits of radio emission. 
The calculation of the mass loss rates of our star sample will be described in Section <ref>, where we will also compare our observational results to results from Lyman-α absorption, presented by <cit.>.§ OBSERVATIONS§.§ Target sampleOur target sample includes the following objects (see also Table <ref> summarizing the properties of our stars):EK Dra: This is a G1.5 V star that is considered to be among the most active solar analogues in our neighbourhood, with a distance of 34 pc from the Sun. The average rotation period is 2.68 days. Main properties are reviewed by <cit.> and <cit.>. <cit.> adopted an age of about 100 Myr for this near-ZAMS star.π^1 UMa: This is a young, active G1.5 V solar proxy with a rotation period of about 4.9 days <cit.> and a distance of 14.3 pc. In the Sun in Time programme, π^1 UMa is reported to have an age of  300 Myr <cit.>.χ^1 Ori: A G1V star with a rotation period of about 5.2 days <cit.>, a distance of 8.7 pc and an age of  300 Myr <cit.>.The star χ^1 Ori is classified as a member of the Ursa Major moving group <cit.>.κ^1 Cet: With a spectral type G5 V, it is the coolest star in the sample, with a distance of 9.2 pc from the Sun. <cit.> determined spectroscopic parameters. The rotation period is reported by <cit.> to be about 9.2 days and the age is suggested to be around 650 Myr <cit.>. §.§ VLA and ALMAFor the radio measurements we use the Karl G. Jansky VLA, a radio interferometer located in New Mexico near Socorro, operated by the National Radio Astronomy Observatory (NRAO). We use C-band (4 - 8 GHz, 6 cm) and Ku-band (12 - 18 GHz, 2 cm) receivers. The Jansky VLA operates with an increased sensitivity relative to the VLA. The observations were performed in C configuration in sessions in spring/summer 2012 and 2013. The Common Astronomy Software Application (CASA) developed by the NRAO has been used for inspecting, editing (including flagging), calibrating, and imaging the data sets. Flux calibrators were observed at the beginning of each observation for several minutes and the phase calibrators were repeatedly observed together with the targets. An overview and summary of the observations is given in Table <ref>. For the calibration, the raw data needs to be inspected first, which means that bad data due to antenna errors, shadowed antennas, or poor weather conditions need to be flagged, that is removed from the data set. Afterwards, flux, bandpass, and gain calibration steps are applied. We used a pipeline for VLA data[VLA Calibration Pipeline: https://science.nrao.edu/facilities/vla/data-processing/pipeline] that deals with the flagging and calibration. We used this pipeline, but additional flagging was necessary afterwards.The Atacama Large Millimeter/submillimeter Array (ALMA), located in the Chajnantor plain of the Chilean Andes, was used to observe in band 3 (with a bandwidth of 84-116 GHz) at 100 GHz in December 2013 within Cycle 1. We got observing time for one of our targets, χ^1 Ori. For the ALMA data the NRAO staff provided prefabricated scripts together with our data for flagging and calibration. In the meantime, a calibration pipeline for ALMA has been developed as well.[ALMA Pipeline: http://casa.nrao.edu/casa_obtaining.shtml.] 
For our data analysis, we used these pipelines, but some extra flagging and a second run through the pipeline calibration were necessary.From NRAO's exposure calculator for the VLA, the theoretical noise sensitivity with 2 GHz bandwidth, 27 antennas, and one hour on source is calculated to be around 3.5 μ Jy rms in C-band and around 3.8 μ Jy rms in Ku-band. These values represent the expected random noise levels for π^1 UMa and κ^1 Cet, respectively. Except for π^1 UMa in C-band, the achieved noise levels are in good agreement with the expected values (see Table <ref>). For EK Dra with a bandwidth of 3.5 GHz and 26 antennas, we would expect a noise level of 2.2 μ Jy in C-band and 2.9 μ Jy in Ku-band, whereas the achieved values are slightly higher. For χ^1 Ori, 3.5 GHz bandwidth and 26 antennas, the theoretical rms is 1.6 μ Jy in C-band, which is in good agreement with the observational noise. In Ku-band the expected noise is around 2.2 μ Jy, whereas the achieved rms is lower, namely 1.6 μ Jy. The CLEAN procedure is applied to produce images, where the Clark algorithm with natural weighting was chosen for the setting. Depending on the wavelength and the number of antennas, a cell size of 0.7 and 0.3 for C and Ku-band was used, respectively.§ RESULTS§.§ ImagesFor each target several observation sets are available. To obtain the final images of each target, all data sets in each frequency band are combined to concatenated images which are shown for the detections in Figs. <ref> and <ref>. The crosses mark the expected positions of the sources predicted from the SIMBAD Astronomical Database[http://simbad.u-strasbg.fr/simbad/.] corrected for proper motion to the epoch of observation. Minor offsets in right ascension and declination for EK Dra and χ^1Ori occur in our analysis. We note that the field of view in the Ku-band images is much smaller than the C-band images. Several sources can be identified in all images, but only two of the four objects of the sample show a radio detection signal at the expected positions. The targets EK Dra and χ^1Ori, shown in Figs. <ref> and <ref>, are detected in Stokes I (total intensity), with a total flux of around 100 μ Jy.On the other hand, π^1 UMa and κ^1 Cet (not shown) display non-detections at the expected target positions both in C and Ku-band. §.§ Radio emission from the stellar windThe free–free spectrum from thermal bremsstrahlung radiation is characterized by a power-law spectral index α, ranging from-0.1 ≤α≤ 2, where the flux density is given by S_ν∝ν^α at the frequency ν. The integrated flux densities of the detections are determined by fitting the stellar images by a Gaussian profile. The associated rms values in Stokes I and V (circularly polarized intensity) in both wavelength bands are given in Table <ref>. For those objects for which no detection was observed, the 3σ upper limit to the flux density from a source-free background region is estimated. Time series of the sources provide information on the variation of each observation interval. For each object, separate images for each time interval of about five minutes in the Stokes I/V plane are created and hence, the time-dependent flux and the related rms are extracted. Because the observing time is much shorter than the stellar rotation periods, no rotational modulation should be seen. On the other hand, short time variations of a few minutes are indicators for flares. The results for the four targets are summarized in the following sections. 
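Before moving to the individual targets, the imaging step just described can be made concrete. The sketch below is our illustration, not the authors' actual script: it uses the modern CASA tclean task as a stand-in for the CLEAN procedure of the time, and the measurement set name, field index, image size, and iteration cap are placeholders; only the deconvolver, weighting, and cell size follow the text.

# Minimal CASA (Python) imaging sketch for one calibrated C-band data set.
# 'target_Cband.ms' is a placeholder measurement set; Clark CLEAN, natural
# weighting, and 0.7 arcsec cells for C-band follow the text (0.3 arcsec
# would be used for Ku-band).
tclean(vis='target_Cband.ms',
       imagename='target_Cband_StokesI',
       field='0',                # placeholder field index for the target
       specmode='mfs',           # multi-frequency synthesis over the 4-8 GHz band
       deconvolver='clark',      # Clark CLEAN algorithm
       weighting='natural',      # natural weighting, as in the text
       cell='0.7arcsec',
       imsize=[2048, 2048],      # assumed image size (not quoted in the paper)
       stokes='I',
       niter=1000)               # assumed iteration cap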
§.§.§ EK Dra
We obtained a clear detection of EK Dra at 14^h 38^m 59^s.96, +64^∘ 17^' 29^''.49. The offsets from the predicted positions in C-band are 0.02s in right ascension and -0.19″ in declination, and 0.04s and 0.84″ in Ku-band, well within the beam sizes of 4.96″ × 3.43″ and 2.89″ × 2.26″, respectively. Because EK Dra is a very active star, we expect that the radio emission will include coronal emission <cit.>. The Stokes I radio flux was 59.3 ± 1.7 μJy with an rms of 3.4 μJy in C-band. Judging from the light curve, no flare event seems to be present. In Ku-band the radio emission is observed at 73 ± 2.4 μJy with an rms of 4 μJy. EK Dra's radio emission cannot be only thermal free-free emission, as also argued by <cit.>, given the variability and the high flux level. The polarization degree r_c = V/I, which ranges from -1 to 1, is found to vary in the range r_c = [-0.088, -0.015] in C-band. In Ku-band our observation does not show any significant non-zero Stokes V flux.
§.§.§ π^1 UMa
The star π^1 UMa, expected at 08^h 39^m 11^s.65, +65^∘ 01^' 16^''.46, is a non-detection and was already studied by other authors <cit.>. The 3σ upper limits of the integrated radio intensities are 23.1 μJy in C-band and 6.3 μJy in Ku-band. The C-band intensity limit is high compared to the Ku-band result because, despite heavy flagging and cleaning, the residual of a strong source strongly perturbs our object region and consequently raises the rms. During the observation the fringe pattern directly crossed the expected position of π^1 UMa, increased the background noise, and hence negatively influenced the radio emission estimation for π^1 UMa. Therefore, the radio flux density upper limit in C-band is not as useful as desired. On the other hand, the Ku-band flux density upper limit of 6.3 μJy is excellent and useful for further analysis and interpretation. The polarization map also shows only noise. <cit.> reported a non-detection at the location of π^1 UMa as well. They placed a 2σ upper limit of 12 μJy at 3.6 cm (X-band) for the total flux density. Our VLA observations lower these upper limits by a factor of around two.
§.§.§ χ^1 Ori
The star χ^1 Ori is located at 05^h 54^m 22^s.78, +20^∘ 16^' 33^''.58 in our observations. It shows strong radio emission, seen with offsets in C-band of -0.06s in right ascension and 0.62″ in declination relative to the expected position (cross in Fig. <ref>a) using Hipparcos[http://archive.ast.cam.ac.uk/hipp/hipparcos.html.] measurements (<cit.>). In Ku-band the offsets to the observational positions are -0.07s in right ascension and 0.61″ in declination. The integrated radio flux densities in Stokes I as given in Table <ref> are 110 ± 0.7 μJy with an rms of 1.8 μJy in C-band, and 117 ± 2.7 μJy with a corresponding rms of 1.6 μJy in Ku-band. The flux density at 100 GHz measured with ALMA is 103 ± 4.9 μJy in Stokes I. The proper motion corrected offset in right ascension is around -0.03s; in declination it is 0.29″. The C-band light curve shows a flare that can be clearly identified during an observation interval with a duration of less than 30 minutes (see Fig. <ref>). The peak reaches a flux density about three times the quiescent level.
The occurrence of the flare and the fact that the slope of the spectrum is slightly negative with increasing frequency suggest that the radio emission of χ^1 Ori is not exclusively thermal radio bremsstrahlung from a wind but is dominated by gyrosynchrotron emission from accelerated electrons. The third indication supporting this assumption is a ≈ 10% Stokes V signal (see Table <ref>). The degree of circular polarization r_c is in the range r_c = [-0.35, 0.63] with a maximum sigma σ = 0.18 in C-band, and r_c = [-0.68, 0.72] with σ = 0.49 in Ku-band. For ALMA, no Stokes V measurements were available for Cycle 1 data sets. The images of χ^1 Ori show that the corrected coordinates (black crosses in Fig. <ref>) do not properly match the observed positions from the Gaussian fit (black dots). Therefore, we analyzed whether the radio signal may come from the M-dwarf companion of χ^1 Ori <cit.>. To derive the position of χ^1 Ori B, the orbit of χ^1 Ori has to be corrected first. The orbital parameters are taken from <cit.> and <cit.>. By correcting the orbit from JD1991.25 to JD2012.4, when our VLA observations took place, and by including the correction for proper motion from Hipparcos, the expected coordinates for χ^1 Ori are derived. The position of the companion is determined by using the mass ratio between primary and companion, and is displayed by the red cross in the images of Fig. <ref> and listed in Table <ref> (in C-band only). The two components are separated by 0.49″ from each other. Some systematic errors arise from Hipparcos itself, especially because Hipparcos did not recognize the binarity of χ^1 Ori, and errors in proper motion and possible position errors of the phase calibrator during the observation may contribute to the residual deviation of the detected coordinates. We checked for new Gaia position measurements,[http://gea.esac.esa.int/archive/.] but unfortunately there are no data available for χ^1 Ori. If the companion is responsible for the radio emission, which seems likely, we will still use the observed radio emission of χ^1 Ori A or B for our further analysis and mass loss rate calculations, considering it to be an upper limit to the thermal wind emission.
§.§.§ κ^1 Cet
Another non-detection is κ^1 Cet, expected at 03^h 19^m 21^s.93, +03^∘ 22^' 13^''.99. We therefore report upper limits for the radio emission. Because of the high sensitivity of the VLA, the surrounding noise can be measured at a very low level, although a strong source showing up in the C-band image disturbs the field and contributes to the rms even after careful cleaning. The 3σ rms noise level is used as an upper limit to the radio emission, which is 9 μJy both in C-band and Ku-band.
§.§ Chromospheric emission
We expect that the emission from the stellar chromosphere is small compared to the wind emission. Nevertheless, we estimate the emission from the stellar disk, assuming it originates in the chromosphere <cit.>. For an optically thick chromosphere at 10 GHz we assume a temperature of 20 000 K <cit.>. At 100 GHz we expect a lower temperature of typically 10 000 K, although it can be even lower. Furthermore, we assume that the entire surface of the star is covered by chromospheric emission. Using the standard formula for the radio flux from a blackbody with brightness temperature T, the predicted flux density at 100 GHz is S_ν = 4.94 × 10^-26/d^2 erg cm^-2 s^-1 Hz^-1, with d in pc. For χ^1 Ori at a distance of d = 8.7 pc the flux density is 65 μJy for ALMA (100 GHz).
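These figures can be checked against the Rayleigh-Jeans flux of an optically thick stellar disk, S_ν = (2kTν^2/c^2)·(πR_*^2/d^2). The sketch below is our own arithmetic check; it assumes a radius of ≈ 1 R_⊙ for χ^1 Ori, which is not stated explicitly in the text.

import math

# Rayleigh-Jeans flux from an optically thick stellar disk:
# S_nu = (2 k T nu^2 / c^2) * (pi R*^2 / d^2), cgs units.
k_B = 1.380649e-16        # erg/K
c = 2.99792458e10         # cm/s
R_sun = 6.957e10          # cm
pc = 3.0857e18            # cm

def disk_flux_uJy(T, nu_Hz, R_star_cm, d_cm):
    # Blackbody (Rayleigh-Jeans) flux density in microJansky.
    intensity = 2 * k_B * T * nu_Hz**2 / c**2          # erg/cm^2/s/Hz/sr
    omega = math.pi * (R_star_cm / d_cm)**2            # solid angle in sr
    return intensity * omega / 1e-23 * 1e6             # Jy -> microJy

# chi^1 Ori at d = 8.7 pc; R* ~ 1 R_sun is our assumption for this analogue.
print(disk_flux_uJy(1e4, 100e9, R_sun, 8.7 * pc))   # ~65 microJy at 100 GHz
print(disk_flux_uJy(2e4, 10e9, R_sun, 8.7 * pc))    # ~1.3 microJy at 10 GHz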
Hence, part of the emission observed with ALMA can be of chromospheric origin, but it is probably not the only emission source and cannot explain the detected 100 μJy alone. At 10 GHz the expected maximum flux density is 1.3 μJy only, and therefore not significant for the VLA detections. For the non-detections, a chromosphere could probably be detected with deeper observations, as in <cit.>.
§ MASS LOSS RATES
§.§ Spherically symmetric winds
Radio flux density measurements can provide estimates for mass loss rates. The radio free-free flux spectrum for an optically thick, constant-velocity, fully ionized, isothermal spherical wind is predicted to be of the form <cit.>: S_ν = 0.9 × 10^11 (Ṁ/v)^4/3 T^0.1 ν^0.6 d^-2 mJy, where Ṁ is the mass loss rate in M_⊙ yr^-1, T the temperature of the plasma in K, ν the frequency in GHz, v the wind velocity in km s^-1, and d the stellar distance in pc. At any frequency one essentially sees emission from gas down to the level where the gas becomes optically thick. <cit.> argue that deviations from α_op = 0.6 (the exponent of ν in Eq. <ref>) may be caused either by variability due to non-uniform mass loss rates or by an increasing fraction of neutral gas with distance in the region responsible for the radio emission. Using this formula and assuming a temperature of T = 10^6 K and an average wind velocity of v = 400 km s^-1, the mass loss rate for an (optically thick) wind of π^1 UMa would be Ṁ ≤ 1.1 × 10^-10 M_⊙ yr^-1 for C-band and Ṁ ≤ 2.9 × 10^-11 M_⊙ yr^-1 for Ku-band. The star κ^1 Cet would show a mass loss rate of Ṁ ≤ 2.8 × 10^-11 M_⊙ yr^-1 for C-band and Ṁ ≤ 1.9 × 10^-11 M_⊙ yr^-1 for Ku-band with the same assumed temperature and velocity profiles. Apart from spherically symmetric (isotropic) winds, we will also discuss the possibility of anisotropic, collimated "jet" flows below.
§.§.§ Radiative transfer equation for non-isothermal winds
A point we have to consider is that the temperature in the solar wind (and presumably in winds from other stars) is not constant but decreases with distance r. Close to the surface the wind is dense and hot, but it cools as it expands. This radial temperature profile can be roughly described by a T ∝ r^-0.5 power law <cit.>. Because of this, we studied the case of variable temperature and therefore re-formulated the general radiative transfer equation. As described in <cit.>, the intensity I_ν(ξ) along any line of sight in local thermodynamic equilibrium (LTE) is given by I_ν(ξ) = B(ν)(1 - e^-τ(ξ)), where ξ is the distance from the surface of the star out to a boundary of about 200 stellar radii (to ensure that the entire emission region is contained in the calculation volume), measured in the plane perpendicular to the line of sight. A grid for the temperature and density at each grid point was constructed. Emission and absorption were determined for each grid cell at a given distance from the source to create a ring structure with radius ξ around the source. Moving out to several stellar radii, the contributions from the ring elements are summed up, where the region behind the star is excluded. The optical depth along any line of sight is calculated using: τ_ν(s) = ∫_s^∞ n^2 κ_ν(T) ds, where κ(ν) is defined as in <cit.>: κ(ν) = 8.436 × 10^-28 [ν/10 GHz]^-2.1 [T_e/10^4 K]^-1.35. Taking the full geometry into account, we finally obtain a variable-temperature transfer equation that can be easily solved numerically. Results are displayed in Fig. <ref>, shown as the red line.
The black spectrum displays the solution for a constant temperature. We see that a variable temperature causes minor changes in the steepness of the spectrum, which may lead to a slightly higher flux density and may influence the derived mass loss rate. This is probably because of the n^2 dependence of Eq. <ref> and the strong dependence of the density, and hence the temperature, on distance; thus most emission comes from very close to the star. Using the equation of mass continuity, Ṁ = 4π r^2 ρ v, the mass loss rate will be approximately 1.1 - 1.6 times higher if the temperature is assumed not to be constant. The change in mass loss implied by variations in the temperature is relatively small compared to that due to a change in velocity. The mass loss rate would be enhanced by a factor of about two if the velocity (see Eq. <ref>) changed from v = 400 km s^-1 to v = 800 km s^-1.
§.§ Conical winds
The stars in our sample are very young and active, hence we investigate anisotropic, collimated winds where the magnetic activity is concentrated at the poles <cit.>. <cit.> showed that a well-collimated ionized flow can display a behaviour quite different from that of quasi-spherical flows, and they calculated the thermal continuum emission from collimated, ionized winds ("jets") in the presence of gradients in jet width, velocity, ionization, and temperature. Because the structure in continuum source spectra contains much information about the flow physics, it is important to get a good frequency coverage of the target sample. The total radio flux of a collimated stellar wind given by <cit.> is: S_ν = ∫_y_0^y_max [2w(r)/d^2] (a_j/a_κ) T ν^2 (1 - e^-τ) dy. Here, y is defined as y = r sin i, with r being the length of the jet and i its inclination (see Fig. 1 in <cit.>). The half-width of the jet is described by w(r), d is the distance to the source, T the temperature, ν the frequency, and τ the optical depth based on the wind density, velocity, and temperature. The constants a_j = 6.50 × 10^-38 and a_κ = 0.212 link the free-free emission and absorption coefficients: j_ν/κ_ν = (a_j/a_κ) T ν^2. The jet half-width, optical depth, temperature, velocity, and density are assumed to vary with r/r_0 as power laws with indices ϵ, q_τ, q_T, q_v, and q_n, respectively. The velocity and density contribute indirectly, via their indices, to the optical depth in Eq. <ref>. Different values are assigned to each parameter depending on the model type <cit.>, and these quantities are summarized in Table <ref>. For example, for a constant-velocity, fully ionized, adiabatic jet the exponents are chosen to be ϵ = 1, q_n = -2, q_T = -4/3, q_v = 0, and q_τ = -1.2 (model B, see Table <ref>). These variations change the spectral index α_op to 0.83 for a non-isothermal jet, instead of α_op = 0.6 for isothermal flows. Calculating Eq. <ref> numerically for the properties of π^1 UMa with T = 10^6 K, n = 2 × 10^10 cm^-3, an opening angle of 40^∘ (centred at the pole), and using the standard spherical model quantities (model A), the total flux spectrum is determined and is displayed in Fig. <ref>, where it reveals a positive slope of around α_op = 0.6 for the optically thick wind and a change to α = -0.1 at high frequencies for the optically thin regime. To derive upper limits for mass loss rates, different values of velocity and temperature for a standard spherical jet flow, that is α_op = 0.6, are applied. It is clear that faster but cooler winds lead to stronger mass loss rates.
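The optically thick spectral indices quoted here (0.6 for the isothermal standard spherical model, 0.83 for the non-isothermal jet) can be checked with the Reynolds-type power-law relations underlying this model family; the sketch below is our own consistency check, assuming the relations q_τ = ϵ + 2q_n - 1.35q_T and α_op = 2 + 2.1(1 + ϵ + q_T)/q_τ.

# Consistency check of the optically thick spectral index for the jet models,
# assuming the Reynolds (1986) relations:
#   q_tau = eps + 2*q_n - 1.35*q_T
#   alpha = 2 + 2.1*(1 + eps + q_T)/q_tau
def alpha_op(eps, q_n, q_T):
    q_tau = eps + 2 * q_n - 1.35 * q_T
    return q_tau, 2 + 2.1 * (1 + eps + q_T) / q_tau

# Model A: standard spherical wind (eps = 1, q_n = -2, q_T = 0)
print(alpha_op(1, -2, 0))        # -> q_tau = -3.0, alpha = 0.60
# Model B: constant-velocity, fully ionized, adiabatic jet (q_T = -4/3)
print(alpha_op(1, -2, -4/3))     # -> q_tau = -1.2, alpha = 0.83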
Upper limits for mass loss rates for all three model types, the standard spherical, adiabatic spherical, and adiabatic collimated flow, are calculated using a constant-velocity wind of v = 400 km s^-1 with the following <cit.> formula:

Ṁ_-6 = 0.938 v_8 x_0^-1 (μ/m_p) (S_mJy ν_10^-α_op)^3/4 d_kpc^3/2 ν_m10^(3α_op/4 - 0.45) θ_0^3/4 T_4^-0.075 (sin i)^-1/4 F^-3/4,

where Ṁ_-6 ≡ Ṁ/(10^-6 M_⊙ yr^-1), v_8 ≡ v/(10^8 cm s^-1), ν_10 ≡ ν/(10^10 Hz), T_4 ≡ T/(10^4 K) and F ≡ 2.1^2/[q_τ(α_op - 2)(α_op + 0.1)]. The frequency ν_m10 is defined as the turnover frequency where the source becomes completely transparent <cit.>, and S_mJy is the observed radio flux of our objects in mJy. Table <ref> summarizes the maximum mass loss rates for π^1 UMa and κ^1 Cet at 6 GHz and 14 GHz for all three models by changing the model parameters, with a temperature of T = 10^6 K, a velocity of v = 400 km s^-1, and an opening angle of 40^∘. If we assume that the mass loss rates are a function of the opening angle, they increase with increasing opening angle. For example, the mass loss rate with an opening angle of 20^∘ is Ṁ ≤ 3.0 × 10^-12 M_⊙ yr^-1 for π^1 UMa for Ku-band. Enlarging the angle to 60^∘, the mass loss rate increases to Ṁ ≤ 6.7 × 10^-12 M_⊙ yr^-1. A higher velocity of v = 800 km s^-1 would raise the mass loss rates by a factor of two. Although we are not able to detect any radio emission signal for π^1 UMa and κ^1 Cet, we can thus give meaningful upper limits to the mass loss rates of these young stars within a range of reasonable wind opening angles and wind temperatures. As already mentioned, additional coronal, partly flaring radio emission for EK Dra and χ^1 Ori was detected and identified as non-thermal emission, but we can nevertheless provide meaningful upper limits by adopting the detected non-thermal flux densities as upper limits to the thermal emission. We calculate the maximum mass loss of both stars for a spherically symmetric and a conical wind, as done for the non-detections. These mass loss rates for EK Dra and χ^1 Ori in both frequency bands are summarized in Table <ref>.
§.§ Absorption of the wind due to flares
The presence of flares and polarized emission on EK Dra and χ^1 Ori implies that any radio contribution from winds must be significantly lower than the detected radiation. The non-thermal and flare emission originate close to the surface of the star. The fact that they are detectable implies that the stellar wind is optically thin to this radiation. An assessment of the maximum mass loss possible for an optically thin wind was suggested in <cit.>. A stronger wind would completely absorb the observed radiation from coronal radio flares. The radius at which a spherically symmetric wind becomes optically thick at a given frequency ν can be derived from the expression <cit.>: R(ν)/R_⊙ ≈ 6 (ν/10 GHz)^-2/3 (T/10^4 K)^-1/2 × (Ṁ/10^-10 M_⊙ yr^-1)^2/3 (v_∞/300 km s^-1)^-2/3. Because the non-thermal emission from the star must originate above the optically thick surface at the observing frequency to be detectable, we set R(ν) to R_∗. Assuming the terminal velocity to be v_∞ = 400 km s^-1 and the temperature T = 10^6 K, and solving Eq. <ref> for Ṁ, we find a maximum wind mass loss rate of Ṁ ≤ 1.3 × 10^-10 M_⊙ yr^-1 for C-band and Ṁ ≤ 6.9 × 10^-10 M_⊙ yr^-1 for Ku-band for EK Dra. With the same velocity and temperature profiles for χ^1 Ori, a wind with Ṁ ≤ 1.3 × 10^-10 M_⊙ yr^-1 for C-band at 6 GHz and Ṁ ≤ 7.2 × 10^-10 M_⊙ yr^-1 for Ku-band at 14 GHz is derived.
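As a rough numerical check, the expression above can be inverted for Ṁ by setting R(ν) = R_*. The sketch below is ours and assumes R_* ≈ 1 R_⊙ for both stars (the stellar radii are not quoted in the text); it reproduces the quoted limits to within a factor of about two, the residual difference presumably reflecting the adopted radii and rounding.

# Invert R(nu)/R_sun = 6 (nu/10GHz)^(-2/3) (T/1e4K)^(-1/2)
#                      * (Mdot/1e-10)^(2/3) (v_inf/300)^(-2/3)
# for Mdot, setting R(nu) = R_* (wind optically thin down to the surface).
def mdot_max(nu_GHz, T=1e6, v_inf=400.0, r_star_rsun=1.0):
    # Maximum mass loss rate in Msun/yr; R_* ~ 1 R_sun is our assumption.
    const = (6 * (nu_GHz / 10) ** (-2 / 3) * (T / 1e4) ** (-0.5)
               * (v_inf / 300) ** (-2 / 3))
    return 1e-10 * (r_star_rsun / const) ** 1.5

print(f"C-band  (6 GHz):  {mdot_max(6):.1e} Msun/yr")   # ~2e-10 (quoted: 1.3e-10)
print(f"Ku-band (14 GHz): {mdot_max(14):.1e} Msun/yr")  # ~4e-10 (quoted: 6.9e-10)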
Comparing these mass loss rates to those derived for a spherically symmetric and a conical wind, respectively, as given in Table <ref>, we see that the estimates are similar. We keep the conical wind mass loss results as upper limits for EK Dra and χ^1 Ori.
§.§ Rotational evolution
As magnetized stellar winds remove angular momentum from their host stars and therefore force stars to spin down <cit.>, causing a decrease in rotation rate and magnetic activity as they age <cit.>, it is essential to consider rotational evolution when determining mass loss rates of young, active stars. Several solar wind models <cit.> and rotational evolution models <cit.> have been developed. <cit.> developed a solar wind model to estimate the properties of stellar winds for low-mass main-sequence stars between masses of 0.4 M_⊙ and 1.1 M_⊙ at a range of distances from the star, based on stellar spin-down and angular momentum loss in a magnetized wind. They used 1D thermal pressure-driven hydrodynamic wind models using the Versatile Advection Code <cit.> and in-situ measurements of the solar wind. The stellar mass loss rate can then be calculated with Ṁ = Ṁ_⊙ R^2 Ω^1.33 M^-3.36, where all quantities are in solar units with the Carrington rotation rate of Ω_⊙ = 2.67 × 10^-6 rad s^-1. Graphically, this relation is shown in Fig. 10 in <cit.>. Applying this formula to our four objects, we are able to calculate their mass loss rates considering their rotational evolution, shown as red filled circles in Fig. <ref>. These values follow a Ṁ ∝ t^-0.75 relation <cit.> and are about two orders of magnitude lower than our upper limits displayed as arrows, but we emphasize that these results are indirect inferences from models.
§.§ Early mass loss of the Sun
How did the solar wind evolve over time? For a simple evaluation of the total early solar mass loss, power laws are placed through the sample of young, solar-type stars observed in this study. The upper limits of the mass loss rate for the conical wind (α_op = 0.6) of the two non-detections π^1 UMa and κ^1 Cet in Ku-band and the solar wind mass loss rate are used to define a piecewise power law through the sample. These relationships are shown in Fig. <ref>. Additionally, the mass loss rates from rotational evolution are marked as red circles. Although EK Dra and χ^1 Ori are marked in the plot, they are not used for the evaluation of the power laws, since these Ṁ are estimated based on the detected non-thermal radiation. The power laws are extrapolated from 0.3 Gyr down to the age of 0.1 Gyr. First, we apply our results to spherically symmetric winds with the corresponding power laws, which give an upper limit to the total solar mass loss of 2.02% after the integration from 100 Myr to 4.5 Gyr, resulting in an initial solar mass of 1.02 M_⊙. Conical winds (using α_op = 0.6) follow similar power laws, as shown in Fig. <ref>: Ṁ ∝ t^-0.46 from 0.1 to 0.65 Gyr, and Ṁ ∝ t^-2.66 from 0.65 to 4.5 Gyr. <cit.> also estimate a mass loss rate versus time, resulting in a power law index of -1.1, lying below our mass loss upper limits but above the model calculations for rotational evolution, shown as the blue solid line in Fig. <ref>. The relation of <cit.> as given in Eq. <ref> is shown as the red line in the figure. The dashed part of the line marks the age region where the relation fails for most young and active stars.
Furthermore, <cit.> find a weaker power law relation of mass loss rate versus age based on magnetohydrodynamics (MHD) simulations, resulting in Ṁ∝ t^-1.37.After the integration in time from 100 Myr to 4.5 Gyr, the total mass is in our case for the above given power laws at most 0.4 % (α_op = 0.6) higher than at present, depending on the model for the spectral index α_op, resulting in a solar mass of 1.004 M_⊙ only. Considering the theoretical model calculation for rotational evolution, the solar mass would be even lower at 1.0002 M_⊙. The boundaries necessary for solving the FYSP are at 3% to 7% total mass loss, required to keep liquid water on early Mars and to control and avoid the runaway greenhouse effect on Earth at early stages up to a few 100 Myr <cit.>. Our limits for the spherically symmetric and conical wind models are definitely below the 3% boundary and therefore imply that the faint young Sun problem cannot be solved by assuming increased wind mass loss rates and therefore a higher mass for the young Sun. § SUMMARY AND DISCUSSIONIn this study, we analyzed four young, solar-type stars on the main-sequence of different ages, which are part of the Sun in Time programme to study the decline of magnetic activity and wind mass loss in solar analogues. For the analysis, observations of the VLA at 2 cm and 6 cm wavelength and ALMA at 100 GHz are used, aiming to detect thermal radio emission, that is free-free radio bremsstrahlung, which is indicative of the existence of a stellar wind. The well-studied analogues of the Sun cover the young evolutionary stages on the main-sequence. Our sample of four stars results in two detections: EK Dra and χ^1 Ori; and two non-detections: π^1 UMa and κ^1 Cet. For both detections we can conclude that the radio emission is not thermal bremsstrahlung alone, but consists of additional coronal radio emission in the form of non-thermal, partly flaring emission. Indicators for that assumption are a negative slope of the radio spectrum, the presence of flares seen in the light curves, and the presence of circular polarization. Furthermore, we have argued that instead of χ^1 Ori, we have actually detected its M-dwarf companion. For the non-detections π^1 UMa and κ^1 Cet, we can estimate their maximum wind radio emission flux densities by placing the 3 σ rms value as upper limits. We have to clearly state that we cannot rule out other contributing but also undetected emission processes in these sources.The estimated radio emissions are used to derive upper limits to mass loss rates for the observed targets. Mass loss rates are important quantities for the study of the evolution of young stars including the Sun. They could possibly result in an explanation and solution for the problem of the famous FYSP. Furthermore, the evolution of mass loss rates of the young Sun leads to essential information for the formation and evolution of the atmospheres of the early Earth and other planetary atmospheres.We estimate mass loss rates for all targets for spherically symmetric and anisotropic collimated winds. We note that any additional neutral wind component would increase the mass loss rate, but such winds would not be detected by our methods. However, the solar wind is essentially fully ionized, so we assume the same for solar analogues. We applied three different model types (standard spherical, adiabatic spherical, and adiabatic collimated) for the mass loss rate calculation by changing the different parameter quantities. 
If we vary the velocity and the temperature, we see that the mass loss rate increases for a cooler and faster wind. The star EK Dra's mass loss is estimated to be Ṁ≤ 1.3 × 10^-10 M_⊙ yr^-1 for C-band and Ṁ≤ 6.9 × 10^-10 M_⊙ yr^-1 for Ku-band. The star χ^1 Ori shows a mass loss rate of Ṁ≤ 1.3 × 10^-10 M_⊙ yr^-1 for C-band at 6 GHz and Ṁ≤ 7.2 × 10^-10 M_⊙ yr^-1 for Ku-band at 14 GHz. Here, we assume that non-thermal emission from coronal radio flares contributes to the total mass loss following <cit.>, implying that these features originate close to the stellar surface propagating through an optically thin wind. Mass loss rates from the spherically symmetric wind and conical wind calculations are similar to these results.For π^1 UMa the mass loss rate is derived to be Ṁ≤ 1.9 ×10^-11 M_⊙ yr^-1 for C-band and Ṁ≤ 5 × 10^-12 M_⊙ yr^-1 for Ku-band for a jet-like wind with an opening angle of 40^∘. For κ^1 Cet the determined mass loss rate for a collimated wind is Ṁ≤ 5.1 × 10^-11 M_⊙ yr^-1 for C-band andṀ≤ 3.5 × 10^-12 M_⊙ yr^-1 for Ku-band.The resulting maximum mass loss rate of π^1 UMa in Ku-band (with α_op = 0.6) is about 250 times stronger than the present day solar mass loss rate of Ṁ = 2 × 10^-14 M_⊙ yr^-1. <cit.> studied the stellar wind and mass loss of π^1 UMa using Ly-α observations. With hydrodynamic models for the astrosphere to infer the stellar wind strength, the study of <cit.> results in a wind for π^1 UMa only half as strong as the solar wind. From their research, <cit.> concluded that the Sun and solar-like stars do not experience particularly strong coronal winds in their past. <cit.> studied coronal mass ejections in connection to stellar winds, where the authors found that coronal mass ejection (CME) induced mass loss rates can amount to several percent of the steady wind rate. Their estimation for a CME mass loss rate for π^1 UMa implies Ṁ∼ 3 × 10^-12 M_⊙ yr^-1, comparable with our upper limits. For κ^1 Cet the mass loss rate in the <cit.> study is similar. We see that the measurements of <cit.> of Ṁ = 0.5Ṁ_⊙ are much lower than the Ṁ = 150Ṁ_⊙ predictions of <cit.> and our observational upper limit. The rotation-wind model by <cit.> in fact also requires a wind mass loss rate significantly above the value suggested by <cit.> to explain the observed spin-down rate for solar-like stars in this age range. Finally, the maximum total solar mass for the young Sun was derived for three cases: spherically symmetric winds, conical jet flows, and rotational evolution models. The results are quite different: 1) the mass loss rate of spherically symmetric winds indicates a total maximum mass of 1.02 M_⊙, 2) conical winds lead to a total mass of 1.004 M_⊙ , and 3) the rotational evolution model suggests an initial solar mass of only 1.0002 M_⊙ at an age of about 100 Myr.§ CONCLUSIONIf the FYSP is to be solved with a larger initial solar mass, Earth and Mars climate constraints require the solar mass to be in the range of 1.03 - 1.07 M_⊙ near the zero-age main-sequence, requiring an enhanced early wind mass loss rate of order 10^-12-10^-10 M_⊙ yr^-1. Our results for mass loss rates derived with radio observations of solar analogues indicate an early solar mass of at most 1.02M_⊙ assuming spherically symmetric winds. This is not sufficient to solve the faint young Sun paradox. 
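The ≈0.4% conical-wind mass budget behind this conclusion can be re-derived in a few lines by integrating the piecewise power law anchored at the Ku-band upper limits quoted above for π^1 UMa (5 × 10^-12 M_⊙ yr^-1 at 0.3 Gyr) and κ^1 Cet (3.5 × 10^-12 M_⊙ yr^-1 at 0.65 Gyr). The sketch below is our own re-computation.

from scipy.integrate import quad

# Piecewise conical-wind power law through the Ku-band upper limits:
#   Mdot ~ t^-0.46 for 0.1-0.65 Gyr (anchored at pi1 UMa: 5e-12 at 0.3 Gyr)
#   Mdot ~ t^-2.66 for 0.65-4.5 Gyr (anchored at kappa1 Cet: 3.5e-12 at 0.65 Gyr)
def mdot(t_gyr):
    # Upper-limit mass loss rate in Msun/yr at age t_gyr.
    if t_gyr < 0.65:
        return 5e-12 * (t_gyr / 0.3) ** (-0.46)
    return 3.5e-12 * (t_gyr / 0.65) ** (-2.66)

# Integrate Mdot dt from 0.1 to 4.5 Gyr (1 Gyr = 1e9 yr).
lost, _ = quad(mdot, 0.1, 4.5, points=[0.65])
lost *= 1e9
print(f"Total mass lost: {lost:.1e} Msun (~{100 * lost:.1f}% of M_sun)")
# -> ~4e-3 Msun, i.e. ~0.4%, corresponding to an initial mass of ~1.004 Msun.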
It appears that other explanations, such as higher concentrations of greenhouse gases and aerosols <cit.>, a lower global albedo through less cloud coverage <cit.>, and/or a smaller continental land mass <cit.>, are required. We thank the referee, Jeffrey Linsky, for very helpful comments that improved the paper. This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France. B.F. and M.G. acknowledge the support of the FWF "Nationales Forschungsnetzwerk" project S116601-N16 "Pathways to Habitability: From Disks to Active Stars, Planets and Life" and the related FWF NFN subproject S116604-N16 "Radiation and Wind Evolution from the T Tauri Phase to ZAMS and Beyond". Financial support of this project by the University of Vienna is also acknowledged. This publication is supported by the Austrian Science Fund (FWF). The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. This paper makes use of the following ALMA data: ADS/JAO.ALMA#2011.0.01234.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), NSC and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ.
http://arxiv.org/abs/1702.08393v1
{ "authors": [ "Bibiana Fichtinger", "Manuel Güdel", "Robert L. Mutel", "Gregg Hallinan", "Eric Gaidos", "Stephen L. Skinner", "Christene Lynch", "Kenneth G. Gayley" ], "categories": [ "astro-ph.SR" ], "primary_category": "astro-ph.SR", "published": "20170227174104", "title": "Radio emission and mass loss rate limits of four young solar-type stars" }
Department of Mechanical Engineering, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213, USA venkvis@cmu.edu Department of Mechanical Engineering, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213, USA Department of Physics, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213, USA

We generalize the conditions for stable electrodeposition at isotropic solid-solid interfaces using a kinetic model which incorporates the effects of stresses and surface tension at the interface. We develop a stability diagram that shows two regimes of stability: the previously known pressure-driven mechanism and a new density-driven stability mechanism that is governed by the relative density of metal in the two phases. We show that inorganic solids and solid polymers generally do not lead to stable electrodeposition, and provide design guidelines for achieving stable electrodeposition.

Stability of Electrodeposition at Solid-Solid Interfaces and Implications for Metal Anodes
Venkatasubramanian Viswanathan
December 30, 2023
==========================================================================================

Electrodeposition, a process of great practical importance in thin films and metallurgy, has served as a platform for understanding nonequilibrium growth processes and studying morphological instabilities <cit.>. Theoretical and experimental investigations have focused on developing a comprehensive understanding of the origin of morphological instability <cit.>, and a rich variety of morphologies including fractal structures have been observed through control of the electrode potential and metal ion concentration <cit.>. The study of dendritic growth during electrodeposition has gained renewed interest in light of its importance in the safety issues associated with dendrite-induced short circuits in current Li-ion batteries <cit.>. Further, controlling the growth of dendrites during electrodeposition could enable the use of metal anodes, especially those based on Li, which could lead to significantly higher energy density batteries <cit.>. Of many possible approaches to control the growth of dendrites, suppression through the use of a solid electrolyte has emerged as the most promising route <cit.>. When the liquid electrolyte in contact with the metal electrode is replaced by a solid phase, creating a solid-solid system, the interface properties alter the local kinetics of electrodeposition <cit.>. Monroe and Newman analyzed the interfacial stability of the Li/solid polymer electrolyte system within linear elasticity theory and showed using a kinetic model that solid polymer electrolytes with a sufficient modulus are capable of suppressing dendrite growth <cit.>. However, the propagation of the interface is often accompanied by a change in density of the metal, and thus density is an important order parameter that should affect the stability of electrodeposition at the interface. In the theory of roughening of solid-solid interfaces studied in geological systems, it has been shown that the interfacial stability or roughening condition depends on the density change at the interface <cit.>. Furthermore, the stability is determined by a subtle interplay between the density, modulus and Poisson's ratio. In this work, we derive general stability criteria for electrodeposition at solid-solid interfaces using linear stability analysis, assuming that the solids are linearly elastic isotropic materials.
Based on the stability criteria, we show that there is a new stabilizing mechanism that is determined by the density change between the two solids. Our analysis shows that it is possible to use a soft solid electrolyte provided the partial molar density of the metal is greater in the solid electrolyte as compared to the metal anode. This mechanism opens up new ways to suppress dendrite growth at Li electrode/solid electrolyte interfaces. We construct a general stability plot with two parameters, the shear modulus ratio and the molar volume ratio, and show that two distinct regions of stable electrodeposition are possible. We find that typical inorganic solid electrolytes have a higher shear modulus, but a lower molar volume, than required for stable electrodeposition, leading to unstable electrodeposition. On the other hand, solid polymer electrolytes have a higher molar volume but a lower shear modulus than the requirement, leading once again to unstable electrodeposition. Our analysis suggests that a solid electrolyte with a combination of high (low) Li molar volume and high (low) shear modulus is required for stable electrodeposition. We study the system of a metal electrode in contact with a solid containing mobile metal ions (solid electrolyte), as shown in Fig. <ref>. This situation is common in electroplating and during charging at metal anodes in batteries. In this process, M^z+ ions from the electrolyte are reduced and deposited at the metal electrode as metal atoms according to the reaction: M^z+ + ze^- ⇌ M. Based on the operating conditions, this process could lead to stable electrodeposition or morphological instabilities due to uneven deposition of metal ions at the electrode surface. To understand the non-equilibrium growth process and its stability, we need to determine the rate of deposition at the interface. We are interested in the initiation of small perturbations at the interface, and we will ignore grain boundaries in the solid electrolyte through which these small perturbations may propagate after initiation [The initiation regime has been shown to be most critical for dendrite suppression since dendrites cannot be suppressed if they reach the propagation regime. See Ref. Monroe2004Effect and references cited in its introduction.]. Experimental studies have also indicated that solid electrolytes need to be prepared without grain boundaries or interconnected pores using dense electrolyte preparation methods like pressure-assisted sintering in order to function in a battery <cit.>. Nevertheless, we later provide means by which the effect of defects like grain boundaries may be included in the model. The evolution of the metal surface z = f(x,t) can be related to the current density at the interface:

(∂f(x,t)/∂t) e_z·e_n = -iV_M/(zF),

where e_n is the unit normal pointing from the metal towards the solid electrolyte, V_M is the molar volume of the metal, F is the Faraday constant, and i is the current density normal to the interface. The current density, i, can be related to the surface overpotential η_s through the Butler-Volmer relationship:

i/i_0 = exp(α_a zFη_s/RT) - exp(-α_c zFη_s/RT).

Here α_a and α_c are the charge transfer coefficients associated with the anodic and cathodic reactions, and i_0 is the exchange current density. The Butler-Volmer relationship is known to describe electrodeposition processes well for small surface overpotentials and moderate currents <cit.>. In our analysis, we consider a constant metal ion concentration at the interface, which is a good approximation for solid electrolytes.
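The two relations above already fix the local growth kinematics. The following sketch is our illustration, with placeholder parameter values (the exchange current density, transfer coefficients, and overpotential are not specified in the text); it evaluates the Butler-Volmer current and the resulting normal interface velocity for Li (z = 1, V_M ≈ 1.3 × 10^-5 m^3/mol).

import math

F = 96485.0        # C/mol, Faraday constant
R = 8.314          # J/(mol K)
T = 298.0          # K

def bv_current(eta_s, i0, alpha_a=0.5, alpha_c=0.5, z=1):
    # Butler-Volmer current density (A/m^2) at surface overpotential eta_s (V).
    return i0 * (math.exp(alpha_a * z * F * eta_s / (R * T))
                 - math.exp(-alpha_c * z * F * eta_s / (R * T)))

def interface_velocity(i, V_M, z=1):
    # Normal growth velocity df/dt (m/s) from i (A/m^2); deposition for i < 0.
    return -i * V_M / (z * F)

# Illustrative numbers: i0 = 10 A/m^2, eta_s = -50 mV (cathodic, deposition).
i = bv_current(-0.05, i0=10.0)
print(f"i = {i:.1f} A/m^2, v_n = {interface_velocity(i, 1.3e-5):.2e} m/s")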
A large deviation from the average concentration of metal ion will cause local violation of electroneutrality since the anions are generally fixed, resulting in a large energy penalty <cit.>. Under this assumption, a constant driving force at the interface will result in uniform surface development without irregularities. However, the local interface geometry affects the driving force for electrodeposition and thereby the kinetics of metal deposition. Hence it is essential to describe a kinetic relationship that takes into account the local interface geometry. Locally, the electrochemical potential changes due to surface tension and interfacial stresses in a solid. Earlier models used surface tension as the primary stabilizing mechanism against morphological instability. These include the notable works of Mullins and Sekerka on solidification <cit.> and Barton and Bockris on electrochemical systems <cit.>. However, the interfacial stress can have a major influence on the growth morphology in solids <cit.>. More recently, the effect of mechanical stresses has been incorporated into electrochemical problems <cit.>. Here, we will follow the Monroe-Newman approach as it explicitly includes the Butler-Volmer kinetic relationship at the interface. The new kinetic relationship at a deformed interface within this model can be written as:

i_deformed/i_undeformed = exp[(1-α_a)Δμ_e^-/(RT)],

where i_undeformed is the current density at an undeformed interface given by Eq. (<ref>) and Δμ_e^- is the change in electrochemical potential of the electron at a deformed interface. It depends on the surface tension and interfacial stresses as <cit.>:

Δμ_e^- = -((V_M + V_M^z+)/(2z)) (-γκ + e_n·[(τ_d^e - τ_d^s) e_n]) + ((V_M - V_M^z+)/(2z)) (Δp^e + Δp^s).

Here, V_M^z+ is the molar volume of M^z+ in the solid electrolyte, γ is the surface tension at the interface, κ is the mean curvature at the interface, τ_d^e and τ_d^s are the deviatoric stresses at the electrode and electrolyte sides of the interface, and Δp^e and Δp^s are the gage pressures at the electrode and electrolyte sides of the interface. Eq. (<ref>) is obtained by calculating the electrochemical potential change dμ = (∂μ/∂p) dp and using the equilibrium of Eq. (<ref>). Given the geometry of the interface and the material response to the resulting strains, it is possible to calculate the local kinetic term and obtain the instantaneous surface growth rate from Eq. (<ref>). A convenient and sufficiently general choice of the initial geometry to study morphological stability is a sinusoidal perturbation of the interface, since the equations of motion can be solved analytically in this case <cit.> and any electrode surface geometry can be expanded as a Fourier series. Consistent with a linear stability analysis, the interface at z = 0 is perturbed with a perpendicular displacement (i.e. along e_z) of the form u_z(x,z=0) = Re{Ae^ikx} with A ≪ 1. Unlike the Asaro-Tiller formalism <cit.>, the electrochemical potential change due to strain energy density is of second order and can be neglected in our linear stability analysis. The displacements are assumed to vanish far from the interface, i.e. lim_z→±∞ u(x,z) = 0. The traction boundary condition is a tangential force balance at the interface: e_t·[(τ_d^e - τ_d^s)e_n] = 0. Using these boundary conditions, the bulk force balance div σ = 0, and constitutive laws for a linearly elastic isotropic material with shear modulus G and Poisson's ratio ν, Δμ_e^- can be computed for every point on the interface.
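A direct transcription of these two relations into code makes the sign conventions explicit. The sketch below is our illustration; all stress inputs are arbitrary example numbers, and the molar volumes are assumed values for a Li-like system with v > 1.

import math

R, T = 8.314, 298.0   # J/(mol K), K

def delta_mu(V_M, V_Mz, z, gamma, kappa, dev_jump_nn, dp_e, dp_s):
    # Change in electron electrochemical potential at a deformed interface
    # (J/mol). dev_jump_nn = e_n . [(tau_d^e - tau_d^s) e_n] is the
    # normal-normal deviatoric stress jump; dp_e, dp_s are gage pressures (Pa).
    return (-(V_M + V_Mz) / (2 * z) * (-gamma * kappa + dev_jump_nn)
            + (V_M - V_Mz) / (2 * z) * (dp_e + dp_s))

def current_ratio(dmu, alpha_a=0.5):
    # i_deformed / i_undeformed for the Monroe-Newman kinetic relation.
    return math.exp((1 - alpha_a) * dmu / (R * T))

# Example at a peak: Li (z=1), V_M = 1.3e-5 m^3/mol, V_Mz+ = 1.8e-5 (v > 1),
# gamma = 0.5 J/m^2, kappa = 1e6 1/m, 1 MPa deviatoric jump, tensile electrode
# side (dp_e < 0) and compressed electrolyte side (dp_s > 0).
dmu = delta_mu(1.3e-5, 1.8e-5, 1, 0.5, 1e6, 1e6, -1e6, 0.3e6)
print(f"dmu = {dmu:.2f} J/mol, i_def/i_undef = {current_ratio(dmu):.5f}")
# Negative dmu at a peak slows deposition there, i.e. it is stabilizing.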
When the values of stresses and surface tension are plugged into Eq. (<ref>), we obtain Δμ_e^- = χ Re{Ae^ikx} with χ=χ(G_e,G_s,ν_e,ν_s,γ,k,z,V_M,V_M^z+) [See Supplemental Material for the exact expression for χ and the critical shear modulus ratio, and calculations of the partial molar volume ratio.]. Stable electrodeposition will occur when the current density is out of phase with the perturbation. Equivalently, Δμ_e^- should be out of phase with the perturbation (since 1-α_a>0 in Eq. (<ref>)), i.e. χ<0, in which case the deposition will be faster at the valleys (Acos(kx)<0) than at the peaks (Acos(kx)>0), resulting in an even surface growth. Since the sign of χ determines the stability of electrodeposition, hereafter we refer to χ as the stability parameter. This result is similar to that for stability of a material surface against interface migration encountered in fabrication of epitaxial thin films <cit.>. Eq. (<ref>) shows that Δμ_e^-, and hence χ, consists of contributions from surface tension, hydrostatic and deviatoric stresses. The stabilizing or destabilizing nature of the hydrostatic term depends on the sign of V_M^z+-V_M. Therefore, the volume ratio v=V_M^z+/V_M is an important order parameter of the electrodeposition problem. A hydrostatically stressed interface will inhibit growth of dendrites when v>1, such as in polymers and viscoelastic liquids with high elastic response and considerable ion-solvent interactions <cit.>. On the other hand, the hydrostatic stress term will be destabilizing for v<1, and this is generally the case for inorganic solid electrolytes as we will show later. Fig. <ref> shows hydrostatic and deviatoric contributions to χ for (a) v>1 and (b) v<1 as a function of the ratio G_s/G_e with Li metal as the electrode. In (a), the hydrostatic contribution is initially positive (destabilizing) and monotonically decreasing with G_s/G_e, which results in stability when G_s/G_e≳ 2.2, where this term starts to dominate the stability parameter. The scenario reverses for (b), where the hydrostatic stress term is initially negative (stabilizing) and monotonically increasing, resulting in stability for G_s/G_e≲ 0.7. It is worth noting that the deviatoric stress term is always destabilizing. The surface tension term is very small (<0.2 kJ/mol·nm) at the wave numbers of perturbation of interest and has been ignored in further analysis. However, techniques like nanostructuring the interface <cit.> might enhance its contribution to the stability parameter. The results from Fig. <ref> show that for v>1, there exists a critical shear modulus ratio beyond which the electrodeposition is stable (χ<0). This was previously known from the work of Monroe and Newman <cit.> and later observed experimentally <cit.>. For v<1, a previously unexplored regime in the context of electrodeposition, stability is achieved below the critical shear modulus ratio. The existence of density-driven stability may be understood in terms of the dependence of the stability parameter χ on the hydrostatic term alone, since the deviatoric term is always destabilizing. χ characterizes the electrochemical potential change of the electron at a peak in the interface (Δμ_e^-=Aχ when cos(kx)=1). For v<1, the hydrostatic term in Eq. (<ref>) is stabilizing when Δ p^e + Δ p^s is negative. Due to elongation of the electrode at a peak, there will be tensile stress generated at the electrode side of the interface and compressive stress at the electrolyte side. Hence Δ p^e<0 and Δ p^s>0.
Since G is a measure of the stress response to strain, when G_s≪ G_e we have |Δ p^s|≪ |Δ p^e|, which makes this term stabilizing at low G_s/G_e. A similar argument explains the stable region on the top right. Thus, the stable regimes at the bottom left and top right in Fig. <ref> are guaranteed to exist. Based on the obtained criteria, we construct a stability diagram as shown in Fig. <ref> with the shear modulus ratio and the molar volume ratio as the critical parameters. The electrode material used for generating the stability diagram is Li metal. The stability diagram has four regions, out of which two are stable and two are unstable. The two stable regions lie on the top right and bottom left of the stability diagram. For v>1, a solid electrolyte with shear modulus larger than the critical shear modulus is required for stable electrodeposition. In fact, the required shear modulus increases sharply as the molar volume ratio approaches unity, reducing the stability window. The second region of stability emerges for v<1, which shows that it is possible to stabilize electrodeposition using a soft solid electrolyte provided Li in the solid electrolyte is more densely packed than Li in Li metal. We therefore term this stability mechanism density-driven. Beyond v=1, stability requires the hydrostatic part of stress to dominate the stability parameter and hence the stability in this region is called pressure-driven. This stability diagram qualitatively resembles the stability diagram for stress-driven phase transitions at solid-solid interfaces studied by Angheluta et al. <cit.>. In the case of a stress-driven phase transition, the interplay between the work term and the elastic energy term determines the growth and stability of the interface. Analogously, in electrodeposition it is the hydrostatic stress term competing with the deviatoric stress term. This analysis raises the important question of where real solid electrolytes lie in this stability diagram. This depends critically on the value of v in solid electrolytes. Marcus and Hefter have tabulated the values of partial molar volumes of cations in a range of solvents <cit.>. Following their work, for liquid and polymer electrolytes, the partial molar volume of the ion can be written as V=V_int+V_el+V_cov+V_str, where the four terms correspond to the intrinsic volume, and changes in the volume due to electrostriction, short-range interactions, and the size, shape and structure of solvent molecules. In crystalline solid electrolytes, the last three terms vanish and the partial molar volume is just the intrinsic volume of the ion in the crystal. We used the values of ionic radii tabulated by Shannon <cit.> and Marcus et al. <cit.> (details in Supplemental Material <cit.>). The values of the unit cell volume of solid electrolytes were obtained from <cit.> and the shear modulus from previous work on elastic properties of solid electrolytes <cit.> whenever available, or from the database <cit.>. As shown in Fig. <ref>, we find that typical inorganic solid electrolytes have a molar volume ratio v<1 and possess a shear modulus higher than the critical shear modulus below which electrodeposition is stable. As a result, Li-solid electrolyte interfaces based on these materials will result in unstable electrodeposition. Compounds in which Li has an oxidation state of zero, like alloys of Li with Sn (not shown in Fig. <ref>), generally have a molar volume ratio closer to 1.
Solid polymer electrolytes generally have v>1 but their shear moduli are generally lower than the critical value, resulting in unstable electrodeposition. Our analysis identifies a fundamental trade-off that needs to be broken if stable electrodeposition is expected for solid polymer or inorganic solid electrolytes. We note that the properties at the interface might change due to chemical reactions occurring at the reductive potentials of the anode. For example, different Li alloys might be formed at the interface depending on the composition of the solid electrolyte. In such cases, the effective properties at the interface must be used to determine stability. Possible schemes for stable electrodeposition at metal-solid electrolyte interfaces rely on control of the shear modulus of the solid electrolyte or the partial molar volume. An approach could be to alter the partial molar volume of Li in low shear modulus materials by tuning ion-solvent interactions so that they fall in the bottom left stable region on the stability diagram. Altering the shear modulus of the material is a much more difficult task requiring the use of strengthening mechanisms. Molten salts and ionic liquids with an elastic mechanical response that corresponds to low shear modulus could lie in the density-driven stability region. Finally, although the effect of defects like grain boundaries has not been included here, their effect may be included by determining the change in electrochemical potential of the components in Eq. (<ref>). This will add a new term to Δμ_e^- in Eq. (<ref>). In conclusion, we have explored the role of mechanics at solid-solid interfaces in determining electrodeposition stability. We show that two separate mechanisms of electrodeposition stability are possible: pressure-driven stability at high molar volume ratio and density-driven at lower molar volume ratio. These appear as two distinct regions in the stability diagram. Using these insights, we analyze candidate Li solid electrolytes, and show that materials re-engineering of the interface is required for stable electrodeposition. Z. A. acknowledges support from the Advanced Research Projects Agency-Energy Integration and Optimization of Novel Ion Conducting Solids (IONICS) program under Grant No. . Z. A. and V. V. gratefully acknowledge support from the U.S. Department of Energy, Energy Efficiency and Renewable Energy Vehicle Technologies Office under Award No. .

§ APPENDIX

The stability parameter χ is obtained from the equation Δμ_e^- = χ Re{Ae^ikx}.
Since Δμ_e^- consists of surface tension, hydrostatic and deviatoric stress terms, χ can be broken down into three terms:

χ = -γ k^2 (V_M + V_M^z+)/(2z)   [surface tension]
  + 2 G_e G_s k (V_M + V_M^z+)(ν_e(4ν_s-3) - 3ν_s + 2) / [z (G_e(ν_e-1)(4ν_s-3) + G_s(4ν_e-3)(ν_s-1))]   [deviatoric stress]
  + k (V_M - V_M^z+)(G_e^2(4ν_s-3) + G_s^2(3-4ν_e)) / [2z (G_e(ν_e-1)(4ν_s-3) + G_s(4ν_e-3)(ν_s-1))]   [hydrostatic stress]

The critical shear modulus ratio G_s/G_e above which (for v>1) or below which (for v<1) the electrodeposition is stable can be calculated by setting the stability parameter χ to zero, and is given by:

G_s/G_e = (B+√D) / [(v-1)(4ν_e-3)],  if v ≤ 1,
G_s/G_e = (B-√D) / [(v-1)(4ν_e-3)],  if v > 1,

where B = -4 - 4v + 6ν_e + 6vν_e + 6ν_s + 6vν_s - 8ν_eν_s - 8vν_eν_s and D = (v-1)^2(4ν_e-3)(4ν_s-3) + 4(v+1)^2(2 - 3ν_s + ν_e(4ν_s-3))^2.

Supplemental Material: Stability of Electrodeposition at Solid-Solid Interfaces and Implications for Metal Anodes

§ STABILITY CRITERIA

The stability parameter χ is obtained from the equation Δμ_e^- = χ Re{Ae^ikx}, and its decomposition into surface tension, deviatoric and hydrostatic stress terms, together with the critical shear modulus ratio, is given by the expressions reproduced in the Appendix above. As mentioned in the text, the surface tension term was ignored due to its small contribution at the wave numbers of interest.

§ CALCULATION OF PARTIAL MOLAR VOLUME RATIO

The intrinsic volume of the ion in a binary solid electrolyte M_pX_q can be said to follow the additivity of volumes <cit.>: V_total = p V_M^q+ + q V_X^p-. Further, we assumed the ratio of volumes occupied by each ion to follow V_M^q+/V_X^p- = r^3_M^q+/r^3_X^p-, where r is the ionic radius of the respective ion, tabulated by Shannon <cit.> for monoatomic ions and by Marcus et al. <cit.> for multiatomic species like PO_4^3-. This equation can be extended to alloys with the ionic radius replaced by the atomic radius.
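The closed-form expressions above can be evaluated directly. The following sketch (function names ours) computes the three contributions to χ and the critical shear modulus ratio; note the critical-ratio formula is singular at v = 1.

```python
import numpy as np

def chi_terms(k, z, G_e, G_s, nu_e, nu_s, gamma, V_M, V_Mz):
    """Surface-tension, deviatoric and hydrostatic contributions to the
    stability parameter chi, transcribed from the appendix expressions.
    chi = sum of the three terms; chi < 0 means stable electrodeposition."""
    denom = G_e * (nu_e - 1) * (4 * nu_s - 3) + G_s * (4 * nu_e - 3) * (nu_s - 1)
    surf = -gamma * k**2 * (V_M + V_Mz) / (2 * z)
    dev = (2 * G_e * G_s * k * (V_M + V_Mz)
           * (nu_e * (4 * nu_s - 3) - 3 * nu_s + 2)) / (z * denom)
    hyd = (k * (V_M - V_Mz)
           * (G_e**2 * (4 * nu_s - 3) + G_s**2 * (3 - 4 * nu_e))) / (2 * z * denom)
    return surf, dev, hyd

def critical_ratio(v, nu_e, nu_s):
    """Critical G_s/G_e at which chi = 0 (surface tension neglected),
    per the piecewise formula above. Singular at v = 1."""
    B = (-4 - 4*v + 6*nu_e + 6*v*nu_e + 6*nu_s + 6*v*nu_s
         - 8*nu_e*nu_s - 8*v*nu_e*nu_s)
    D = ((v - 1)**2 * (4*nu_e - 3) * (4*nu_s - 3)
         + 4 * (v + 1)**2 * (2 - 3*nu_s + nu_e * (4*nu_s - 3))**2)
    sign = 1.0 if v <= 1 else -1.0
    return (B + sign * np.sqrt(D)) / ((v - 1) * (4 * nu_e - 3))
```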
http://arxiv.org/abs/1702.08406v2
{ "authors": [ "Zeeshan Ahmad", "Venkatasubramanian Viswanathan" ], "categories": [ "cond-mat.mtrl-sci", "physics.chem-ph" ], "primary_category": "cond-mat.mtrl-sci", "published": "20170227180832", "title": "Stability of Electrodeposition at Solid-Solid Interfaces and Implications for Metal Anodes" }
Sparsity Constrained Split Feasibility for Dose-Volume Constraints in Inverse Planning of Intensity-Modulated Photon or Proton Therapy

Scott Penfold^1,2†, Rafał Zalas^3†, Margherita Casiraghi^4, Mark Brooke^2, Yair Censor^5, Reinhard Schulte^6
(† The contributions of the first two authors to this work are of equal shares.)
^1Department of Medical Physics, Royal Adelaide Hospital, Adelaide, SA 5000, Australia
^2Department of Physics, University of Adelaide, Adelaide, SA 5005, Australia
^3Department of Mathematics, The Technion, Technion City, Haifa 32000, Israel
^4Paul Scherrer Institute, Center for Proton Therapy (CPT), Switzerland
^5Department of Mathematics, University of Haifa, Mt. Carmel, Haifa 3498838, Israel
^6Department of Basic Sciences, School of Medicine, Loma Linda University, Loma Linda, CA 92354, USA
(scott.penfold@sa.gov.au)
July 10, 2016; Revised January 10, 2017
===============================================================================

A split feasibility formulation for the inverse problem of intensity-modulated radiation therapy (IMRT) treatment planning with dose-volume constraints (DVCs) included in the planning algorithm is presented. It involves a new type of sparsity constraint that enables the inclusion of a percentage-violation constraint in the model problem and its handling by continuous (as opposed to integer) methods. We propose an iterative algorithmic framework for solving such a problem by applying the feasibility-seeking CQ-algorithm of Byrne combined with the automatic relaxation method (ARM) that uses cyclic projections. Detailed implementation instructions are furnished. Functionality of the algorithm was demonstrated through the creation of an intensity-modulated proton therapy plan for a simple 2D C-shaped geometry and also for a realistic base-of-skull chordoma treatment site. Monte Carlo simulations of proton pencil beams of varying energy were conducted to obtain dose distributions for the 2D test case. A research release of the Pinnacle^3 proton treatment planning system was used to extract pencil beam doses for a clinical base-of-skull chordoma case. In both cases the beamlet doses were calculated to satisfy dose-volume constraints according to our new algorithm. Examination of the dose-volume histograms following inverse planning with our algorithm demonstrated that it performed as intended. The application of our proposed algorithm to dose-volume constraint inverse planning was successfully demonstrated. Comparison with optimized dose distributions from the research release of the Pinnacle^3 treatment planning system showed that the algorithm could achieve equivalent or superior results.
Keywords: dose-volume constraints, intensity-modulated radiation therapy, sparsity constraints, split feasibility, the CQ-algorithm, inverse planning, automatic relaxation method.

§ INTRODUCTION

Intensity-modulated radiation therapy (IMRT) with photons and intensity-modulated proton therapy (IMPT) are rapidly evolving techniques for planning and delivering radiation therapy to solid tumors. For many tumor sites, IMRT with photons has superseded standard radiation therapy (RT) techniques and is becoming the new standard in RT delivery <cit.>. At existing proton centers, IMPT in combination with active pencil beam scanning is increasingly being used, replacing older passively scattered and collimated proton therapy techniques as a means for more accurately delivering high doses to the target volume and sparing organs at risk (OARs), as indicated by dosimetric studies <cit.>. Instead of using a single upper dose bound for OARs and a single lower dose bound for the target volumes, it has become common practice in clinical trials and off-trial photon IMRT treatments to specify one or more dose-volume constraints (DVCs), allowing a certain percentage of the volume to violate a given bound to a certain extent. These additional DVCs, whether single or multiple, rely on accumulated clinical experience with conformal RT techniques. For example, Gulliford et al. <cit.> performed a detailed dose-volume analysis of the incidence of clinically relevant late rectal toxicities in patients treated with high-dose photon IMRT for prostate cancer and found that the incidence of moderate-to-severe rectal toxicity for any of six late-toxicity endpoints decreased incrementally for patients whose treatment plans met increasing numbers of DVCs from the set of V_30≤80%, V_40≤65%, V_50≤55%, V_60≤40%, V_65≤30%, V_70≤15%, and V_75≤3%. Here, V_X ≤ Y corresponds to a dose-volume constraint requiring that no more than Y% of the volume receive more than X Gy. These and similar DVCs for OARs have found their way into clinical trial protocols and practice guidelines over the years, see, e.g., <cit.>. Most modern inverse planning algorithms attempt to incorporate DVCs by defining sub-volumes with different dose objectives applied to each sub-volume. The multiple objectives are then combined into a single cost function to be minimized. Minimization in RT inverse planning with DVCs has been performed with a number of different approaches. Spirou and Chui <cit.> used gradient descent to seek a vector of ray intensities that minimized a cost function representing the sum of all dose constraint violations. However, incorporating DVCs directly into the cost function of the minimization process often renders the objective function non-convex and non-differentiable. This has the disadvantage of potentially resulting in local minima and thereby sub-optimal treatment plans. Cho et al. <cit.> used a similar concept but applied simulated annealing for minimization. Simulated annealing is less susceptible to non-convexity and non-differentiability but is less computationally efficient than gradient descent. Romeijn et al. <cit.> adopted a linear programming approach to handle what they called partial-volume constraints. However, to make the problem tractable for computation, they replaced the familiar concept of DVCs by a closely related, but not identical, notion of conditional value-at-risk (C-VaR).
Zhang and Merritt <cit.> proposed a new least-squares model to handle DVCs while retaining differentiability, at the expense of having to deal with a nested double minimization problem. Therefore, an inverse planning algorithm for DVCs that is computationally efficient, robust to non-convexity and non-differentiability, and yet does not simplify the problem statement has yet to be developed. In the current work, feasibility-seeking methods, as opposed to minimization algorithms, are applied to RT inverse planning with DVCs. Within the proposed feasibility-seeking approach, issues of convexity and differentiability of the cost function do not arise at all because no cost function is used. While the DVCs do require a constraint that is not convex (the sparsity-norm constraint set), we are able to incorporate it into the projection method that we use to solve the feasibility-seeking problem. This is possible because we have devised a way to calculate the projection onto this set in spite of it being non-convex. Another general advantage of the feasibility-seeking approach has to do with the availability of a class of highly efficacious feasibility-seeking projection methods. These methods refer to iterative algorithms that use projections onto sets while relying on the general principle that when a family of, usually closed and convex, constraint sets is present, then projections onto the individual sets are easier to perform than projections onto other sets (intersections, image sets under some transformation, etc.) that are derived from the individual sets. Furthermore, projection methods may have algorithmic structures that are particularly suited for parallel computing, such as block-iterative projections (BIP) or string-averaging projections (SAP). They also demonstrate desirable convergence properties and good initial behavior patterns. See, for example, the 1996 review <cit.>, the recent annotated bibliography of books and reviews <cit.> and its references, and <cit.>. We recently showed that IMPT inverse planning is possible with a fully-discretized, feasibility-seeking approach by iteratively projecting solution vectors in the beam intensity vector space onto half-spaces representing dose constraints in target and OAR volumes <cit.>. In that preliminary work, we demonstrated that with these iterative projection algorithms, feasible solutions can be found that meet target and normal tissue dose bounds, in particular if the constraints are not too challenging and/or the treatment modality is very conformal (e.g., by using protons). In this paper, we use the fully-discretized feasibility-seeking approach applicable to either photon IMRT or IMPT inverse planning, which leads to a mathematical feasibility problem. The upper and lower bounds on the doses to the various structures define the linear inequality constraints of the feasibility problem, which is solved by feasibility-seeking projection methods without attempting to minimize any cost function. Within this setup, we propose and investigate a novel method for allowing the feasibility-seeking inverse planning algorithm to automatically account for DVCs. In the next section, we rigorously define the notion of a percentage-violation constraint (PVC), which does not seem to have been used in the mathematical optimization community until now. A PVC injects integers into the problem, which makes it difficult to solve.
To circumvent this difficulty, we reformulate the PVC with the aid of a sparsity norm that counts the number of non-zero entries in a vector. This enables us to replace the original feasibility problem with PVC by another feasibility problem that includes non-convex constraints for the sparsity norm. For the resulting feasibility problem with this non-convex sparsity norm induced constraint we develop a new iterative projection algorithm which is a combination of the CQ-algorithm <cit.> and the automatic relaxation method (ARM) <cit.>.

§ METHODS

§.§ Linear feasibility with percentage-violation constraints

Given p closed convex subsets Q_1,Q_2,⋯,Q_p⊆ R^n of the n-dimensional Euclidean space R^n, expressed as level sets Q_j={x∈ R^n | f_j(x)≤ v_j}, for all j∈ J:={1,2,…,p}, where f_j:R^n→ R are convex functions and v_j are some given real numbers, the convex feasibility problem (CFP) is to find a point x^∗∈ Q:=∩_j∈ J Q_j. If Q=∅, where ∅ is the empty set, then the CFP is said to be inconsistent. Denoting the inner product of two vectors in R^n by ⟨ a,b⟩:=∑_i=1^n a_i b_i, we consider the following linear feasibility problem (LFP) with percentage-violation constraint (PVC). Linear Feasibility with Percentage-Violation Constraint (PVC). Given a CFP as in (<ref>) with f_j(x)=⟨ a^j,x⟩ and two real numbers 0≤α≤1 and 0<β<1, find a vector x^∗ that solves the system ⟨ a^j,x⟩≤ v_j, for all j∈ J, subject to the additional PVC constraint that: in up to a fraction α (i.e., 100α%) of the total number of inequalities in (<ref>), the right-hand side bounds v_j may be potentially violated by up to a fraction β (i.e., 100β%) of their values. A PVC is an integer constraint by its nature. It changes the CFP to which it is attached from being a continuous feasibility problem into becoming a mixed integer feasibility problem. In the field of intensity-modulated radiation therapy (IMRT) treatment planning, dose-volume constraints (DVCs) are traditionally used to evaluate treatment plans. DVCs are percentage-violation constraints, but without properly incorporating them into the algorithm itself it is not possible to a priori guarantee that a solution will indeed obey them. In this paper we propose a novel way to incorporate PVCs via the notion of a sparsity norm and derive a tractable model and algorithmic approach, along with detailed implementation instructions for using it, to solve DVC feasibility problems for inverse planning in IMRT.

§.§ IMRT problem statement

We consider the following linear interval feasibility problem (LIFP), which is the basic model for the inverse problem in the fully-discretized approach to IMRT treatment planning <cit.>: Linear Interval Feasibility: the basic model for the inverse problem in the fully-discretized approach to IMRT treatment planning. Find x^∗∈ R^n for which the following hold: 0≤ A_1x≤ b^1, b^3≥ A_2x≥ b^2, 0≤ A_3x≤ b^4, x≥0, where A_1∈ R_+^m_1× n, A_2∈ R_+^m_2× n, A_3∈ R_+^m_3× n are given matrices and b^1∈ R_+^m_1, b^2,b^3∈ R_+^m_2, b^4∈ R_+^m_3 are given vectors. (The subscript + denotes the nonnegative orthant.) In IMRT, the row inequalities of (<ref>) represent voxels of an organ at risk (OAR) whose permitted absorbed doses should not exceed b_t^1 for each voxel t in this structure. The row inequalities of (<ref>) represent voxels of another OAR whose permitted absorbed doses should not exceed b_t^4 for each voxel t in this structure.
The row inequalities of (<ref>) represent voxels of a planning target volume (PTV) whose permitted absorbed doses should be above b_t^2, but should not exceed b_t^3, for each voxel t in this structure. Our tool to "translate" the integer constraint (<ref>) into a "continuous" one is the notion of the sparsity norm, called elsewhere the zero-norm, of a vector x∈ R^n, which counts the number of nonzero entries of x, that is, ‖ x‖_0:=|{x_i | x_i≠0}|, where |·| denotes here the cardinality, i.e., the number of elements of a set. This notion has been recently used for various purposes in compressed sensing, machine learning and more. The "lower + operation" on a vector x∈ R^n means that, for all i=1,2,…,n, (x_+)_i:=max(0,x_i), i.e., (x_+)_i = x_i if x_i>0 and (x_+)_i = 0 if x_i≤0. Obviously, x_+ is always a component-wise nonnegative vector. Hence, ‖ x_+‖_0 counts the number of positive entries of x and is defined by ‖ x_+‖_0:=|{x_i | x_i>0}|. To incorporate a DVC related to (<ref>) into the LIFP of Problem <ref> we formulate another feasibility problem as follows. Linear Interval Feasibility with DVC for the inverse problem in the fully-discretized approach to IMRT treatment planning. Find x^∗∈ R^n for which 0≤ A_1x≤(1+β)b^1, b^3≥ A_2x≥ b^2, 0≤ A_3x≤ b^4, x≥0, ‖(A_1x-b^1)_+‖_0≤α m_1, where A_1, A_2, A_3, b^1, b^2, b^3 and b^4 are as in (<ref>)–(<ref>), and β>0 and α∈[0,1] are given real numbers. In this problem, (<ref>) allows the doses to voxels of this structure to "overflow" by up to a fraction β. (<ref>) represents an OAR to which we do not attach a DVC for now. (<ref>) represents a PTV to which we do not attach a DVC for now. (<ref>) are the nonnegativity constraints on the solution vector of intensities. The novelty of the model lies in (<ref>). It says that since we originally demanded A_1x≤ b^1 in (<ref>), we must look at the "plussed difference vector" (A_1x-b^1)_+. It is nonnegative and has a nonzero component exactly and only in components that belong to row inequalities in (<ref>) for which (<ref>) is violated. The zero-norm of (A_1x-b^1)_+ is thus equal to the number of those violations, and (<ref>) restricts this number to be not greater than α m_1, where m_1 is the total number of row inequalities (i.e., voxels) in the OAR described by (<ref>). Thus, (<ref>) guarantees that the number of violations by up to β in (<ref>) remains bounded by α m_1. In the following we propose to use an efficient iterative projection method to solve Problem <ref>.

§.§ Projection methods for feasibility-seeking

Projections onto sets are used in a wide variety of methods in optimization theory, but here projection methods refer to iterative algorithms that use projections onto sets while relying on the general principle that when a family of, usually closed and convex, sets is present, then projections onto the given individual sets are easier to perform than projections onto other sets (intersections, image sets under some transformation, etc.) that are derived from the given family of individual sets. Projection methods may have different algorithmic structures, such as block-iterative projections (BIP) or string-averaging projections (SAP), of which some are particularly suitable for parallel computing, and they demonstrate nice convergence properties and/or good initial convergence patterns. This class of algorithms has witnessed great progress in recent years and its member algorithms have been applied with success to many scientific, technological and mathematical problems. See, e.g., the 1996 review <cit.>, the recent annotated bibliography of books and reviews <cit.> and its references, the excellent book <cit.>, or <cit.>.
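To make the sparsity-norm constraint (<ref>) concrete, the following short sketch (function name ours) checks whether a given intensity vector satisfies it:

```python
import numpy as np

def dvc_satisfied(A1, x, b1, alpha):
    """Evaluate the sparsity constraint ||(A1 x - b1)_+||_0 <= alpha*m1:
    the number of OAR voxels whose dose exceeds the bound b1 may not
    exceed the fraction alpha of the m1 voxels."""
    excess = np.maximum(A1 @ x - b1, 0.0)   # the "plussed difference vector"
    return np.count_nonzero(excess) <= alpha * b1.size
```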
For the LIFP of Problem <ref> one can use any of a variety of projection methods to handle linear inequality constraints. The most famous of those might be the Agmon-Motzkin-Schoenberg (AMS) cyclic feasibility-seeking algorithm <cit.>. In this paper we adopt a projection method of a particular nature, namely, the automatic relaxation method (ARM) for solving interval linear inequalities of <cit.>. ARM has two advantages over other projection methods applicable to this problem: (i) it handles in each iteration an interval constraint and does not need to handle the right-hand side and left-hand side inequalities of an interval separately; (ii) additionally, it automatically implements a relaxation strategy for the projections which takes into account how far the point that needs to be projected is from the hyperslab defined by an interval constraint, and automatically and continuously adjusts the relaxation parameter for the projection accordingly. The ARM generalizes the algebraic reconstruction technique ART3 <cit.> and is further discussed in Subsection 5.10 of Censor and Zenios <cit.>.

§.§ Algorithmic approach

First we observe that Problem <ref> is a split feasibility problem. Split feasibility problems were introduced first in <cit.> and further studied in <cit.> and many other publications. The constraints (<ref>)–(<ref>) can be collectively described by c≤ Ax≤ b, where A is the (m_1+m_2+m_3)× n matrix obtained by stacking the blocks A_1, A_2 and A_3, b is the (m_1+m_2+m_3)-dimensional vector obtained by stacking (1+β)b^1, b^3 and b^4, and c is the (m_1+m_2+m_3)-dimensional vector obtained by stacking 0, b^2 and 0. These constraints, along with (<ref>), all reside in the space R^n of intensity vectors x. On the other hand, the sparsity constraint (<ref>) takes place in the space R^m_1 where the vectors of doses in the OAR (<ref>) are, namely, the vector b^1 and the vectors y=A_1x. Therefore, we must use not plain feasibility-seeking methods but feasibility-seeking methods for split feasibility problems. In the space R^n of intensity vectors we define the set C:={x∈ R^n | c≤ Ax≤ b}∩ R_+^n, where A, b and c are as above and R_+^n is the nonnegative orthant of R^n. In R^m_1, the space of dose vectors of the OAR structure represented by (<ref>), we define the set Q:={y∈ R^m_1 | ‖(y-b^1)_+‖_0≤α m_1}, with b^1 and α m_1 as in (<ref>). If a point y=A_1x is in Q then it is guaranteed to fulfil (<ref>). So, our split feasibility problem is to find a point x^∗∈ C such that A_1x^∗∈ Q, precisely describing Problem <ref> above. Common feasibility or split feasibility problems deal with convex sets, but here we observe that Q is not a convex set. However, we show below how to project onto it orthogonally, thus enabling the use of a feasibility-seeking projection method for our Problem <ref>. To solve the split feasibility formulation of Problem <ref> we propose to use the CQ-algorithm <cit.> for the sets C and Q given by (<ref>) and (<ref>), respectively. It has the advantage that it does not require calculating the inverse A_1^-1 of A_1 in order to "go back" from R^m_1 to R^n within the iterative process. Instead, it uses the transposed matrix A_1^T, which is readily available. The CQ-algorithm <cit.> is in fact a projected Landweber method for the split feasibility formulation of Problem <ref>. In the sequel, P_Ω(z) denotes an orthogonal projection of a vector z onto a set Ω.
All data quantities mentioned below are as in Problem <ref>. Since Q is not a convex set, there might be more than one point for P_Q in (<ref>) below; therefore, the symbol ∈ therein means that x^k+1 could be any projection point onto Q of the vector in the parentheses whose projection onto Q is sought after, and can be arbitrarily chosen from those if more than one exists. Next we explain how to do the projections onto C and onto Q, and how to choose the parameter γ in (<ref>). Since Q of (<ref>) is not convex, the projection P_Q may be multivalued. Nevertheless, for any z∈ R^m_1, we can calculate P_Q(z) by using the following formula: P_Q(z)=P_Q̂(z-b^1)+b^1, where Q̂:={y∈ R^m_1 | ‖ y_+‖_0≤α m_1}. Hence the projection of a point z∈ R^m_1 onto the set Q of (<ref>) is obtained by projecting the shifted point (z-b^1) onto the set Q̂ and adding b^1 to the result. The proof of this fact can be found in the Appendix. Therefore, the problem reduces to computing a projection onto Q̂. This is done according to the following recipe: First count how many components of (z-b^1) are positive, say ℓ. Then, P_Q̂(z-b^1)=(z-b^1) if ℓ≤α m_1, and P_Q̂(z-b^1)=w if ℓ>α m_1, where w is the vector obtained from (z-b^1) by replacing its ℓ-α m_1 smallest positive components by zeros and leaving the others unchanged. If ℓ≤α m_1 then the point (z-b^1) is already inside Q̂, thus P_Q̂(z-b^1)=(z-b^1). We will use the above for z=A_1x^k in (<ref>). Following the seminal CQ-algorithm <cit.>, designed for the case when both sets C and Q are convex, we propose that the parameter γ in (<ref>) be user-chosen from the open interval 0<γ<2/θ, where θ is pre-calculated once. To do so we employ <cit.> by using the squared Frobenius matrix norm ‖ A_1‖_F^2 and defining θ:=‖ A_1‖_F^2=∑_i=1^m_1 ∑_j=1^n |a_ij|^2, where for i=1,2,…,m_1 and j=1,2,…,n, the entries of A_1 are a_ij. In the practical implementation we replace the projection onto C (<ref>) by a sequence of projections onto the individual inequalities of the constraints (<ref>)–(<ref>), which are collectively described by c≤ Ax≤ b, where A, b and c are as in (<ref>), (<ref>) and (<ref>), respectively, according to a feasibility-seeking projection method of our choice. All of the above leads to our proposed Dose-Volume Split-feasibility (DVSF) Algorithm. Algorithm <ref> is a general scheme that is made specific by choosing a feasibility-seeking projection method to be used in its Step 3. Consult Bauschke and Borwein <cit.> for a review of such algorithms, see Censor and Cegielski <cit.> for an annotated bibliography of books and reviews on the subject and Censor et al. <cit.> for a review with experimental results.
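The projection recipe above, and the CQ-type update that uses it, translate directly into code. The following is a sketch (function names ours); the relaxed sequential projections of Step 3 of Algorithm <ref> onto the dose constraints would follow each such update.

```python
import numpy as np

def project_onto_Q(z, b1, alpha):
    """Projection onto Q = {y : ||(y - b1)_+||_0 <= alpha*m1} via
    P_Q(z) = P_Qhat(z - b1) + b1: if the number l of positive components
    of (z - b1) exceeds the violation budget alpha*m1, replace the
    l - alpha*m1 smallest positive components by zeros."""
    y = np.asarray(z, dtype=float) - b1
    budget = int(np.floor(alpha * y.size))
    pos = np.flatnonzero(y > 0)
    if pos.size > budget:
        smallest_first = pos[np.argsort(y[pos])]   # positive entries, ascending
        y[smallest_first[:pos.size - budget]] = 0.0
    return y + b1

def dvsf_inner_step(x, A1, b1, alpha, gamma):
    """CQ-type update of Byrne used within the DVSF scheme (a sketch):
        x <- x + gamma * A1^T (P_Q(A1 x) - A1 x),
    with 0 < gamma < 2/theta and theta = ||A1||_F^2 = np.sum(A1**2).
    The projection onto C is handled separately by sequential projections
    onto the individual dose inequalities."""
    y = A1 @ x
    return x + gamma * (A1.T @ (project_onto_Q(y, b1, alpha) - y))
```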
We adopted here the automatic relaxation method (ARM) for feasibility-seeking <cit.>. We give a generic description of this algorithm by considering the problem of iteratively solving large and possibly sparse systems of interval linear inequalities of the form w_j≤⟨ a^j,x⟩≤ v_j, j=1,2,...,p, where the a^j∈ R^n are given for all j, and w=(w_j)∈ R^p and v=(v_j)∈ R^p are given too. Assuming that the system is feasible, an x^∗∈ R^n which solves (<ref>) is required. Geometrically, the system represents p nonempty hyperslabs in R^n, each being the nonempty intersection of a pair of half-spaces. If we are willing to ignore the slab structure of the problem it could be addressed as a system of 2p linear one-sided inequalities and solved by the Agmon-Motzkin-Schoenberg (AMS) algorithm <cit.>. The ARM takes advantage of the interval structure of the problem by handling in every iterative step a pair of inequalities, and it also realizes a specific relaxation principle (see <cit.> for details) in an automatic manner. External relaxation parameters are available on top of the built-in automatic relaxation principle. For every hyperslab of the system (<ref>), denote by H_j^+:={x∈ R^n | ⟨ a^j,x⟩=v_j} and H_j^-:={x∈ R^n | ⟨ a^j,x⟩=w_j} its bounding hyperplanes. The median hyperplane is H_j^m:={x∈ R^n | ⟨ a^j,x⟩=(v_j+w_j)/2}, and the half-width ψ_j of the hyperslab is ψ_j=(v_j-w_j)/(2∥ a^j∥), where ∥·∥ stands for the Euclidean 2-norm. The signed distance of a point z∈ R^n from the j-th median hyperplane H_j^m is given by d(z,H_j^m)=(⟨ a^j,z⟩-(v_j+w_j)/2)/∥ a^j∥. Denoting d_j(k):=d(x^k,H_j(k)^m), the automatic relaxation method sweeps cyclically over the hyperslabs and applies to each an automatically relaxed projection step determined by d_j(k) and ψ_j(k) (see <cit.> and Subsection 5.10 of Censor and Zenios <cit.> for the precise update rule).
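In the spirit of the description above, a minimal sketch of one cyclic step for a single hyperslab is given below. Names are ours, and a fixed external relaxation parameter λ stands in for ARM's built-in automatic relaxation, which we do not reproduce here; the cited references give the exact update.

```python
import numpy as np

def hyperslab_step(x, a, w, v, lam=1.0):
    """One cyclic step for the interval constraint w <= <a,x> <= v.
    d is the signed distance from the median hyperplane H_j^m and psi
    the half-width of the hyperslab, as defined above. If x already lies
    inside the hyperslab nothing is done; otherwise x is moved onto the
    nearest bounding hyperplane, scaled by the relaxation parameter lam.
    (The full ARM additionally adjusts the relaxation automatically with
    the size of the violation.)"""
    norm_a = np.linalg.norm(a)
    d = (a @ x - 0.5 * (v + w)) / norm_a   # signed distance to median plane
    psi = (v - w) / (2.0 * norm_a)         # half-width of the hyperslab
    if abs(d) <= psi:
        return x                           # constraint already satisfied
    overshoot = d - np.sign(d) * psi       # distance beyond the slab
    return x - lam * overshoot * a / norm_a
```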
§.§ Performance Testing

Performance tests with two different geometries were carried out to verify the functionality of the proposed algorithmic structure for IMRT. Applications to IMPT are presented in the current work. However, the algorithm is not proton specific, and is equally applicable to any form of IMRT. Only the values of the matrix A differ when different forms of radiation are used.

§.§.§ Simplified 2D C-shaped geometry

A 2D test geometry was defined to simulate an axial cross-section of a tumour volume surrounding an organ at risk. The test geometry is illustrated in Figure <ref>. Structure pixels were defined with a resolution of 1 mm, also coinciding with the dose grid. A proton pencil beamlet spacing of 2 mm, evenly distributed throughout the PTV structure, was used. Three beam angles were used to deliver dose to the PTV area. Each beam contained 146 proton pencil beamlets. The dose deposited by each pencil beamlet in the dose grid was calculated with the Monte Carlo toolkit Geant4 <cit.> and recorded in a text file. The simulated beamlets were uniform circular proton beams of 2 mm diameter. A pre-absorber made of 5.5 cm of polyethylene was inserted in the beams 5 cm in front of the irradiated geometry in order to smooth the Bragg peaks and avoid dose distribution ripples due to beamlet spacing. The beamlet energies for each aiming point were extracted from a calibration curve. The energies used ranged from 118.5 MeV to 153 MeV with a resolution of 0.5 MeV. The material of all the structures of the irradiated geometry was assumed to be water. The standard electromagnetic physics (G4EmStandardPhysics) and hadron physics models (G4HadronPhysicsQGSP_BIC_HP) were used for proton tracking. Hadron elastic scattering physics, stopping physics, ion physics and decay models were also activated. A range cut of 0.1 mm was set for all particles. For each beamlet, 10^6 events were simulated and the mean absorbed dose per proton was calculated at each pixel of the dose grid. A series of dose-volume constraints (DVCs) was defined to verify the functionality of the algorithm. These included:
* dose only constraints (DOCs) applied to both the PTV and OAR structures
* a single DVC associated with a single structure (the OAR structure)
* multiple (two) DVCs associated with a single structure (the OAR structure)
* DVCs associated with multiple structures (the PTV structure and the OAR structure)
At this point it is instructive to reconcile the dose-volume terminology used in the current work and the terminology commonly used in the literature. Let us consider an example where a prescription has been made to an OAR such that only 20% of the volume can receive more than 40 Gy and none of the volume can receive more than 50 Gy. Using the terminology of the current work, this would correspond to values of α = 0.2, b^1 = 40 Gy, and β = 0.25 in Problem <ref>. Using the common terminology, this would correspond to D_20%≤ 40 Gy and D_max = 50 Gy.
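The mapping in this example generalizes directly; a small helper (name ours) converts the clinical notation into the model parameters of Problem <ref>:

```python
def dvc_to_model(X_gray, Y_percent, D_max):
    """Map a clinical DVC 'D_Y% <= X Gy with hard maximum D_max' onto
    (alpha, b1, beta): alpha is the allowed violating fraction, b1 the
    soft dose bound and beta the allowed relative overflow above b1."""
    alpha = Y_percent / 100.0
    b1 = X_gray
    beta = (D_max - X_gray) / X_gray
    return alpha, b1, beta

print(dvc_to_model(40.0, 20.0, 50.0))   # -> (0.2, 40.0, 0.25), as in the text
```

Table <ref> lists the combinations of DVCs enforced in the current work, using the common terminology of IMRT DVCs. The dose prescriptions and percentage volume violations were chosen to allow for a demonstration of the functionality of the algorithm. The initial pencil beam intensity vector before inverse planning was set to unity. The dose distribution resulting from the initial intensity vector is shown in Figure <ref>(a). The proposed algorithm was run for 2000 cycles for each prescription listed in Table <ref>. In this terminology, one cycle corresponds to one complete processing of all DVCs and DOCs applied to each pixel within both the PTV and OAR structures.

§.§.§ Clinical 3D geometry

In keeping with the 2D geometry, a base of skull chordoma IMPT treatment plan was chosen due to the challenging constraints imposed by a target structure surrounding an avoidance structure. The Philips Pinnacle^3 treatment planning system (Philips Healthcare, Koninklijke Philips N.V.) was used to contour the PTV and brainstem. The exported DICOM RT (structure) files were imported into a MATLAB (The MathWorks, Inc.) script and the brainstem and PTV contours were mapped over the CT coordinates. A dose grid was created in MATLAB to match that defined in Pinnacle^3. The dimensions were 42 × 43 × 9 voxels with resolutions of 2 mm, 2 mm and 3 mm in the x, y and z dimensions, respectively. The dose grid was twice as large as the CT pixel size in the x and y dimensions and equivalent to the CT resolution in the z dimension. A reduced number of slices (9) was required due to memory restrictions encountered during the export of pencil beamlet doses. An IMPT treatment plan was created in the Pinnacle^3 research release of proton pencil beam scanning (PBS). Two beams were targeted at the PTV from angles of 80° and 280°, containing 574 and 564 beamlets, respectively. A range shifter of 7.5 cm thickness was used with both beams to ensure proximal PTV coverage. Distal and proximal margins for pencil beam placement were automatically calculated as a percentage of proton range. The dose grid resulting from each unit intensity beamlet was exported from Pinnacle^3. Beamlet parameters were set to 80% layer overlap, a lateral spot resolution of 0.6 cm, a lateral target margin of 0.4 cm and a 3 standard deviation dose spread during dose calculation. Dose was calculated with the analytical PBS algorithm, which includes nuclear attenuation and an energy and material dependent multiple Coulomb scattering model. For each structure, A-matrices were created by combining the geometry defined by the DICOM RT structures and the dose grid obtained for each beamlet. Each 3D beamlet dose grid was rearranged to a 1D vector which became a column of an A-matrix. Each row of the A_OAR matrix corresponded to a voxel of the brainstem and likewise each row of the A_PTV matrix corresponded to a voxel of the PTV. Two DVCs were tested for the base of skull chordoma IMPT treatment plan (see Table <ref>). The DVCs differed in the dose objectives for the brainstem while keeping the PTV objectives constant.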
The same DVCs were applied consistently for both the DVSF algorithm (Algorithm <ref>) and Pinnacle^3.Independent values for the parameter γ of (<ref>) were used for the OAR and PTV and are denoted by γ_PTV and γ_OAR. These values were determined from the structure-specific calculation of θ in (<ref>), denoted by θ_PTV and θ_OAR. The relaxation parameters λ_k of (<ref>) are fixed throughout the iterations and represented by λ_PTV and λ_OAR.§ RESULTS §.§ Simplified 2D C-shaped geometryThe dose distributions following inverse planning for Prescriptions 1 and 4 in Table <ref> are shown in Figure <ref>(b) and <ref>(c). The dose-volume histograms following inverse planning for all cases listed in Table <ref> are presented in Figure <ref>.The dose distributions (Figure <ref>) allow for a qualitative assessment of the functionality of the DVSF algorithm (Algorithm <ref>). It is evident that the dose resulting from unit intensity pencil beamlets is successfully modulated toward the desired dose distribution. However, for a quantitative assessment the dose-volume histograms must be considered. When Prescription 1 DOCs were applied the dose objectives on the PTV structure could not be met (Figure <ref>(a)). Introducing the DVC on the OAR structure relaxed these conditions and resulted in satisfaction of the dose objectives on the PTV structure (Figure <ref>(b)). While the DVC on the OAR structure was not achieved in Prescription 2, continued iterations would have resulted in a dose distribution approaching the DVC more closely. The DVSF algorithm (Algorithm <ref>) was shown to function with multiple DVCs applied to a single structure (Figure <ref>(c)), and with DVCs applied to multiple structures (Figure <ref>(d)). §.§ Clinical 3D geometryCumulative DVHs for Prescription 1 of Table <ref> using the DVSF algorithm (Algorithm <ref>) and that produced by the Pinnacle^3 inverse planning algorithm are shown in Figure <ref>. All constraints of the less challenging dose objectives were met by the DVSF algorithm (Algorithm <ref>) whereas Pinnacle^3 exceeded the maximum dose for the OAR and did not satisfy the PTV minimum dose constraint. It should be noted that the Pinnacle^3 inverse planning was run only once with unit weighting on all dose objectives. It is possible that alteration of the objective weightings by trial-and-error may have resulted in a more desirable dose distribution. However, the objective of the current work was to compare the inherent ability of the algorithms to satisfy the inverse problem, and as such, iterative plan refinement by altering objective weights was not considered. Dose distributions for Prescription 1 of Table <ref> are shown in a single axial slice in Figure <ref>. The DVSF algorithm (Algorithm <ref>) showed higher conformality of the target structure.Cumulative DVHs for Prescription 2 of Table <ref> are shown in Figure <ref>(b). It is clear that both the DVSF algorithm (Algorithm <ref>) and Pinnacle^3 had more difficulty meeting the dose objectives in this case. The DVSF algorithm (Algorithm <ref>) was better able to meet the hard dose constraints when compared to Pinnacle^3 but the latter was closer to meeting the D_95≥ 70 Gy DVC on the PTV. Dose distributions for Prescription 2 of Table <ref> are shown in a single axial slice in Figure <ref>. Both dose distributions show cold spots in the target region.§ DISCUSSION AND CONCLUSIONA new DVSF algorithm (Algorithm <ref>) based on feasibility-seeking has been successfully applied to IMPT inverse planning in the current work. 
The proposed DVSF algorithm (Algorithm <ref>) is based on a modification of the CQ-algorithm of Byrne <cit.> and is capable of directly incorporating the DVCs associated with radiation therapy prescriptions into the split feasibility-seeking problem statement. Our DVSF algorithm (Algorithm <ref>) is not restricted to IMPT and is equally applicable to other forms of IMRT inverse planning. Test cases consisted of a simplified 2D C-shaped target surrounding an avoidance structure and a clinical base of skull chordoma abutting the brainstem. The DVSF algorithm (Algorithm <ref>) performs orthogonal projections to satisfy both the DVCs and the lower and upper dose constraints. The AMS cyclic projection method <cit.> was implemented for single-sided inequality dose objectives and the ARM algorithm of <cit.> was implemented for interval inequalities (i.e., upper and lower dose bounds for a given structure). A series of experiments was performed with the 2D C-shaped geometry using varying DVCs to validate the functionality of our DVSF algorithm (Algorithm <ref>). While the DVC aims were not met in all cases within the allowed number of iterations, the shape of the DVH curve verified that the algorithm was attempting to meet these objectives. Experimentation with user-defined relaxation parameter values γ and λ was performed to investigate the effect of these settings on algorithmic performance. When λ was left at the fixed value of 1, it was found that γ values closer to the upper allowable limit of 2/θ were required to meet the DVC aims. Further work concerning automatic choice of these user-defined parameters is currently being undertaken and will be presented in future investigations. A clinical 3D IMPT treatment geometry was also investigated. The performance of the DVSF algorithm (Algorithm <ref>) was compared to that of the research release of Pinnacle^3 with proton pencil beam scanning. The shapes of the DVHs differed for the two inverse planning algorithms. For the prescriptions investigated, our DVSF algorithm (Algorithm <ref>) was found to result in a more conformal dose distribution when assessing isodose contours and DVH distributions. It is acknowledged that the dose distributions obtained with Pinnacle^3 may be improved with the addition of planning structures. However, to allow for a direct comparison of the inverse planning algorithms, no such structures were included in the treatment planning method. While the implementation of the DVSF algorithm (Algorithm <ref>) was sequential in the current work, the structure of the algorithm lends itself to parallelization. For example, block-iterative or string-averaging projection operators may be used when performing the orthogonal projections described in Step 3 of Algorithm <ref>. Such implementations will not only have benefits in computational speed, but may also result in superior dose distributions, as has been observed in the use of these algorithms in tomographic image reconstruction <cit.>. Further work will examine the potential of block-iterative and string-averaging algorithmic schemes for the DVSF algorithm (Algorithm <ref>).

§ APPENDIX

Here is a proof of formula (<ref>) for the projection calculation onto the non-convex set Q. We show that the following translation formula, P_Q(z)=P_Q̂(z-b^1)+b^1, holds true for every z∈ R^m_1, despite the fact that P_Q and P_Q̂ are set-valued, i.e., a point z∈ R^m_1 might have more than one projection onto the set.
Note that Q̂=Q-b^1. By the definition of the projection of a point onto a set, q_0∈ P_Q(z) if and only if q_0∈ Q and ‖ z-q_0‖≤‖ z-q‖ for all q∈ Q. Similarly, (q_0-b^1)∈ P_Q̂(z-b^1) if and only if (q_0-b^1)∈Q̂ and ‖(z-b^1)-(q_0-b^1)‖≤‖(z-b^1)-q̂‖ holds for every q̂∈Q̂. Therefore, by (<ref>), (<ref>) and (<ref>), we have the following equivalences:

q_0∈ P_Q(z)
⟺ q_0∈ Q and ∀ q∈ Q: ‖ z-q_0‖≤‖ z-q‖
⟺ (q_0-b^1)∈Q̂ and ∀ q∈ Q: ‖(z-b^1)-(q_0-b^1)‖≤‖(z-b^1)-(q-b^1)‖
⟺ (q_0-b^1)∈Q̂ and ∀ q̂∈Q̂: ‖(z-b^1)-(q_0-b^1)‖≤‖(z-b^1)-q̂‖
⟺ (q_0-b^1)∈ P_Q̂(z-b^1)
⟺ q_0∈ P_Q̂(z-b^1)+b^1,

which completes the proof.

Acknowledgments. We thank the two anonymous referees for their constructive comments which helped us improve the paper. The work of Y. Censor and R. Schulte was supported by Research Grant No. 2013003 of the United States-Israel Binational Science Foundation (BSF) and by Award No. 1P20183640-01A1 of the National Cancer Institute (NCI) of the National Institutes of Health (NIH). The authors thank Philips for technical assistance with Pinnacle^3 software for this research.
http://arxiv.org/abs/1702.07925v1
{ "authors": [ "S. Penfold", "R. Zalas", "M. Casiraghi", "M. Brooke", "Y. Censor", "R. Schulte" ], "categories": [ "physics.med-ph", "math.OC" ], "primary_category": "physics.med-ph", "published": "20170225172538", "title": "Sparsity constrained split feasibility for dose-volume constraints in inverse planning of intensity-modulated photon or proton therapy" }
cihan.bayindir@isikun.edu.tr Engineering Faculty, Işık University, İstanbul, Turkey

In this paper we propose an efficient tomographic approach for the early detection of 2D rogue waves. The method relies on the principle of detecting conical spectral features before the rogue wave becomes evident in time. More specifically, the proposed method is based on constructing the 1D Radon transforms of the emerging conical 2D spectra of the wavefield using compressive sampling (CS) and then constructing the 2D spectra from those projections using the filtered back projection (FBP) algorithm. For the 2D rogue wave models we use the radially symmetric Peregrine soliton and Akhmediev-Peregrine soliton solutions of the nonlinear Schrödinger equation, which can model characteristics of the peaked structure of 2D rogue waves and their conical spectra, which may be treated as a sparse signal. We show that the emerging conical spectra of 2D rogue waves before they become evident in time can be acquired efficiently by the proposed method.

42.65.-k, 42.65.Tg, 47.35.Bb, 42.65.Ky

A Tomographic Approach For the Early Detection of 2D Rogue Waves
Cihan Ahmet Bayındır
December 30, 2023
================================================================

§ INTRODUCTION

Rogue (freak) waves are generally described as high amplitude waves with a height bigger than 2-2.2 times the significant wave height in a stochastic wavefield <cit.>. They have been extensively studied in recent years in fields including but not limited to hydrodynamics, optics, quantum mechanics, Bose-Einstein condensation, acoustics and finance, just to name a few <cit.>. The research started with the investigation of the nonlinear Schrödinger equation (NLSE). Discovery of the unexpected rational rogue wave solutions of the NLSE resulted in seminal studies of rogue waves, such as <cit.>. Rogue wave dynamics of some of the extensions of the NLSE, such as the Sasa-Satsuma and the Kundu-Eckhaus equations, have also been studied recently <cit.>. It is natural to expect that in a medium whose dynamics are governed by nonlinear equations such as the NLSE and its extensions, rogue waves can also emerge; therefore, investigation of the dynamics of different models needs further attention. Development of rogue wave early warning systems and technology is an active area of research and is crucially important for the marine environment, to safeguard ocean travel, oceanic structures and machinery such as wave energy harvesters <cit.>. Two of the few early detection methods proposed in 1D are to use the emerging triangular Fourier spectra (i.e. triangular supercontinuum generation) to detect whether a rogue wave is going to emerge, and to use the emerging wavelet spectra to locate its emergence location <cit.>. These methods work well for single rogue waves observed in fiber optics and hydrodynamic wave flumes and lead to early warning time scales on the order of the temporal width of the rogue wave. However, enhancement of the early warning times for stochastic wavefields requires further attention, and development of realistic solutions, such as the electronic equipment needed to capture rogue wave emergence, may take long efforts. To our best knowledge, the early detection of rogue waves has only been studied in 1D and no studies exist about the early detection mechanisms of 2D rogue waves. With this motivation, we analyze the spectral properties of 2D rogue waves.
Since the correct form of the 2D NLSE is not integrable, we use a radially symmetric version of the 1D NLSE and its Peregrine and Akhmediev-Peregrine soliton solutions. Although this form of the 2D NLSE does not rely upon an analytical basis, it can exhibit the characteristics of the localized peaked structures of 2D rogue wave profiles and their conical spectral forms, very similar to the 1D case. We propose to use the emerging conical spectra of the 2D rogue waves before they become evident in time as an early detection technique, and thus we discuss their dynamics. With this aim, we propose an efficient method for the acquisition of the emerging conical 2D rogue wave spectra. We first construct the 1D Radon transforms of the emerging conical 2D spectra of the wavefield using CS. Then we construct the 2D spectra from those projections using FBP. Since the emerging 2D conical spectra can be treated as a sparse signal, the method can successfully capture the emerging conical spectra. We numerically show that this approach can produce results indistinguishable from the classical sampling approach, but it supersedes the classical sampling approach due to a greatly reduced sampling requirement.

§ METHODOLOGY

§.§ Review of the Nonlinear Schrödinger Equation

The 2D dynamics of nonlinear ocean waves, optical waves and quantum vibrations can be modeled by the 2D NLSE <cit.>. Since the 2D NLSE is not integrable, some integrable extensions have been proposed in the literature which admit rational soliton solutions <cit.>. However, whether they can model the realistic dynamics of 2D rogue waves or not is a question which needs further attention. In order to analyze the early detection mechanism of the 2D rogue waves, we consider a radially symmetric version of the 1D NLSE in this study. Although this 2D model does not have an analytical basis, it can exhibit the localized peak structures of the rogue waves. The radially symmetric rational soliton solutions of the NLSE can be used to understand the dynamics of 2D rogue waves, which are accepted as accurate rogue wave models in 1D <cit.>. Thus we consider the radially symmetric NLSE given as iψ_t + (1/2)ψ_rr + |ψ|^2 ψ = 0, where r=√(x^2+y^2), t are the spatial and temporal variables, i is the imaginary number and ψ is the complex amplitude, known as the wavefunction in optics and quantum mechanics but the wavefield envelope in hydrodynamics. This notation is mainly used in hydrodynamics and quantum mechanics, whereas the t and r axes are switched in fiber optics studies, where the NLSE is used to describe the dynamics of light pulses in nonlinear fiber optical media. It is known that the NLSE given by Eq. (<ref>) admits many different types of analytical solutions, among which the first and higher order rational soliton solutions are considered as accurate rogue wave models <cit.>. For stochastic wavefields where the analytical solution is unknown, the NLSE can be numerically solved by techniques such as the spectral method <cit.>. However, in this study we limit ourselves to the analytical solutions of the NLSE. The radially symmetric 2D Peregrine soliton can be written as ψ_1=[1 - 4(1+2it)/(1+4r^2+4t^2)] exp[it], where t and r denote the time and space variables, respectively <cit.>. The Peregrine soliton is only the first order rational soliton solution in the Darboux hierarchy of the NLSE, and higher order rational soliton solutions do exist <cit.>.
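The Peregrine field and its 2D spectrum are straightforward to evaluate numerically. The sketch below (names and grid choices ours) samples ψ_1 on a grid well before the peak (t = -5) and at the peak (t = 0); subtracting the spatial mean approximately removes the plane-wave background so that the localized spectral feature of interest is exposed.

```python
import numpy as np

def peregrine_2d(x, y, t):
    """Radially symmetric 2D Peregrine soliton:
    psi_1 = [1 - 4(1 + 2it)/(1 + 4r^2 + 4t^2)] exp(it), r^2 = x^2 + y^2."""
    r2 = x**2 + y**2
    return (1.0 - 4.0 * (1.0 + 2.0j * t)
            / (1.0 + 4.0 * r2 + 4.0 * t**2)) * np.exp(1j * t)

x = np.linspace(-20.0, 20.0, 256)
X, Y = np.meshgrid(x, x)
for t in (-5.0, 0.0):
    psi = peregrine_2d(X, Y, t)
    spectrum = np.fft.fftshift(np.abs(np.fft.fft2(psi - psi.mean())))
    print(f"t = {t:+.1f}: max |psi| = {np.abs(psi).max():.3f}")  # 3.0 at t = 0
```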
Throughout many simulations <cit.> and some experiments <cit.>, it has been confirmed that rogue waves can be in the form of the first order (Peregrine) and higher order rational soliton solutions of the NLSE. The second order rational soliton solution of the NLSE is the Akhmediev-Peregrine soliton <cit.>, which is considered to be a model for rogue waves with higher amplitude than the Peregrine soliton. The formula of the Akhmediev-Peregrine soliton is given as ψ_2 = [1 + (G_2 + it H_2)/D_2] exp[it], where G_2 = 3/8 - 3r^2 - 2r^4 - 9t^2 - 10t^4 - 12r^2t^2, H_2 = 15/4 + 6r^2 - 4r^4 - 2t^2 - 4t^4 - 8r^2t^2, and D_2 = (1/8)[3/4 + 9r^2 + 4r^4 + (16/3)r^6 + 33t^2 + 36t^4 + (16/3)t^6 - 24r^2t^2 + 16r^4t^2 + 16r^2t^4], where t is the time and r is the space parameter <cit.>. Using the Darboux transformation formalism, this soliton can be obtained using the Peregrine soliton as the seed solution <cit.>. Many numerical simulations also confirm that rogue waves in the NLSE framework can be in the form of the Akhmediev-Peregrine soliton <cit.>; however, to our best knowledge, an experimental verification of this soliton does not exist yet. We use the 2D radially symmetric versions of the Peregrine and Akhmediev-Peregrine solitons as 2D rogue wave models. §.§ Review of Compressive Sampling Compressive sampling (CS) is an efficient sampling technique which exploits the sparsity of a signal to reconstruct it from far fewer samples than the classical Shannon-Nyquist sampling theorem requires <cit.>. CS has been intensively studied as a mathematical tool in applied sciences and engineering, and currently some engineering devices, such as single pixel video cameras and efficient A-D converters, rely on the CS algorithm. We give a very brief summary of CS in this section and refer the reader to <cit.> for a comprehensive discussion and derivation. Let ψ be a K-sparse signal with N elements, that is, only K of the N elements of ψ are nonzero. Using an orthonormal basis transformation with transformation matrix Ψ, ψ can be represented in any transformed domain in terms of the basis functions. The most common orthogonal transformations used in the literature are the Fourier, wavelet and discrete cosine transforms. Using the orthogonal transformation, it is possible to rewrite the signal as ψ = Ψψ̂, where ψ̂ is the coefficient vector. Keeping the non-zero coefficients and discarding the zero coefficients of ψ̂, it is possible to get ψ_s = Ψψ̂_s, where ψ̂_s denotes the coefficient vector with non-zero entries only. The CS theory guarantees that a K-sparse signal ψ with N elements can be reconstructed exactly from M ≥ C μ^2(Φ,Ψ) K log(N) measurements with very high probability. Here C is a positive constant and μ^2(Φ,Ψ) is the mutual coherence between the sensing basis Φ and the transform basis Ψ <cit.>. Taking M projections randomly and using the sensing matrix Φ, the sampled signal can be written as g = Φψ. Therefore the CS problem can be rewritten as min ‖ψ̂‖_l_1 subject to the constraint g = ΦΨψ̂, where ‖ψ̂‖_l_1 = ∑_i |ψ̂_i|. Among all signals that satisfy the given constraints, the solution of this l_1 minimization problem is ψ_CS = Ψψ̂. l_1 minimization is only one of the techniques that can be used for finding the solution of this optimization problem, and other methods exist <cit.>. Details of CS can be found in <cit.>. In the current study we use the sparsity property of the 1D Radon transforms of the emerging conical 2D rogue wave spectra. §.§ Review of the Filtered Back Projection Algorithm In this section we give a very brief review of the FBP algorithm.
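As a hedged illustration of the l_1 minimization step described above (our own sketch, not the paper's actual solver), the basis pursuit problem can be cast as a linear program by splitting the unknown into nonnegative parts; the toy sizes below are arbitrary:

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(Phi, g):
    """Solve min ||c||_1  s.t.  Phi @ c = g  as a linear program by
    writing c = p - q with p, q >= 0 (a standard reformulation)."""
    M, N = Phi.shape
    cost = np.ones(2 * N)                  # sum(p) + sum(q) = ||c||_1
    A_eq = np.hstack([Phi, -Phi])          # Phi @ (p - q) = g
    res = linprog(cost, A_eq=A_eq, b_eq=g, bounds=[(0, None)] * (2 * N))
    p, q = res.x[:N], res.x[N:]
    return p - q

# Toy example: recover a K-sparse vector from M < N random projections.
rng = np.random.default_rng(0)
N, M, K = 256, 64, 5
c_true = np.zeros(N)
c_true[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
c_rec = basis_pursuit(Phi, Phi @ c_true)
print(np.max(np.abs(c_rec - c_true)))      # small reconstruction error
```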
The projections of a 2D function ψ(x,y), which in our study refers to the envelope of the wavefield or the probability of finding an atomic particle at a specific (x,y) at a given time, can be computed using the Radon transform as ψ_R(r,θ) = ∫∫ ψ(x,y) δ(r - x cosθ - y sinθ) dx dy, where θ is the projection angle measured from the x axis. In a typical computerized tomography approach, these projections are obtained first, and then the full image is back projected from these projections. However, it is known that unfiltered tomographic data results in high intensity blurring at the center of the image. In order to remove such an artifact, a filter is generally applied. Here we use a ramp filter applied in the Fourier domain as ψ̃(r,θ) = F_r^-1[ |k_r| F_r ψ_R(r,θ) ], where F_r and F_r^-1 denote the forward and inverse Fourier transform operations with respect to r, and k_r is the radial wavenumber parameter. However, other choices of the filter also exist. Then the image can be reconstructed from these filtered projections by means of the back projection operation given as ψ(x,y) = B ψ̃(r,θ) = ∫_0^π ψ̃(x cosθ + y sinθ, θ) dθ. In a typical computed tomography approach, this integral is evaluated in a discrete fashion. The process summarized here is known as the FBP algorithm of computed tomography. The reader is referred to <cit.> for a comprehensive discussion of the FBP algorithm. §.§ Proposed Method In this paper we propose using the conical spectral features that appear before 2D rogue waves become evident in time as an early detection mechanism. To efficiently measure such emerging spectra, we propose a tomographic approach. We first construct the 1D Radon transforms of the emerging conical 2D spectra of the wavefield using CS. This principle works because such projections are sparse signals whose nonzero entries are located around the central wavenumber. Then, we reconstruct the 2D spectra from those projections using FBP. For the radially symmetric versions of the Peregrine and Akhmediev-Peregrine solitons, we show that the emerging conical spectral features of 2D rogue waves can be acquired efficiently by the proposed method before the waves become evident in time. The tomographic method proposed here does not necessarily have to be used with these particular reconstruction techniques. For example, CS can be utilized with a random selection of the projection angles rather than equally spaced projection angles. Instead of using FBP, it is possible to use reconstruction techniques such as the inverse Radon transform, the Fourier domain reconstruction algorithm and ordered subsets expectation maximization, just to name a few. All would have some advantages and disadvantages, but the underlying tomographic approach for the early detection of 2D rogue waves would be the same in principle for all such techniques. § RESULTS AND DISCUSSION §.§ Early Detection of the 2D Peregrine Soliton by the Proposed Method In this section we numerically test the proposed algorithm for the radially symmetric 2D Peregrine soliton. In the first step we take random samples along a slice in the physical domain to obtain the emerging triangular 1D spectra at various times. Then, by applying the l_1 minimization of the CS algorithm to those random samples acquired in the physical domain, we obtain the sparse triangular spectra. A result obtained this way is depicted in Fig. <ref>. In Fig. <ref>a, we show the 1D Peregrine soliton at times t=0 and t=2. In Fig. <ref>b, we compare the triangular spectra of the Peregrine soliton at t=0 obtained by classical and compressive sampling.
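For reference, a minimal sketch of the Radon/FBP pipeline described above, using scikit-image's radon and iradon (the filter_name keyword applies to recent scikit-image releases; older versions name it filter). The cone-shaped test image is our own stand-in for an emerging 2D spectrum, not the paper's data:

```python
import numpy as np
from skimage.transform import radon, iradon

# Stand-in for the 2D spectrum to be recovered: any nonnegative 2D array
# (here a toy cone-shaped surface; in the paper it is |FFT2(psi)|).
n = 256
x = np.linspace(-1, 1, n)
X, Y = np.meshgrid(x, x)
cone = np.clip(1.0 - 4.0 * np.sqrt(X**2 + Y**2), 0.0, None)

theta = np.arange(0.0, 180.0, 1.0)       # projection angles 0:1:179 degrees
sinogram = radon(cone, theta=theta)      # 1D Radon transforms (projections)
recon = iradon(sinogram, theta=theta, filter_name='ramp')  # ramp-filtered FBP

print(np.abs(recon - cone).mean())       # small average reconstruction error
```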
The normalized root-mean-square (nrms) difference between the two spectra depicted in Fig. <ref>b is 1.56 × 10^-10. We repeat the same procedure at t=2 and compare the triangular spectra of the Peregrine soliton at t=2 obtained by classical and compressive sampling in Fig. <ref>c, where the nrms difference between the two spectra is 7.91 × 10^-04. Both of these results are obtained using N=1024 classical and M=64 compressive samples. Due to the time-reversal property of the phenomena studied in the frame of the NLSE, the results for t=2 are no different from the results for t=-2, thus they may be used for early detection purposes. Additionally, the detection of the emerging triangular spectra can be performed starting around t=-5, and the early detection times may be longer in the Kundu-Eckhaus equation regime <cit.>. We also observe that the CS is capable of constructing the triangular spectra with far fewer samples than M=64 when the rogue wave is at its peak at t=0. The use of CS for the early detection of 1D rogue waves was introduced and studied in <cit.>. For the 2D tomographic approach proposed above, the 1D Radon transforms, i.e. the projections, of the 2D wave surface should be obtained. We obtain those projections using the perpendiculars to the slices shown above, where the necessary summations are done discretely. However, this is not a must; the 1D Radon transforms can be measured directly using compressive samples. In Fig. <ref> the radially symmetric 2D version of the Peregrine soliton at t=0 is depicted, and in Fig. <ref> its conical spectrum obtained by N_x=N_y=1024 classical samples is shown. This conical spectrum begins to develop around t=-5, thus it can be used for the early detection of the 2D radially symmetric Peregrine soliton. In Fig. <ref>, we present the same rogue wave spectrum obtained by the tomographic approach proposed above, where the 1D Radon transforms are computed using M=64 compressive samples along each of the lines equally spaced at angles of 0:1:179 degrees, and then the FBP algorithm is used for the reconstruction of the 2D spectrum from those projections. A comparison of the results depicted in Fig. <ref> and in Fig. <ref> indicates that the proposed tomographic approach can successfully capture the spectral features of 2D rogue waves, thus enabling their early detection. In order to discuss the effects of using fewer projections in the tomographic approach for the early detection of the Peregrine soliton, we depict the spectra in 3D and in contour map format obtained using 9 projections at angles of 0:20:160 degrees in Fig. <ref> and in Fig. <ref>, respectively. As expected, as the number of projections decreases, capturing the conical spectral shape of the emerging rogue wave becomes harder. At the central wavenumbers, the conical peak still appears and may be useful for early detection purposes, but it is surrounded by other spectral components, which makes it harder to recognize whether the emerging wave is a rogue wave. One possible technique to reduce the defects of a small number of projections is to select the projection angles randomly, which may lead to more accurate results, since CS performs better for a sparse signal when the selections are random. §.§ Early Detection of the 2D Akhmediev-Peregrine Soliton by the Proposed Method Next we turn our attention to the radially symmetric Akhmediev-Peregrine soliton and assess the applicability of the proposed approach for its early detection. In Fig. <ref>a, we show the 1D Akhmediev-Peregrine soliton at times t=0 and t=2. In Fig.
<ref>b, we compare the triangular spectra of the Akhmediev-Peregrine soliton at t=0 obtained by classical and compressive sampling. The normalized root-mean-square (nrms) difference between the two spectra depicted in Fig. <ref>b is 0.0016. We again repeat the same procedure at t=2 and compare the triangular spectra of the Akhmediev-Peregrine soliton at t=2 obtained by classical and compressive sampling in Fig. <ref>c, where the nrms difference between the two spectra is 0.0027. Similar to the Peregrine soliton case, both of these results are obtained using N=1024 classical and M=64 compressive samples. We also observe that, similar to the Peregrine soliton case, the CS is capable of constructing the triangular spectra with far fewer samples than M=64 when the Akhmediev-Peregrine soliton is at its peak at t=0 <cit.>. In Fig. <ref> the radially symmetric 2D version of the Akhmediev-Peregrine soliton at t=0 is depicted, and in Fig. <ref> its conical spectrum obtained by N_x=N_y=1024 classical samples is shown. This conical spectrum begins to develop around t=-5, thus it can be used for the early detection of the 2D radially symmetric Akhmediev-Peregrine soliton, as in the case of the Peregrine soliton discussed above. In Fig. <ref>, we present the same Akhmediev-Peregrine rogue wave spectrum obtained by the tomographic approach proposed above, where the 1D Radon transforms are computed using M=64 compressive samples along each of the lines equally spaced at angles of 0:1:179 degrees, and then the FBP algorithm is used for the reconstruction of the 2D spectrum from those projections. Again, a comparison of the results depicted in Fig. <ref> and in Fig. <ref> indicates that the proposed tomographic approach can successfully capture the spectral features of the 2D Akhmediev-Peregrine soliton, thus enabling its early detection before it becomes evident in time using spectral data. § CONCLUSION In this paper we have proposed an efficient method for the early detection of 2D rogue waves. We have shown that, just as the emerging triangular spectra can be used for the early detection of 1D rogue waves, the emerging conical spectra of 2D rogue waves can be used for their early warning. We have proposed and numerically tested a method which can efficiently be used to detect 2D rogue wave emergence. In the proposed method we have constructed the 1D Radon transforms of the emerging conical 2D spectra of the wavefield using CS and then reconstructed the 2D spectra from those projections using FBP. We have shown that the proposed approach can successfully and efficiently detect single rogue wave emergence in 2D, with early warning times around the temporal width of the rogue wave peak, similar to the 1D case. As future work, experimental verification of the proposed method would be necessary. It should also be tested on the analytical rogue wave solutions of NLSE-type equations which are physically significant, as well as on stochastic wavefields triggered by the modulational instability. Additionally, other options for the tomographic acquisition technique exist. These include, but are not limited to, using CS with random projections instead of equally spaced projections, and using other reconstruction algorithms such as the inverse Radon transform, the Fourier domain reconstruction algorithm and the ordered subsets expectation maximization technique instead of FBP.
Kharif C. Kharif and E. Pelinovsky, European Journal of Mechanics B: Fluids, 6, 603 (2003).
Akhmediev2009b N. Akhmediev, A. Ankiewicz and J. M. Soto-Crespo, Phys. Rev. E, 80, 026601 (2009).
bayindir2016 C. Bayındır, Phys. Lett. A, 380, 156 (2016).
Akhmediev2009a N. Akhmediev, J. M. Soto-Crespo and A. Ankiewicz, Phys. Lett. A, 373, 2137 (2009).
Akhmediev2011 N. Akhmediev, J. M. Soto-Crespo, A. Ankiewicz and N. Devine, Phys. Lett. A, 375, 2999 (2011).
FirstOpticalRW D. R. Solli, C. Ropers, P. Koonath and B. Jalali, Nature, 450, 1054 (2007).
Bay_Zeno C. Bayındır and F. Ozaydin, arXiv preprint, arXiv:1701.01997 (2017).
Bay_arxNoisyTun C. Bayındır, arXiv preprint, arXiv:1604.06604 (2016).
Bay_arxChaotCurNLS C. Bayındır, arXiv preprint, arXiv:1512.03584 (2016).
Soto2014RwSSchaotic J. M. Soto-Crespo, N. Devine, N. P. Hoffmann and N. Akhmediev, Phys. Rev. E, 90, 032902 (2014).
BayPRE1 C. Bayındır, Phys. Rev. E, 93, 032201 (2016).
BayPRE2 C. Bayındır, Phys. Rev. E, 93, 062215 (2016).
Bay_arxNoisyTunKEE C. Bayındır, arXiv preprint, arXiv:1602.05339 (2016).
Bay_arxEarlyDetectCS C. Bayındır, arXiv preprint, arXiv:1602.00816 (2016).
Zakharov1968 V. E. Zakharov, Soviet Physics JETP, 2, 190 (1968).
KunduArxiv A. Kundu, A. Mukherjee and T. Naskar, arXiv preprint, arXiv:1204.0916 (2015).
bay2009 C. Bayındır, MS Thesis, University of Delaware (2009).
demiray H. Demiray and C. Bayındır, Phys. of Plasmas, 22, 092105 (2015).
BayTWMS2016 C. Bayındır, TWMS J. App. & Eng. Math., 6, 135 (2016).
bay_cssfm C. Bayındır, TWMS J. App. & Eng. Math., 5, 298 (2015).
Karjadi2010 E. A. Karjadi, M. Badiey and J. T. Kirby, J. Acoust. Soc. Am., 127, 1787 (2010).
Karjadi2012 E. A. Karjadi, M. Badiey, J. T. Kirby and C. Bayındır, IEEE J. Oceanic Eng., 37, 112 (2012).
Bay_cssfmarx C. Bayındır, arXiv preprint, arXiv:1512.03932 (2016).
bayindir2016nature C. Bayındır, Sci. Rep., 6, 22100 (2016).
canuto C. Canuto, Spectral Methods: Fundamentals in Single Domains, Springer-Verlag (2006).
trefethen L. N. Trefethen, Spectral Methods in MATLAB, SIAM, Philadelphia (2000).
Peregrine D. H. Peregrine, J. Austral. Math. Soc. B, 25, 16 (1983).
Kibler B. Kibler, J. Fatome, C. Finot, G. Millot, F. Dias, G. Genty, N. Akhmediev and J. M. Dudley, Nat. Phys., 6, 790 (2010).
Candes E. J. Candes, Proc. Int. Cong. Math., 3, 1433 (2006).
Candes2006 E. J. Candes, J. Romberg and T. Tao, IEEE Trans. Inform. Theory, 52, 489-509 (2006).
Dudgeon D. E. Dudgeon and R. M. Mersereau, Multidimensional Digital Signal Processing, Prentice-Hall (1984).
http://arxiv.org/abs/1703.03285v1
{ "authors": [ "Cihan A. Bayindir" ], "categories": [ "physics.flu-dyn", "nlin.PS", "physics.optics" ], "primary_category": "physics.flu-dyn", "published": "20170226201611", "title": "A Tomographic Approach For the Early Detection of 2D Rogue Waves" }
Backhaul-aware Robust 3D Drone Placement in 5G+ Wireless Networks Elham Kalantari1, Muhammad Zeeshan Shakir2, Halim Yanikomeroglu3, and Abbas Yongacoglu1 1School of Electrical Engineering and Computer Science University of Ottawa, Ottawa, ON, Canada, Email: {ekala011, yongac}@uottawa.ca 2School of Engineering and Computing University of the West of Scotland, Paisley, Scotland, UK, Email: muhammad.shakir@uws.ac.uk 3Department of Systems and Computer Engineering Carleton University, Ottawa, ON, Canada, Email: halim@sce.carleton.ca ========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================= Using drones as flying base stations is a promising approach to enhance network coverage and area capacity by moving supply towards demand when required. However, the deployment of such base stations can face some restrictions that need to be considered. One of the limitations in drone base station (drone-BS) deployment is the availability of a reliable wireless backhaul link. This paper investigates how different types of wireless backhaul offering various data rates affect the number of served users. Two approaches, namely network-centric and user-centric, are introduced, and the optimal 3D backhaul-aware placement of a drone-BS is found for each approach. To this end, the total number of served users and the sum-rate are maximized in the network-centric and user-centric frameworks, respectively. Moreover, as it is preferable to reduce drone-BS movements in order to save battery, increase flight time, and reduce channel variations, the robustness of the network is examined in terms of its sensitivity to user displacements. § INTRODUCTION The utilization of drone base stations (drone-BSs) in wireless cellular networks has recently attracted a lot of attention as a promising solution to temporarily increase the capacity or coverage of an area in 5G+ networks. Drone-BSs can assist a ground network of BSs in providing high data rate coverage whenever and wherever there is an excessive need, especially in situations where this demand occurs in a rather difficult-to-predict manner <cit.>. Due to their fast deployment, drone-BSs can also address temporary coverage issues in remote or sparsely populated areas, or when terrestrial wireless infrastructure is damaged due to a natural disaster. Fig. <ref> is an illustrative diagram representing some use cases of drone-BSs in future networks. As depicted in this figure, a drone-BS can assist a ground network of base stations to inject capacity and prevent temporary congestion in places such as stadiums. It can also provide additional coverage in remote areas or when the ground base stations are out of order due to inclement weather conditions, vandalism, transmission problems, etc. §.§ Related Works There are a growing number of papers related to the integration of drone-BSs in cellular networks, discussing drone-BS placement, various use cases, and design and management challenges.
In <cit.>, a novel framework of multi-tier drone-BSs complementing terrestrial heterogeneous networks (HetNets) is envisioned, and advancements and challenges related to the operation and management of drone-BSs are discussed. In <cit.>, the design and implementation challenges of an aerial network of base stations are reported and the capabilities of different aerial platforms for carrying wireless communication systems are reviewed. In <cit.>, a vertical backhaul/fronthaul framework is suggested for transporting the traffic between the access and core networks in a typical HetNet through free space optical (FSO) links. The 3D placement of drone-BSs is considered one of the important problems in the design and implementation of drone-BS enabled HetNets. There are a few works related to the placement of drone-BSs in wireless cellular networks. In <cit.>, the authors find the minimum number of drone-BSs and their 3D placement to cover a number of users with high data rate requirements through a heuristic algorithm. They find that in a dense area, a drone-BS will decrease its height to cause less interference to farther users that are not served by it, and in a low density region, it will increase its altitude to cover a larger area and serve more users. In <cit.>, the authors find the 3D placement of a drone-BS that maximizes the number of covered users through numerical methods. In <cit.>, a closed-form expression for the probability of a line-of-sight (LoS) connection between an aerial platform and a receiver is developed, and through an analytical approach the optimum altitude that maximizes the radio coverage is obtained. In <cit.>, the optimal altitude of a drone-BS that achieves a required coverage with minimum transmit power is found. Providing maximum coverage with two drone-BSs in the presence and absence of interference is also investigated. Reference <cit.> derives the downlink coverage probability of a drone-BS as a function of the altitude and the antenna gain, and then determines the locations of drone-BSs in such a way that the total coverage area is maximized. Despite all this recent research, the wireless backhaul between the drone-BSs and the core network has not yet been considered as a limiting factor in the design and implementation of drone-BS enabled HetNets. §.§ Our Contribution The major difference between a ground-BS and a drone-BS is that the latter has a fundamental limitation in the backhaul link. A ground-BS usually has a fixed wired/wireless backhaul connection and can offer relatively high data rates to the core network. A drone-BS, on the other hand, must have a wireless backhaul; therefore, the peak data rate a drone-BS can support is limited, and it may dramatically decrease due to inclement weather conditions, especially if the link is based on FSO or mmWave technology. Therefore, an important issue, which to the best of our knowledge has not been addressed yet, is to consider the limitations and requirements of the wireless backhaul link as one of the constraints in designing and deploying drone-BSs in future 5G+ networks. The main contribution of this paper is twofold: * We propose a backhaul-limited optimal drone-BS placement algorithm for various network design parameters, such as the number of served users or the sum-rate of the served users, for heterogeneous rate requirements in a clustered user distribution. * We investigate the robustness of the drone-BS placement and study how much user movements may affect the proposed optimal solution.
The rest of this paper is organized as follows. In Section II, the system model is presented. The optimal drone-BS placement for different design parameters is described in Section III, followed by a detailed presentation of the results and related discussions. Finally, conclusions are drawn in Section IV. § SYSTEM MODEL §.§ Pathloss Model There are a limited number of studies related to air-to-ground pathloss modeling. Here, we adopt the model presented in <cit.>. That study shows that there are two main propagation groups, corresponding to receivers with LoS connections and those with non-line-of-sight (NLoS) connections which can still receive the signal from the transmitter due to strong reflections and diffractions. The probability of having a LoS connection between a transmitter and a receiver is an important factor in modeling such channels, and it is formulated as <cit.>, <cit.> P(LoS) = 1/(1 + a exp(-b((180/π)θ - a))), where a and b are constant values depending on the environment (rural, urban, etc.) and θ is the elevation angle, equal to arctan(h/r), where h is the altitude of the drone-BS and r is its horizontal distance from the receiver. In this model, shadowing is not considered; instead, the average pathloss is presented in a probabilistic manner as <cit.> PL(dB) = 20 log(4π f_c d/c) + P(LoS)η_LoS + P(NLoS)η_NLoS, where the first term is the free space pathloss (FSPL) according to the Friis equation. The variable f_c is the carrier frequency, c is the speed of light, and d is the distance between the drone-BS and the user, equal to √(h^2+r^2). P(NLoS) = 1 - P(LoS), and η_LoS and η_NLoS are the average additional losses for LoS and NLoS connections, respectively, whose values depend on the environment. §.§ Spatial User Distribution To obtain heterogeneity in the spatial user distribution, we utilize a Matérn cluster process <cit.>. It is a doubly Poisson cluster process, where the parent points, which are the centers of the clusters, are created by a homogeneous Poisson process. The daughter points, which represent users in our model, are uniformly scattered in circles of radius ν around the parent points by using another homogeneous spatial Poisson process. Thus, the density function f(z) of a given user at location z relative to its cluster center is f(z) = 1/(πν^2) if ‖z‖ ≤ ν, and 0 otherwise. § BACKHAUL-AWARE DRONE-BS PLACEMENT We assume that an area is already covered by ground-BSs, but due to an extensive temporal increase in the number of users or their required rates, some of them cannot be served by the terrestrial network due to the lack of resources such as bandwidth. We propose to integrate a drone-BS with the existing cellular network infrastructure to offer coverage to such users. The decision about which users to serve in the network is based on the chosen approach, whether it is network-centric or user-centric. The users are assumed to operate different applications with a variety of rate requirements. The total bandwidth of the drone-BS and the wireless backhaul peak rate are the limiting factors in our formulation. For the backhaul constraint, we assume that the peak aggregate rate that the wireless backhaul link of a drone-BS can support is R Mbps; so, ∑_i=1^N_U r_i · I_i ≤ R, where N_U is the total number of users that are not served by the terrestrial network, r_i denotes the data rate required by user i, and I_i is the user indicator function defined as I_i = 1 if user i is served by the drone-BS, and 0 otherwise.
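The air-to-ground model above can be summarized in a few lines of code. The sketch below is our own illustration; the environment constants a, b, η_LoS, and η_NLoS are commonly used example values for an urban setting, assumed here rather than taken from the paper's Table <ref>:

```python
import numpy as np

def p_los(h, r, a, b):
    """Probability of LoS between a drone-BS at altitude h and a ground
    user at horizontal distance r, per the model adopted above."""
    theta_deg = np.degrees(np.arctan2(h, r))   # elevation angle in degrees
    return 1.0 / (1.0 + a * np.exp(-b * (theta_deg - a)))

def mean_pathloss_db(h, r, fc, a, b, eta_los, eta_nlos):
    """Average air-to-ground pathloss in dB: FSPL plus the LoS/NLoS
    additional losses weighted by their probabilities."""
    c = 3e8                                    # speed of light (m/s)
    d = np.sqrt(h**2 + r**2)                   # drone-to-user distance
    fspl = 20.0 * np.log10(4.0 * np.pi * fc * d / c)
    p = p_los(h, r, a, b)
    return fspl + p * eta_los + (1.0 - p) * eta_nlos

# Illustrative urban parameters (assumed values, not the paper's table):
print(mean_pathloss_db(h=300.0, r=500.0, fc=2e9, a=9.61, b=0.16,
                       eta_los=1.0, eta_nlos=20.0))
```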
Another limiting factor is the total bandwidth available to the drone-BS. It can be formulated as ∑_i=1^N_U b_i · I_i ≤ B, where B is the total bandwidth of the drone-BS and b_i denotes the bandwidth required by user i, which is equal to r_i/ζ_i, where ζ_i = log_2(1+γ_i) is the spectral efficiency and γ_i is the signal-to-noise ratio (SNR) of user i. Also, we assume that a user is in the coverage of the drone-BS if its quality of service (QoS) requirement is satisfied. This can be formulated as PL_i · I_i ≤ PL_max, ∀ i, where PL_i is the pathloss of the signal received by user i and PL_max is the maximum pathloss that a user can tolerate before outage, based on its QoS requirement. Finally, our optimization problem is formulated as follows:
max_{x,y,h,{I_i}} ∑_i=1^N_U α_i · I_i
subject to:
∑_i=1^N_U r_i · I_i ≤ R,
∑_i=1^N_U b_i · I_i ≤ B,
PL_i · I_i ≤ PL_max, ∀ i,
x_min ≤ x ≤ x_max,
y_min ≤ y ≤ y_max,
h_min ≤ h ≤ h_max,
I_i ∈ {0,1}, ∀ i,
where x, y, and h are the 3D coordinates of the drone-BS placement. The variables x_min, x_max, y_min, and y_max represent the limits of the area coordinates, and h_min and h_max are the minimum and maximum allowed altitudes of the drone-BS, respectively. The maximum height of a drone-BS depends on its type, size, weight, battery capacity, and other features. It may also be limited by regulatory laws. Several organizations, such as the US Federal Aviation Administration (FAA), Transport Canada, and the Canadian Aviation Regulation Advisory Council (CARAC), are working to coordinate such laws <cit.>. The variable α_i is a coefficient related to user i, and it is determined based on the system, whether it is network-centric or user-centric. It also depends on the metric that is used to identify a user's priority. These concepts will be explained in more detail later in this section. We propose a centralized solution for finding the best 3D placement of a drone-BS by assuming that the global view of the network is available at a central controller. This can be implemented in the presence of a software-defined networking (SDN) architecture, which decouples the control plane from the data plane. Using this approach, we find the best 3D placement of a drone-BS that maximizes the number of users served with higher priority through an exhaustive search. At each candidate coordinate of the drone-BS, the problem can be transformed to the binary integer linear program (BILP) given below, which can then be solved through the branch-and-bound method: max_{{I_i}} ∑_i=1^N_U α_i · I_i subject to: (<ref>), (<ref>), (<ref>), and (<ref>). We consider an urban region with a total area of 16 km^2. For the user distribution, we suppose that the parents, which represent cluster heads, are created by a Poisson process with an average density of 10^-7 per m^2, and the daughters, which represent users, follow another Poisson distribution with an average of 90 users per cluster. The cluster radius is taken as 700 meters. The step size of the search for the 3D location of the drone-BS is 100 meters. The urban environment parameters and the simulation parameters are provided in Tables <ref> and <ref>, respectively. We assume that the rate requirement r_i of user i is randomly selected from ℛ (r_i ∈ ℛ), as indicated in Table <ref>. Matlab software is used to carry out the simulations.
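As an illustration of the per-candidate-location BILP, the following sketch solves the user-selection problem with SciPy's milp function (available in SciPy 1.9 and later); the instance data are randomly generated stand-ins, not the paper's simulation parameters:

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

rng = np.random.default_rng(1)
N_U = 50                                   # unserved users (stand-in value)
r = rng.choice([0.5, 1.0, 2.0], size=N_U)  # rate demands in Mbps (assumed set R)
zeta = rng.uniform(0.5, 4.0, size=N_U)     # spectral efficiencies log2(1+SNR)
b = r / zeta                               # per-user bandwidth demands
covered = rng.random(N_U) < 0.8            # PL_i <= PL_max feasibility mask
R_bh, B_tot = 20.0, 10.0                   # backhaul peak rate / total bandwidth

alpha = np.ones(N_U)                       # network-centric; use alpha = r for
                                           # the user-centric (sum-rate) variant

# max alpha^T I  ==  min (-alpha)^T I, subject to r^T I <= R, b^T I <= B,
# I_i = 0 when user i is outside coverage, I_i binary.
cons = LinearConstraint(np.vstack([r, b]), ub=[R_bh, B_tot])
bnds = Bounds(lb=0, ub=covered.astype(float))   # forces I_i = 0 if uncovered
res = milp(c=-alpha, constraints=cons, bounds=bnds, integrality=np.ones(N_U))
print(int(-res.fun), "users served at this candidate (x, y, h)")
```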
§.§ Network-Centric versus User-Centric The network may select the users based on either a network-centric or a user-centric approach. In the network-centric approach, the network tries to serve as many users as possible, regardless of their rate requirements. As a result, the majority of the served users are the ones who need lower data rates. In this approach, α_i in (<ref>) is equal to 1 for all users. In the user-centric approach, the values of α_i vary with the users, and they are determined based on the priority of the users. A large number of existing and future applications may require differentiation among users and applications; therefore, offering service only to the users with low rates would not be fair. There are different metrics, such as the sum-rate, price differentiation, signal strength, and content demand, to identify user priorities. These metrics are explained below: §.§.§ Sum-Rate One method of selecting users is to maximize the total sum-rate. In this way, by setting α_i equal to r_i, the users who require higher data rates are given higher priority to access the network. In this paper, we use this metric in the user-centric approach. §.§.§ Price Differentiation Users may be categorized based on how much they are willing to pay for their subscribed services, for instance, as platinum, gold, and silver users. The platinum users, who pay more, want to be connected to the network under almost every condition, even if their channel is poor or they need a large amount of resources. By assigning a large value of α_i to such users, the service provider makes sure that they are served. §.§.§ Signal Strength The selection of users can be based on their received signal strength, so the operator first serves the ones who have favorable channel conditions. §.§.§ Content Demand In content-aware systems, the users who need to access the network urgently, based on their required content, are given higher priority. The user distribution and the 3D placement of a drone-BS in the network-centric and user-centric approaches are shown in Figs. <ref> and <ref>, respectively. It is observed that in both approaches the drone-BS moves to the highest possible altitude (h_max) to cover a larger area. As seen in this figure, more users are served in the network-centric approach than in the user-centric approach. There is a license fee related to spectrum usage that a service provider has to pay, which is based on how much bandwidth per person is utilized over a geographical area <cit.>. Therefore, the network-centric approach may be a more favorable option for a service provider, as it pays less for spectrum usage. In Fig. <ref> the CDF of the required rates of the served users for both approaches is depicted. As seen in this figure, the CDF curve of the network-centric approach lies above the user-centric one, meaning that in the former there is a higher probability of serving users with lower rates. Therefore, in total, more users are served in the network-centric approach, as was seen earlier in Fig. <ref>. §.§ Backhaul Limitation The backhaul link in a wireless system may be dedicated or in-band. §.§.§ Dedicated Backhaul A dedicated backhaul may be an FSO or mmWave link between the access and core networks. Such links can provide very high backhaul capacity, but they are very sensitive to weather conditions; in foggy or rainy weather, the peak rate may dramatically decrease <cit.>. §.§.§ In-band Backhaul Currently, in LTE, Wi-Fi, WiMAX, and HSPA networks, the main technology used for wireless backhaul links is RF microwave <cit.>. Microwave backhaul can be deployed very quickly at a relatively low cost.
By using RF for backhaul, the same spectrum is used in both the access and backhaul links, so it causes interference, and the capacity of the backhaul connection is affected. Fig. <ref> compares the number of served users versus different wireless backhaul peak rates of a drone-BS in the network-centric and user-centric approaches. This range of wireless backhaul rates represents the various rates of different types of wireless links. As seen in this figure, low backhaul rates can severely limit the number of served users. By increasing the backhaul capacity, the number of served users increases differently in the two scenarios. It stops increasing when the backhaul capacity is around 150 Mbps, as there is no more spectrum resource in the drone-BS to serve more users. The speed of increase in the number of served users is almost fixed in the user-centric approach (see the fixed slope of the yellow dashed line in Fig. <ref>), while it is decreasing in the network-centric approach (see the decreasing slope of the blue dashed line in Fig. <ref>). The fixed slope in the user-centric approach is due to the fact that, in this scenario, high rate users are served first, and when the wireless backhaul capacity increases, low rate users receive service, so the amount of increase in the number of served users remains fixed. In the network-centric approach, the slope is not fixed, because low rate users are served first in this scenario; therefore, only a few high rate users get service as the backhaul capacity increases, and the increment is reduced at each step of increasing the backhaul capacity. §.§ Robustness Mobile drone-BSs change the radio channel persistently, so highly complicated interference management and resource allocation schemes are required. Moreover, constant movements of a drone-BS consume a lot of battery power and decrease the flight time. Hence, if a drone-BS flies to a predetermined good position and is not required to change its place constantly due to user movements, this will result in energy savings and a reduction in complexity. Fig. <ref> shows the impact of user movements on the performance of the network if the drone-BS stays in its position. As seen, by increasing the movement distance, the number of served users decreases, but this reduction is not significant, and as Fig. <ref> demonstrates, a very low percentage of users would be dropped from the network if they move. For instance, if the users move within 100 meters, less than 2% of them in the network-centric approach and less than 1% in the user-centric approach would be disconnected. Therefore, the solution is robust. If a drone-BS flies to a suitable place, it can stay there for a while, unless the network reaches a particular pre-determined user drop-out threshold. § CONCLUSION In this paper, the optimal 3D placement of a drone-BS over an urban area with users having different rate requirements is investigated. The wireless backhaul peak rate and the bandwidth of the drone-BS are considered as the limiting factors in both the network-centric and user-centric approaches in a typical HetNet. The network-centric approach maximizes the total number of served users regardless of their required rates, while the user-centric approach maximizes their sum-rate. Our investigation also shows that only a small percentage of the total served users would be in outage when the users move. This highlights the robustness of the proposed algorithm against modest movements of the users (within a few meters). IEEEtran
http://arxiv.org/abs/1702.08395v2
{ "authors": [ "Elham Kalantari", "Muhammad Zeeshan Shakir", "Halim Yanikomeroglu", "Abbas Yongacoglu" ], "categories": [ "cs.NI" ], "primary_category": "cs.NI", "published": "20170227174239", "title": "Backhaul-aware Robust 3D Drone Placement in 5G+ Wireless Networks" }
Tensor Balancing on Statistical Manifold Mahito Sugiyama National Institute of Informatics JST, PRESTO Hiroyuki Nakahara RIKEN Brain Science Institute Koji Tsuda The University of Tokyo RIKEN AIP; NIMS December 30, 2023 ===================================================================================================================================================================================================================== § INTRODUCTION Let S_n denote the group of all permutations of {1, …, n}. That is, S_n is the set of all one-to-one maps σ:{1, …, n}→{1, …, n} under composition. If σ = σ_1 …σ_n ∈ S_n, then we let Des(σ) = {i: σ_i > σ_i+1} and des(σ) = |Des(σ)|. We say that σ_j is a left-to-right minimum of σ if σ_i > σ_j for all i < j, and we let LRmin(σ) denote the number of left-to-right minima of σ. For example, the left-to-right minima of σ = 938471625 are 9, 3 and 1. Given a sequence τ = τ_1 ⋯τ_n of distinct positive integers, we define the reduction of τ, red(τ), to be the permutation of S_n that results by replacing the i-th smallest element of τ by i. For example, red(53962) = 32541. If Γ is a set of permutations, we say that a permutation σ = σ_1 …σ_n ∈ S_n has a Γ-match starting at position i if there is a j > i such that red(σ_i σ_i+1 …σ_j) ∈ Γ. We let Γ-mch(σ) denote the number of Γ-matches in σ, and we let 𝒩ℳ_n(Γ) be the set of σ ∈ S_n such that Γ-mch(σ) = 0. The main goal of this paper is to study generating functions of the form NM_Γ(t,x,y) = ∑_n ≥ 0 (t^n/n!) NM_Γ,n(x,y) where NM_Γ,n(x,y) = ∑_σ∈𝒩ℳ_n(Γ) x^LRmin(σ) y^1+des(σ). In the special case where Γ = {τ} is a set with a single permutation τ, we shall write τ-mch(σ) for Γ-mch(σ), NM_τ(t,x,y) for NM_Γ(t,x,y), and NM_τ,n(x,y) for NM_Γ,n(x,y). There is a considerable literature on the generating function NM_Γ(t,1,1) of permutations that consecutively avoid a pattern or set of patterns. See, for example, <cit.>. For the most part, these papers do not consider generating functions of the form NM_τ(t,1,y) or NM_τ(t,x,y). An exception is the work on enumeration schemes of Baxter <cit.>, who gave general methods to enumerate permutations avoiding vincular patterns according to various permutation statistics. Our approach is to use the reciprocity method of Jones and Remmel. Jones and Remmel <cit.> developed what they called the reciprocity method to compute the generating function NM_τ(t,x,y) for certain families of permutations τ such that τ starts with 1 and des(τ) = 1. The basic idea of their approach is as follows. First, it follows from results in <cit.> that if all the permutations in Γ start with 1, then we can write NM_Γ(t,x,y) in the form NM_Γ(t,x,y) = (1/U_Γ(t,y))^x where U_Γ(t,y) = ∑_n≥ 0 U_Γ,n(y) t^n/n!. Next one writes U_τ(t,y) = 1/(1 + ∑_n ≥ 1 NM_τ,n(1,y) t^n/n!). One can then use the homomorphism method to give a combinatorial interpretation of the right-hand side of (<ref>) which can be used to find simple recursions for the coefficients U_τ,n(y). The homomorphism method derives generating functions for various permutation statistics by applying a ring homomorphism, defined on the ring of symmetric functions Λ in infinitely many variables x_1,x_2, …, to simple symmetric function identities such as H(t) = 1/E(-t), where H(t) and E(t) are the generating functions for the homogeneous and elementary symmetric functions, respectively. That is, H(t) = ∑_n≥ 0 h_n t^n = ∏_i≥ 1 1/(1-x_it) and E(t) = ∑_n≥ 0 e_n t^n = ∏_i≥ 1 (1+x_it). In their case, Jones and Remmel defined a homomorphism θ_τ on Λ by setting θ_τ(e_n) = ((-1)^n/n!) NM_τ,n(1,y). Then θ_τ(E(-t)) = ∑_n≥0 NM_τ,n(1,y) t^n/n! = 1/U_τ(t,y).
Hence U_τ(t,y) = 1/θ_τ(E(-t)) = θ_τ(H(t)), which implies that n!θ_τ(h_n) = U_τ,n(y). Thus if we can compute n!θ_τ(h_n) for all n ≥ 1, then we can compute the polynomials U_τ,n(y) and the generating function U_τ(t,y), which in turn allows us to compute the generating function NM_τ(t,x,y). Jones and Remmel <cit.> showed that one can interpret n!θ_τ(h_n) as a certain signed sum of weights of filled labeled brick tabloids when τ starts with 1 and des(τ)=1. They then defined a weight-preserving, sign-reversing involution I on the set of such filled labeled brick tabloids which allowed them to give a relatively simple combinatorial interpretation for n!θ_τ(h_n). They also showed how such a combinatorial interpretation allowed them to prove that the polynomials U_τ,n(y) satisfy simple recursions for certain families of such permutations τ. For example, in <cit.>, Jones and Remmel studied the generating functions NM_τ(t,x,y) for permutations τ of the form τ = 1324⋯p where p ≥ 4. Using the reciprocity method, they proved that U_1324,1(y) = -y and, for n ≥ 2, U_1324,n(y) = (1-y)U_1324,n-1(y) + ∑_k=2^⌊ n/2 ⌋ (-y)^k-1 C_k-1 U_1324,n-2k+1(y), where C_k = (1/(k+1))\binom{2k}{k} is the k-th Catalan number. They also proved that, for any p ≥ 5, U_1324⋯p,1(y) = -y and for n ≥ 2, U_1324⋯p,n(y) = (1-y)U_1324⋯p,n-1(y) + ∑_k=2^⌊(n-2)/(p-2)⌋+1 (-y)^k-1 U_1324⋯p,n-((k-1)(p-2)+1)(y). Bach and Remmel <cit.> extended this reciprocity method to study the polynomials U_Γ,n(y) in the case where Γ is a set of permutations such that for all τ∈Γ, τ starts with 1 and des(τ) ≤ 1. For example, suppose that k_1, k_2 ≥ 2, p = k_1 + k_2, and Γ_k_1,k_2 = {σ∈ S_p: σ_1=1, σ_k_1+1=2, σ_1 < σ_2 < ⋯ < σ_k_1 & σ_k_1+1 < σ_k_1+2 < ⋯ < σ_p}. That is, Γ_k_1,k_2 consists of all permutations σ of length p where 1 is in position 1, 2 is in position k_1+1, and σ consists of two increasing sequences, one starting at 1 and the other starting at 2. In <cit.>, we proved that for Γ = Γ_k_1,k_2, U_Γ,1(y) = -y, and for n ≥ 2, U_Γ,n(y) = (1-y)U_Γ,n-1(y) - y\binom{n-2}{k_1-1}( U_Γ,n-M(y) + y∑_i=1^m-1 U_Γ,n-M-i(y) ), where m = min{k_1, k_2} and M = max{k_1,k_2}. Furthermore, in <cit.>, we investigated a new phenomenon that arises when we add the identity permutation 12…k to the family Γ. For example, if Γ = {1324,123}, then we proved that U_Γ,1(y) = -y, and for n ≥ 2, U_Γ,n(y) = -yU_Γ,n-1(y) - yU_Γ,n-2(y) + ∑_k=2^⌊ n/2 ⌋ (-y)^k C_k-1 U_Γ,n-2k(y). When Γ = {1324…p, 123…(p-1)} where p ≥ 5, then we proved that U_Γ,1(y) = -y, and for n ≥ 2, U_Γ,n(y) = ∑_k=1^p-2 (-y)U_Γ,n-k(y) + ∑_k=1^p-2 ∑_m=2^⌊(n-k)/(p-2)⌋ (-y)^m U_Γ,n-k-(m-1)(p-2)(y). While on the surface the recursions (<ref>) and (<ref>) do not seem to be simpler than the corresponding recursions (<ref>) and (<ref>), they are easier to analyze because adding an identity permutation 12…k to Γ ensures that all the bricks in the filled brick tabloids used to interpret n!θ_τ(h_n) have length less than k. For example, we were able to prove the following explicit formula for the polynomials U_{1324,123},n(y). Let Γ = {1324,123}. Then for all n ≥ 0, U_Γ,2n(y) = ∑_k=0^n ((2k+1)/(n+k+1))\binom{2n}{n-k}(-y)^n+k and U_Γ,2n+1(y) = ∑_k=0^n (2(k+1)/(n+k+2))\binom{2n+1}{n-k}(-y)^n+k+1. Another example in <cit.> where we could find an explicit formula is the following. Let Γ_k_1,k_2,s = Γ_k_1,k_2 ∪ {1⋯s(s+1)} for some s ≥ max(k_1,k_2). Bach and Remmel showed that U_Γ_2,2,s,1(y) = -y, and for n ≥ 2, U_Γ_2,2,s,n(y) = -yU_Γ_2,2,s,n-1(y) - ∑_k=0^s-2 ((n-k-1) yU_Γ_2,2,s,n-k-2(y) + (n-k-2) y^2 U_Γ_2,2,s,n-k-3(y)).
Using these recursions, we proved that U_Γ_2,2,2,2n(y) = ∑_i=0^n (2n-1)↓↓_n-i (-y)^n+i and U_Γ_2,2,2,2n+1(y) = ∑_i=0^n (2n)↓↓_n-i (-y)^n+1+i, where for any x, (x)↓↓_0 = 1 and (x)↓↓_k = x(x-2)(x-4)⋯(x-2k+2) for k ≥ 1. The two assumptions on Γ that allow the reciprocity method to work are that (A) all τ in Γ start with 1 and (B) all τ in Γ have at most one descent. First, assumption (A) ensures that we can write NM_Γ(t,x,y) in the form (<ref>). Second, assumption (B) ensures that the map I used to simplify the weighted sum over all filled, labeled brick tabloids that equals n!θ_τ(h_n) is actually an involution, and that the elements in any brick of a filled, labeled brick tabloid which is a fixed point of I must be increasing. Finally, (A) is used again to ensure that the minimal elements in the bricks of any fixed point of I are increasing when read from left to right. The main goal of this paper is to study how we can apply the reciprocity method in the case where we no longer insist that all the τ ∈ Γ have at most one descent. We shall show that we can modify the definition of the involution used by Jones and Remmel <cit.> and Bach and Remmel <cit.> to simplify the weighted sum over all filled, labeled brick tabloids that equals n!θ_τ(h_n). However, the set of fixed points in such cases will be more complicated than in the case where Γ contains only permutations with at most one descent, in that it will no longer be the case that, for fixed points of the involution, the fillings are increasing in bricks and the minimal elements of the bricks increase, reading from left to right. Nevertheless, we shall show that there are still a number of cases where we can successfully analyze the fixed points to prove that the polynomials U_Γ,n(y) satisfy simple recursions. In this paper, we shall prove three main theorems. That is, we will compute the generating functions NM_Γ(t,x,y) when Γ = {14253,15243}, when Γ = {142536}, and when Γ = {τ_a} for any a ≥ 2, where τ_a ∈ S_2a is the permutation such that τ_1 τ_3 …τ_2a-1 = 12…a and τ_2 τ_4 …τ_2a = (2a)(2a-1)…(a+1). In each case, the permutations have at least two descents. In <cit.>, we studied generating functions of the form NM_τ(t,x,y) where τ is a minimal overlapping permutation that starts with 1. Here τ ∈ S_j is a minimal overlapping permutation if the smallest n such that there exists a σ ∈ S_n with τ-mch(σ) = 2 is 2j-1. This means that any two consecutive τ-matches can share at most one letter. When τ is a minimal overlapping permutation, the recursions for U_τ,n(y) are generally much simpler than the ones considered in this paper because here, in each case, we are dealing with permutations which are not minimally overlapping. The main results of this paper are the following theorems. Let Γ = {14253,15243}. Then NM_Γ(t,x,y) = (1/U_Γ(t,y))^x where U_Γ(t,y) = 1 + ∑_n≥1 U_Γ,n(y) t^n/n!, with U_Γ,1(y) = -y, and for n ≥ 2, U_Γ,n(y) = (1-y)U_Γ,n-1(y) - y^2(n-3)(U_Γ,n-4(y) + (1-y)(n-5)U_Γ,n-5(y)) - y^3(n-3)(n-5)(n-6)U_Γ,n-6(y).
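For concreteness, the recursion of Theorem <ref> can be iterated symbolically. The short sketch below is our own illustration, with the conventions U_Γ,0(y) = 1 and U_Γ,m(y) = 0 for m < 0 assumed for out-of-range indices:

```python
from sympy import symbols, expand

y = symbols('y')

def U(n, memo={0: 1}):
    """U_{Gamma,n}(y) for Gamma = {14253, 15243} via the Theorem 1
    recursion; U_0 = 1 and U_m = 0 for m < 0 are our conventions."""
    if n < 0:
        return 0
    if n == 1:
        return -y
    if n not in memo:
        memo[n] = expand((1 - y) * U(n - 1)
                         - y**2 * (n - 3) * (U(n - 4) + (1 - y) * (n - 5) * U(n - 5))
                         - y**3 * (n - 3) * (n - 5) * (n - 6) * U(n - 6))
    return memo[n]

for n in range(2, 7):
    print(n, U(n))   # the first few polynomials U_{Gamma,n}(y)
```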
Let C_n = (1/(n+1))\binom{2n}{n} be the n-th Catalan number. Let M_k be the k×k matrix whose entries on the main diagonal equal C_2, whose entries on the j-th diagonal above the main diagonal are C_3j+2, whose entries on the subdiagonal are -1, and whose entries below the subdiagonal are 0. Thus,
M_k =
C_2 C_5 C_8 C_11 ⋯ C_3k-4 C_3k-1
-1 C_2 C_5 C_8 ⋯ C_3k-7 C_3k-4
0 -1 C_2 C_5 ⋯ C_3k-10 C_3k-7
0 0 -1 C_2 ⋯ C_3k-13 C_3k-10
⋮ ⋮ ⋮ ⋮ ⋮ ⋮
0 0 0 0 ⋯ C_2 C_5
0 0 0 0 ⋯ -1 C_2 .
Let P_k be the matrix obtained from M_k by replacing each C_m in the last column by C_m-1. Thus,
P_k =
C_2 C_5 C_8 C_11 ⋯ C_3k-4 C_3k-2
-1 C_2 C_5 C_8 ⋯ C_3k-7 C_3k-5
0 -1 C_2 C_5 ⋯ C_3k-10 C_3k-8
0 0 -1 C_2 ⋯ C_3k-13 C_3k-11
⋮ ⋮ ⋮ ⋮ ⋮ ⋮
0 0 0 0 ⋯ C_2 C_4
0 0 0 0 ⋯ -1 C_1 .
Let τ = 142536. Then NM_τ(t,x,y) = (1/U_τ(t,y))^x where U_τ(t,y) = 1 + ∑_n≥1 U_τ,n(y) t^n/n!, with U_τ,1(y) = -y, and for n ≥ 2, U_τ,n(y) = (1-y)U_τ,n-1(y) + ∑_k=0^⌊(n-8)/6⌋ det(M_k+1) y^3k+3 U_τ,n-6k-7(y) + ∑_k=0^⌊(n-6)/6⌋ det(P_k+1)(-y^3k+2)[U_τ,n-6k-4(y) + yU_τ,n-6k-5(y)]. For any a ≥ 2, let τ = τ_1 …τ_2a ∈ S_2a where τ_1 τ_3 …τ_2a-1 = 123…a and τ_2 τ_4 …τ_2a = (2a)(2a-1)…(a+1). Then NM_τ(t,x,y) = (1/U_τ(t,y))^x where U_τ(t,y) = 1 + ∑_n≥1 U_τ,n(y) t^n/n!, with U_τ,1(y) = -y, and for n ≥ 2, U_τ,n(y) = (1-y)U_τ,n-1(y) - ∑_k=0^⌊(n-2a)/(2a)⌋ \binom{n-(k+1)a-1}{(k+1)a-1} y^(k+1)a-1 U_τ,n-2(k+1)a+1(y) + ∑_k=0^⌊(n-2a-2)/(2a)⌋ \binom{n-(k+1)a-2}{(k+1)a} y^(k+1)a U_τ,n-2(k+1)a-1(y). We note that our results allow us to compute NM_τ(t,x,y) in two of the cases where τ = τ_1 …τ_6 with τ_1 = 1, τ_3 = 2, and τ_5 = 3. Namely, the case where τ = 162534 is considered in Theorem <ref>, and the case where τ = 142536 is a special case of Theorem <ref>. All such permutations have des(τ) = 2. In fact, the first author, in his thesis, has computed NM_τ(t,x,y) in the other four cases where τ = τ_1 …τ_6 with τ_1 = 1, τ_3 = 2, and τ_5 = 3, which we will not present here due to lack of space. The outline of this paper is the following. In Section <ref>, we shall provide the necessary background on symmetric functions for our applications. In Section <ref>, we shall recall the basic reciprocity method of <cit.> and <cit.> in the case where the permutations of Γ are allowed to have more than one descent. In Section <ref>, we shall prove Theorem <ref>. In Section <ref>, we shall prove Theorem <ref>. Finally, in Section <ref>, we shall prove Theorem <ref>. § SYMMETRIC FUNCTIONS In this section, we give the necessary background on symmetric functions that will be used in our proofs. A partition of n is a sequence of positive integers λ = (λ_1, …, λ_s) such that 0 < λ_1 ≤ ⋯ ≤ λ_s and n = λ_1 + ⋯ + λ_s. We shall write λ ⊢ n to denote that λ is a partition of n and we let ℓ(λ) denote the number of parts of λ. When a partition of n involves repeated parts, we shall often use exponents in the partition notation to indicate these repeated parts. For example, we will write (1^2,4^5) for the partition (1,1,4,4,4,4,4). Let Λ denote the ring of symmetric functions in infinitely many variables x_1,x_2, …. The elementary symmetric function e_n = e_n(x_1,x_2, …) and the homogeneous symmetric function h_n = h_n(x_1,x_2, …) are defined by the generating functions given in (<ref>). For any partition λ = (λ_1,…,λ_ℓ), let e_λ = e_λ_1⋯e_λ_ℓ and h_λ = h_λ_1⋯h_λ_ℓ. It is well known that e_0,e_1, … is an algebraically independent set of generators for Λ, and hence a ring homomorphism θ on Λ can be defined by simply specifying θ(e_n) for all n. If λ = (λ_1, …, λ_k) is a partition of n, then a λ-brick tabloid of shape (n) is a filling of a rectangle consisting of n cells with bricks of sizes λ_1, …, λ_k in such a way that no two bricks overlap. For example, Figure <ref> shows the six (1^2,2^2)-brick tabloids of shape (6). [Figure <ref>: The six (1^2,2^2)-brick tabloids of shape (6).] Let ℬ_λ,n denote the set of λ-brick tabloids of shape (n) and let B_λ,n be the number of λ-brick tabloids of shape (n). If B ∈ ℬ_λ,n, we will write B = (b_1, …, b_ℓ(λ)) if the lengths of the bricks in B, reading from left to right, are b_1, …, b_ℓ(λ). For example, the brick tabloid in the top right position in Figure <ref> is denoted as (2,1,1,2).
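As a quick illustration (our own, with hypothetical helper names), the number B_λ,n of λ-brick tabloids of shape (n) can be obtained by listing the distinct left-to-right orderings of the multiset of brick lengths; for λ = (1^2,2^2) this reproduces the six tabloids of Figure <ref>:

```python
from itertools import permutations

def brick_tabloids(lam):
    """All lambda-brick tabloids of shape (n), n = sum(lam), viewed as
    distinct left-to-right sequences of brick lengths."""
    return sorted(set(permutations(lam)))

tabloids = brick_tabloids((1, 1, 2, 2))
print(len(tabloids))     # 6, matching Figure 1
for t in tabloids:
    print(t)             # e.g. (2, 1, 1, 2) is the top right tabloid
```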
Eğecioğlu and Remmel <cit.> proved that h_n = ∑_λ⊢n (-1)^n-ℓ(λ) B_λ,n e_λ. This interpretation of the expansion of h_n in terms of the e_λs will aid us in describing the coefficients of θ_Γ(H(t)) = U_Γ(t,y) described in the next section, which will in turn allow us to compute the coefficients NM_Γ,n(x,y). § EXTENDING THE RECIPROCITY METHOD Let Γ be a set of permutations that all start with 1 such that there is a k ≥ 1 for which all σ ∈ Γ have des(σ) ≤ k and there is at least one τ ∈ Γ such that des(τ) = k. We want to give a combinatorial interpretation to U_Γ(t,y) = 1/NM_Γ(t,1,y) = 1/(1 + ∑_n ≥ 1 (t^n/n!) NM_Γ,n(1,y)), where NM_Γ,n(1,y) = ∑_σ∈𝒩ℳ_n(Γ) y^1+des(σ). We define a ring homomorphism θ_Γ on the ring of symmetric functions Λ by setting θ_Γ(e_0) = 1 and, for n ≥ 1, θ_Γ(e_n) = ((-1)^n/n!) NM_Γ,n(1,y). It then follows that θ_Γ(H(t)) = ∑_n ≥ 0 θ_Γ(h_n) t^n = 1/θ_Γ(E(-t)) = 1/(1 + ∑_n ≥ 1 (-t)^n θ_Γ(e_n)) = 1/(1 + ∑_n ≥ 1 (t^n/n!) NM_Γ,n(1,y)) = U_Γ(t,y). Thus U_Γ,n(y) = n! θ_Γ(h_n). Using (<ref>), we can compute
n! θ_Γ(h_n) = n! ∑_λ⊢n (-1)^n-ℓ(λ) B_λ,n θ_Γ(e_λ)
= n! ∑_λ⊢n (-1)^n-ℓ(λ) ∑_(b_1, …,b_ℓ(λ)) ∈ ℬ_λ,n ∏_i=1^ℓ(λ) ((-1)^b_i/b_i!) NM_Γ,b_i(1,y)
= ∑_λ⊢n (-1)^ℓ(λ) ∑_(b_1, …, b_ℓ(λ)) ∈ ℬ_λ,n \binom{n}{b_1, …, b_ℓ(λ)} ∏_i=1^ℓ(λ) NM_Γ,b_i(1,y).
To give a combinatorial interpretation to the right-hand side of (<ref>), we select a brick tabloid B = (b_1, b_2, …, b_ℓ(λ)) of shape (n) filled with bricks whose sizes induce the partition λ. We interpret the multinomial coefficient \binom{n}{b_1, …, b_ℓ(λ)} as the number of ways to choose an ordered set partition 𝒮 = (S_1, S_2, …, S_ℓ(λ)) of {1,2, …, n} such that |S_i| = b_i, for i = 1, …, ℓ(λ). For each brick b_i, we then fill the cells of b_i with numbers from S_i such that the entries in the brick reduce to a permutation σ^(i) = σ_1 ⋯σ_b_i in 𝒩ℳ_b_i(Γ). We label each descent of σ^(i) that occurs within each brick, as well as the last cell of each brick, by y. This accounts for the factor y^1+des(σ^(i)) within each brick. Finally, we use the factor (-1)^ℓ(λ) to change the label of the last cell of each brick from y to -y. We will denote the filled labeled brick tabloid constructed in this way as ⟨ B,𝒮,(σ^(1), …, σ^(ℓ(λ)))⟩. For example, when n = 19, Γ = {1324, 1423, 12345}, and B = (9,3,5,2), consider the ordered set partition 𝒮 = (S_1,S_2,S_3,S_4) of {1,2,…, 19} where S_1 = {2,5,6,9,11,15,16,17,19}, S_2 = {7,8,14}, S_3 = {1,3,10,13,18}, S_4 = {4,12}, and the permutations σ^(1) = 1 2 4 6 5 3 7 9 8 ∈ 𝒩ℳ_9(Γ), σ^(2) = 1 3 2 ∈ 𝒩ℳ_3(Γ), σ^(3) = 5 1 2 4 3 ∈ 𝒩ℳ_5(Γ), and σ^(4) = 2 1 ∈ 𝒩ℳ_2(Γ). Then the construction of ⟨ B,𝒮,(σ^(1), …, σ^(4))⟩ is pictured in Figure <ref>. It is easy to see that we can recover the triple ⟨ B, (S_1, …, S_ℓ(λ)), (σ^(1), …, σ^(ℓ(λ)))⟩ from B and the permutation σ which is obtained by reading the entries in the cells from left to right. We let 𝒪_Γ,n denote the set of all filled labeled brick tabloids created this way. That is, 𝒪_Γ,n consists of all pairs O = (B,σ) where * B = (b_1, b_2, …, b_ℓ(λ)) is a brick tabloid of shape (n), * σ = σ_1⋯σ_n is a permutation in S_n such that there is no Γ-match of σ which lies entirely in a single brick of B, and * if there is a cell c such that a brick b_i contains both cells c and c+1 and σ_c > σ_c+1, then cell c is labeled with a y, and the last cell of any brick is labeled with -y. We define the sign of each O to be sgn(O) = (-1)^ℓ(λ). The weight W(O) of O is defined to be the product of all the labels y used in the bricks. For example, the labeled brick tabloid pictured in Figure <ref> has W(O) = y^11 and sgn(O) = (-1)^4 = 1. It follows that n!θ_Γ(h_n) = ∑_O ∈𝒪_Γ,n sgn(O) W(O).
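The identity U_Γ,n(y) = n!θ_Γ(h_n) can be checked directly for small n. The sketch below is our own brute-force verification, not part of the paper: it computes NM_Γ,n(1,y) for Γ = {14253, 15243} by enumerating permutations and then recovers U_Γ,n(y) by inverting the series, so the output can be compared with the recursion of Theorem <ref>:

```python
from itertools import permutations
from sympy import symbols, expand, factorial

y, t = symbols('y t')

GAMMA = {(1, 4, 2, 5, 3), (1, 5, 2, 4, 3)}   # Gamma = {14253, 15243}

def red(seq):
    order = sorted(seq)
    return tuple(order.index(v) + 1 for v in seq)

def has_match(sigma):
    # Patterns in GAMMA have length 5, so only length-5 windows matter.
    return any(red(sigma[i:i + 5]) in GAMMA for i in range(len(sigma) - 4))

def NM(n):
    """NM_{Gamma,n}(1,y): sum of y^(1+des) over Gamma-match-free sigma."""
    total = 0
    for sigma in permutations(range(1, n + 1)):
        if not has_match(sigma):
            des = sum(1 for i in range(n - 1) if sigma[i] > sigma[i + 1])
            total += y ** (1 + des)
    return total

N = 6
f = 1 + sum(NM(n) * t**n / factorial(n) for n in range(1, N + 1))
U = (1 / f).series(t, 0, N + 1).removeO()     # U_Gamma(t, y) up to t^N
for n in range(1, N + 1):
    print(n, expand(U.coeff(t, n) * factorial(n)))   # U_{Gamma,n}(y)
```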
Next we define a sign-reversing, weight-preserving mapping J_Γ: 𝒪_Γ,n → 𝒪_Γ,n as follows. Let (B,σ) ∈ 𝒪_Γ,n where B = (b_1, …, b_k) and σ = σ_1 …σ_n. Then for any i, we let first(b_i) be the element in the left-most cell of b_i and last(b_i) be the element in the right-most cell of b_i. Then we read the cells of (B,σ) from left to right, looking for the first cell c such that either Case I. cell c is labeled with a y in some brick b_j and either (a) j = 1, or (b) j > 1 and either (b.1) last(b_j-1) < first(b_j) or (b.2) last(b_j-1) > first(b_j) and there is a τ-match contained in the cells of b_j-1 and the cells of b_j that ends weakly to the left of cell c, for some τ ∈ Γ; or Case II. cell c is at the end of brick b_i where σ_c > σ_c+1 and there is no Γ-match of σ that lies entirely in the cells of the bricks b_i and b_i+1. In Case I, we define J_Γ((B,σ)) to be the filled labeled brick tabloid obtained from (B,σ) by breaking the brick b_j that contains cell c into two bricks b_j' and b_j'', where b_j' contains the cells of b_j up to and including cell c, while b_j'' contains the remaining cells of b_j. In addition, we change the label of cell c from y to -y. In Case II, J_Γ((B,σ)) is obtained by combining the two bricks b_i and b_i+1 into a single brick b and changing the label of cell c from -y to y. If neither case occurs, then we let J_Γ((B,σ)) = (B,σ). For example, suppose Γ = {τ} where τ = 14253 and (B,σ) ∈ 𝒪_Γ,18 is as pictured at the top of Figure <ref>. We cannot use cell c = 4 to define J_Γ((B,σ)), because if we combined bricks b_1 and b_2, then red(9 15 11 16 13) = τ would be a τ-match contained in the resulting brick. Similarly, we cannot use cell c = 6 to apply the involution because it fails to meet condition (b.2). In fact, the first c for which either Case I or Case II applies is cell c = 8, so that J_Γ((B,σ)) is equal to the (B',σ) pictured on the bottom of Figure <ref>. We now prove that J_Γ is an involution by showing that J_Γ^2 is the identity mapping. Let (B,σ) ∈ 𝒪_Γ,n where B = (b_1, …, b_k) and σ = σ_1 …σ_n. The key observation here is that applying the mapping J_Γ to a configuration in Case I will produce one in Case II, and vice versa. Suppose the filled, labeled brick tabloid (B,σ) is in Case I and its image J_Γ((B,σ)) is obtained by splitting some brick b_j after cell c into two bricks b_j' and b_j''. There are now two possibilities. (a) c is in the first brick b_1. In this case, c must be the first cell which is labeled with y, so that the elements in b_1' will be increasing. Furthermore, since we are assuming there is no Γ-match in the cells of brick b_1 in (B,σ), there cannot be any Γ-match contained in the cells of bricks b_1' and b_1'' in J_Γ((B,σ)). Hence, when we consider J_Γ((B,σ)), the first possible cell where we can apply J_Γ will be cell c, because we can now combine b_1' and b_1''. Thus, when we apply J_Γ to J_Γ((B,σ)), we will be in Case II using cell c, so that we will recombine bricks b_1' and b_1'' into b_1 and replace the label of -y on cell c by y. Hence J_Γ(J_Γ((B,σ))) = (B,σ) in this case. (b) c is in brick b_j, where j > 1. Note that our definition of when a cell labeled y can be used in Case I to define J_Γ depends only on the cells and the brick structure to the left of that cell. Hence, we cannot use any of the cells labeled y to the left of c to define J_Γ on J_Γ((B,σ)). Similarly, if we have two bricks b_s and b_s+1 which lie entirely to the left of cell c such that last(b_s) = σ_d > first(b_s+1) = σ_d+1, the criterion for using cell d in the definition of J_Γ on J_Γ((B,σ)) depends only on the elements in bricks b_s and b_s+1.
Thus, the only cell d which we could possibly use to define J_Γ on J_Γ((B,σ)) that lies to the left of c is the last cell of b_{j-1}. However, our conditions that either last(b_{j-1}) < first(b_j) = first(b_j') or last(b_{j-1}) > first(b_j) = first(b_j') with a Γ-match contained in the cells of b_{j-1} and b_j' force the first cell that can be used to define J_Γ on J_Γ((B,σ)) to be cell c. Thus, when we apply J_Γ to J_Γ((B,σ)), we will be in Case II using cell c and we will recombine bricks b_j' and b_j'' into b_j and replace the label of -y on cell c by y. Thus J_Γ(J_Γ((B,σ))) = (B,σ) in this case.

Suppose (B,σ) is in Case II and we define J_Γ((B,σ)) at cell c, where c is the last cell of b_j and σ_c > σ_{c+1}. Then by the same arguments that we used in Case I, there can be no cell labeled y to the left of this cell c in either (B,σ) or J_Γ((B,σ)) which can be used to define the involution J_Γ. This follows from the fact that the brick structure before cell c is unchanged between (B,σ) and J_Γ((B,σ)). Similarly, there can be no two bricks that lie entirely to the left of cell c in J_Γ((B,σ)) that can be combined under J_Γ. Thus, the first cell that we can use to define J_Γ on J_Γ((B,σ)) is cell c and it is easy to check that it satisfies the conditions of Case I. That is, the first application of J_Γ combined bricks b_j and b_{j+1} into a single brick b and replaced the label on cell c by y, so that when we apply J_Γ to J_Γ((B,σ)), we will be in Case I using cell c, split b back into bricks b_j and b_{j+1}, and change the label on cell c back to -y. Thus J_Γ(J_Γ((B,σ))) = (B,σ) in this case. Hence J_Γ is an involution.

It is clear that if J_Γ(B,σ) ≠ (B,σ), then sgn(B,σ)W(B,σ) = -sgn(J_Γ(B,σ))W(J_Γ(B,σ)). Thus, it follows from (<ref>) that

U_{Γ,n}(y) = n! θ_Γ(h_n) = ∑_{O ∈ 𝒪_{Γ,n}} sgn(O) W(O) = ∑_{O ∈ 𝒪_{Γ,n}, J_Γ(O) = O} sgn(O) W(O).

Thus, to compute U_{Γ,n}(y), we must analyze the fixed points of J_Γ. Our next lemma characterizes the fixed points of J_Γ.

Let B = (b_1, …, b_k) be a brick tabloid of shape (n) and σ = σ_1 … σ_n ∈ S_n. Then (B,σ) is a fixed point of J_Γ if and only if it satisfies the following properties:
(a) if i = 1 or i > 1 and last(b_{i-1}) < first(b_i), then b_i can have no cell labeled y so that σ must be increasing in b_i,
(b) if i > 1 and σ_e = last(b_{i-1}) > first(b_i) = σ_{e+1}, then there must be a Γ-match contained in the cells of b_{i-1} and b_i which must necessarily involve σ_e and σ_{e+1} and there can be at most k-1 cells labeled y in b_i, and
(c) if Γ has the property that, for all τ ∈ Γ such that des(τ) = j ≥ 1, the bottom elements [If σ is a permutation with σ_i > σ_{i+1}, i.e. there is a descent in σ at position i, then we shall refer to σ_{i+1} as the bottom element of this descent.] of the descents in τ are 2, …, j+1, when reading from left to right, then first(b_1) < first(b_2) < ⋯ < first(b_k).

Suppose (B,σ) is a fixed point of J_Γ. Then it must be the case that in (B,σ), there is no cell c to which either Case I or Case II applies. That is, when attempting to apply the involution J_Γ to (B,σ), we cannot split any brick at a cell labeled y and we cannot combine two consecutive bricks where the last cell of the first brick is larger than the first cell of the second brick.

For (a), note that if there is a cell labeled y in b_i and c is the left-most cell of b_i labeled with y, then c satisfies the conditions of Case I. Thus, there can be no cell labeled y in b_i.

For (b), note that if there is no Γ-match contained in the cells of b_{i-1} and b_i, then e satisfies the conditions of Case II. Thus, there must be a Γ-match contained in the cells of b_{i-1} and b_i.
If there are k or more cells labeled y in b_i, then let c be the k-th cell, reading from left to right, which is labeled with y. Then we know there is a τ-match contained in the cells of b_{i-1} and b_i which must necessarily involve σ_e and σ_{e+1} for some τ ∈ Γ. But this τ-match must end weakly before cell c since otherwise τ would have at least k+1 descents. Thus c would satisfy the conditions to apply Case I of our involution. Hence there can be no such c, which means that each such brick can contain at most k-1 cells labeled y.

To prove (c), suppose for a contradiction that there exist two consecutive bricks b_i and b_{i+1} such that σ_e = first(b_i) > first(b_{i+1}) = σ_f. There are two cases.

Case A. σ is increasing in b_i. Then σ_{f-1} = last(b_i). If σ_{f-1} < σ_f, then we know that σ_e ≤ σ_{f-1} < σ_f which contradicts our choice of σ_e and σ_f. Thus it must be the case that σ_{f-1} > σ_f. But then there is a τ ∈ Γ such that des(τ) = j ≥ 1 and there is a τ-match in the cells of b_i and b_{i+1} involving σ_{f-1} and σ_f. By our assumptions, σ_f can only play the role of 2 in such a τ-match. Hence there must be some σ_g with e ≤ g ≤ f-2 which plays the role of 1 in this τ-match. But then we would have σ_e ≤ σ_g < σ_f which contradicts our choice of σ_e and σ_f. Thus σ cannot be increasing in b_i.

Case B. σ is not increasing in b_i. In this case, by part (a), we know that it must be the case that σ_{e-1} = last(b_{i-1}) > σ_e = first(b_i) and, by (b), there is a τ ∈ Γ such that des(τ) = j ≥ 1 and there is a τ-match in the cells of b_{i-1} and b_i involving σ_{e-1} and σ_e. Call this τ-match α and suppose that σ_h is the bottom element of the last descent in α. It cannot be that σ_e = σ_h. That is, there can be no cell labeled y that occurs after cell h in b_i since otherwise the left-most such cell c would satisfy the conditions of Case I of the definition of J_Γ. But this would mean that σ is increasing in b_i starting at σ_h so that if σ_e = σ_h, then σ would be increasing in b_i which contradicts our assumption in this case. Thus there is some 2 ≤ r ≤ j such that σ_e plays the role of r in the τ-match α and σ_h plays the role of j+1 in the τ-match α. But this means that σ_e is the smallest element in brick b_i. That is, let σ_c be the smallest element in b_i. If σ_e ≠ σ_c, then σ_c must be the bottom of some descent in b_i which implies that c ≤ h. But then σ_c is part of the τ-match α which means that σ_c must be playing the role of one of r+1, …, j+1 in the τ-match α while σ_e is playing the role of r in the τ-match α, which is impossible if σ_e ≠ σ_c. It follows that σ_e ≤ σ_{f-1}. Hence, it cannot be the case that σ_{f-1} < σ_f since otherwise σ_e < σ_f. Thus it must be the case that σ_{f-1} > σ_f. But this means that there exists some δ ∈ Γ such that des(δ) = p ≥ 1 and there is a δ-match in the cells of b_i and b_{i+1} involving σ_{f-1} and σ_f. Call this δ-match β. By assumption, the bottom elements of the descents in δ are 2, 3, …, p+1 so that σ_f must be playing the role of one of 2, 3, …, p+1 in the δ-match β. Let σ_g be the element that plays the role of 1 in the δ-match β. σ_g must be in b_i since δ must start with 1. But then we would have that σ_e ≤ σ_g < σ_f since σ_e is the smallest element in b_i. Thus, both Case A and Case B are impossible. Hence we must have that first(b_1) < first(b_2) < ⋯ < first(b_k).

We note that if the hypothesis of condition (c) of the Lemma fails, it may be that the first elements of the bricks do not form an increasing sequence. For example, it is easy to check that if Γ = {15342}, then the (B,σ) pictured in Figure <ref> is such a fixed point of J_Γ.
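To make the definitions of this section concrete, the following sketch (again our own illustration under the definitions above, not the authors' code) enumerates all pairs (B,σ) in 𝒪_{Γ,n} directly — compositions B of n together with permutations σ having no Γ-match inside a single brick — and checks that the signed weighted sum ∑ sgn(O)W(O) agrees with U_{Γ,n}(y) computed from the reciprocal series. It reuses has_match, U_polys and y from the previous sketch.

from itertools import permutations
from sympy import expand, simplify

def compositions(n):
    """All brick tabloids (b_1,...,b_l) of shape (n), i.e. compositions of n."""
    if n == 0:
        yield ()
        return
    for b in range(1, n + 1):
        for rest in compositions(n - b):
            yield (b,) + rest

def signed_weight_sum(patterns, n):
    """sum over O = (B, sigma) in O_{Gamma,n} of sgn(O) W(O)."""
    total = 0
    for B in compositions(n):
        starts = [sum(B[:i]) for i in range(len(B))]   # first cell of each brick
        for sigma in permutations(range(1, n + 1)):
            # no Gamma-match may lie entirely inside a single brick
            if any(has_match(sigma[s:s + b], patterns)
                   for s, b in zip(starts, B)):
                continue
            internal_descents = sum(
                1 for s, b in zip(starts, B)
                for c in range(s, s + b - 1) if sigma[c] > sigma[c + 1])
            # each internal descent contributes y, each brick contributes -y
            total += (-1)**len(B) * y**(internal_descents + len(B))
    return expand(total)

if __name__ == '__main__':
    Gamma = [(1, 4, 2, 5, 3), (1, 5, 2, 4, 3)]
    for n in range(1, 6):
        assert simplify(signed_weight_sum(Gamma, n) - U_polys(Gamma, n)[n]) == 0
    print('signed weighted sums match U_{Gamma,n}(y) for n <= 5')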
§ THE PROOF OF THEOREM <REF>

In this section, we shall prove Theorem <ref>, which is the simplest case of our three examples. For convenience, we first restate the statement of Theorem <ref>.

Let Γ = {14253, 15243}. Then NM_Γ(t,x,y) = (1/U_Γ(t,y))^x where U_Γ(t,y) = 1 + ∑_{n≥1} U_{Γ,n}(y) t^n/n!, with U_{Γ,1}(y) = -y, and for n ≥ 2,

U_{Γ,n}(y) = (1-y)U_{Γ,n-1}(y) - y^2(n-3)(U_{Γ,n-4}(y) + (1-y)(n-5)U_{Γ,n-5}(y)) + y^3(n-3)(n-5)(n-6)U_{Γ,n-6}(y).

Proof. Let Γ = {14253, 15243}. We need to show that the polynomials U_{Γ,n}(y) = ∑_{O ∈ 𝒪_{Γ,n}, J_Γ(O) = O} sgn(O) W(O) satisfy the following properties:
* U_{Γ,1}(y) = -y, and
* for n ≥ 2, U_{Γ,n}(y) = (1-y)U_{Γ,n-1}(y) - y^2(n-3)(U_{Γ,n-4}(y) + (1-y)(n-5)U_{Γ,n-5}(y)) + y^3(n-3)(n-5)(n-6)U_{Γ,n-6}(y).

It is easy to see that when n = 1, the only fixed point comes from the brick tabloid that has a single brick of size 1 which contains 1, and the label on cell 1 is -y. Thus U_{Γ,1}(y) = -y.

For n ≥ 2, let O = (B,σ) be a fixed point of J_Γ where B = (b_1, …, b_k) and σ = σ_1 ⋯ σ_n. First we show that 1 must be in the first cell of B. That is, if 1 = σ_c where c > 1, then σ_{c-1} > σ_c. We claim that whenever we have a descent σ_i > σ_{i+1} in σ, then σ_i and σ_{i+1} must be part of a Γ-match in σ. That is, it is either the case that (i) there are bricks b_s and b_{s+1} such that σ_i is the last cell of b_s and σ_{i+1} is the first cell of b_{s+1} or (ii) there is a brick b_s that contains both σ_i and σ_{i+1}. In case (i), condition (b) of Lemma <ref> ensures that σ_i and σ_{i+1} must be part of a Γ-match. In case (ii), we know that cell i is labeled with y. It follows from condition (a) of Lemma <ref> that it cannot be that either s = 1 so that b_s = b_1 or that s > 1 and last(b_{s-1}) < first(b_s), because those conditions force that σ is increasing in b_s. Thus we must have that s > 1 and last(b_{s-1}) > first(b_s). Since (B,σ) is a fixed point of J_Γ, it cannot be that there is a Γ-match in σ which includes last(b_{s-1}) and first(b_s) that ends weakly to the left of σ_i because then cell i would satisfy Case I of our definition of J_Γ and, hence, (B,σ) would not be a fixed point of J_Γ. Thus the Γ-match which includes last(b_{s-1}) and first(b_s) must involve σ_i and σ_{i+1}. However, there can be no Γ-match that involves σ_{c-1} and σ_c since σ_c = 1 can only play the role of 1 in a Γ-match and each element of Γ starts with 1. Thus we must have σ_1 = 1.

Next we claim that 2 must be in either cell 2 or cell 3 in O. For a contradiction, assume that 2 is in cell c for c > 3. Then once again σ_{c-1} > σ_c so that there must be a Γ-match in σ that involves the two cells c-1 and c in (B,σ). However, in this case, the number which is in cell c-2 must be greater than σ_c so that the only possible Γ-match that involves 2 must start from cell c where 2 plays the role of 1 in the match. Thus there is no Γ-match in σ that involves σ_{c-1} and σ_c. We now have two cases.

Case 1. 2 is in cell 2 of O. In this case there are two possibilities, namely, either (i) 1 and 2 are both in the first brick b_1 of (B,σ) or (ii) brick b_1 is a single cell filled with 1, and 2 is in the first cell of the second brick b_2 of O. In either case, we know that 1 is not part of a Γ-match in σ. So if we remove cell 1 from O and subtract 1 from the elements in the remaining cells, we will obtain a fixed point O' of J_Γ in 𝒪_{Γ,n-1}.
Moreover, we can create a fixed point O = (B,σ) ∈ 𝒪_{Γ,n} of J_Γ satisfying the three conditions of Lemma <ref> where σ_2 = 2 by starting with a fixed point (B',σ') ∈ 𝒪_{Γ,n-1} of J_Γ, where B' = (b_1', …, b_r') and σ' = σ_1' ⋯ σ_{n-1}', and then letting σ = 1 (σ_1'+1) ⋯ (σ_{n-1}'+1), and setting B = (1, b_1', …, b_r') or setting B = (1+b_1', …, b_r'). It follows that fixed points in Case 1 will contribute (1-y)U_{Γ,n-1}(y) to U_{Γ,n}(y).

Case 2. 2 is in cell 3 of O = (B,σ). Since there is no decrease within the first brick b_1 of O = (B,σ), it must be the case that 2 is in the first cell of brick b_2 and there must be either a 14253-match or a 15243-match that involves the cells of the first two bricks. Therefore, we know that brick b_2 has at least 3 cells. In addition, we claim that 3 is in cell 5 of O since otherwise, 3 must be in some cell c for c > 6 and there must be a Γ-match between the two cells c-1 and c in O. By the previous argument, we can see that if 3 is too far away from 1 and 2, then it must play the role of 1 in any match that involves cell c. Thus, the only possible Γ-match that contains cell c must also start at c and can never involve both cells c-1 and c. Also, 3 cannot be in cell 2 nor cell 4 in O since both σ_2 and σ_4 are greater than 3, due to the Γ-match starting from cell 1. We now have two subcases depending on whether or not there is a Γ-match in O starting at cell 3.

Subcase 2.a. There is no Γ-match in O starting at cell 3. In this case, we first choose a number x to fill in cell 2 of O. There are n-3 choices for x. For each choice of σ_2 = x, we let d be the smallest of the remaining numbers, that is, d = min({1,2,…,n} - {1,2,3,σ_2}). We claim that d must be either in cell 4 or cell 6 in (B,σ). First, d cannot be in cell 7 since otherwise there would be a Γ-match in σ starting at cell 3. Next, d cannot be in a cell c where c > 7 since otherwise σ_{c-1} > σ_c = d which means that there must be a Γ-match in σ which includes both σ_{c-1} and σ_c. However, in this case, we would also have σ_{c-2} > σ_c which implies the only role that σ_c can play in a Γ-match is 1. This leaves us with three possibilities which are pictured in Figure <ref>. That is, either (i) d is in cell 4, (ii) d is in cell 6 and is in brick b_2, or (iii) d is in cell 6, but is the first element of brick b_3.

In case (i), we can remove the first four cells from B, reduce the remaining elements of σ to obtain a permutation α ∈ S_{n-4}, and let B' = (b_2-2, b_3, …, b_k) to obtain a fixed point (B',α) of J_Γ of size n-4. Such fixed points will contribute -y^2 U_{Γ,n-4}(y) to U_{Γ,n}(y).

In case (ii), we have (n-5) ways to choose the element z in cell 4. Then we can remove the first five cells from B, reduce the remaining elements of σ to obtain a permutation α ∈ S_{n-5}, and let B' = (b_2-3, b_3, …, b_k) to obtain a fixed point (B',α) of J_Γ of size n-5. Such fixed points will contribute -y^2 U_{Γ,n-5}(y) to U_{Γ,n}(y).

In case (iii), we have (n-5) ways to choose the element z in cell 4. Then we can remove the first five cells from B, reduce the remaining elements of σ to obtain a permutation α ∈ S_{n-5}, and let B' = (b_3, …, b_k) to obtain a fixed point (B',α) of J_Γ of size n-5. Such fixed points will contribute y^3 U_{Γ,n-5}(y) to U_{Γ,n}(y).

Therefore, the total contribution of the fixed points from Subcase 2.a is

-y^2(n-3)(U_{Γ,n-4}(y) + (1-y)(n-5)U_{Γ,n-5}(y)).

Subcase 2.b. There is a Γ-match in O starting at cell 3. In this case, we first choose a number x to fill in cell 2 of O. There are n-3 choices for x.
For each choice of σ_2, let d = min({1, …, n} - {1,2,3,σ_2}). Then we claim that d must be in cell 7. That is, we can argue as in Subcase 2.a that it cannot be that d is in cell c for c > 7. But since there is a Γ-match starting at cell 3 we know σ_4 > σ_7 and σ_6 > σ_7 so that d cannot be in cells 4 or 6. We then have (n-5)(n-6) ways to choose σ_4 = z and σ_6 = a. Next, by condition (b) of Lemma <ref>, we know that each brick b in B can contain at most one descent. Since we know that b_2 must have size at least 3 because there is a Γ-match in σ starting at cell 1 which is contained in b_1 and b_2, this means that either b_2 = 3 or b_2 = 4. We claim that b_2 is of size 4. That is, if b_2 = 3, then either (I) a > d are in b_3 or (II) brick b_3 contains a single cell containing a and d is the first cell of b_4. Case (I) cannot happen because then last(b_2) = 3 < first(b_3) = a which implies that the elements in b_3 must be increasing by condition (a) of Lemma <ref>. Case (II) cannot happen because then last(b_3) = a > first(b_4) = d which implies there must be a Γ-match contained in the cells of b_3 and b_4 which involves both σ_6 = a and σ_7 = d, which is impossible since a > d. Thus we are in the situation pictured in Figure <ref>. Then we can remove the first six cells from B, reduce the remaining elements of σ to obtain a permutation α ∈ S_{n-6}, and let B' = (b_3, …, b_k) to obtain a fixed point (B',α) of J_Γ of size n-6. Such fixed points will contribute (n-3)(n-5)(n-6) y^3 U_{Γ,n-6}(y) to U_{Γ,n}(y).

In total, we obtain the recursion for U_{Γ,n}(y) as follows:

U_{Γ,n}(y) = (1-y)U_{Γ,n-1}(y) - y^2(n-3)(U_{Γ,n-4}(y) + (1-y)(n-5)U_{Γ,n-5}(y)) + y^3(n-3)(n-5)(n-6)U_{Γ,n-6}(y).

This proves Theorem <ref>. Using Theorem <ref>, we computed the initial values of the U_{Γ,n}(y)s which are given in Table 1. Using these initial values of the U_{Γ,n}(y)s, one can then compute the initial values of NM_{Γ,n}(x,y) which are given in Table 2.

§ THE GENERATING FUNCTION U_142536(T,Y).

In this section, we shall study the generating function U_τ(t,y) where τ = 142536. We let J_τ denote the involution J_Γ from Section 3 where Γ = {τ}. We claim that the polynomials U_{τ,n}(y) = ∑_{O ∈ 𝒪_{τ,n}, J_τ(O) = O} sgn(O) W(O) satisfy the following properties:
* U_{τ,1}(y) = -y, and
* for n ≥ 2,
U_{τ,n}(y) = (1-y)U_{τ,n-1}(y) + ∑_{k=0}^{⌊(n-8)/6⌋} det(M_{k+1}) y^{3k+3} U_{τ,n-6k-7}(y) - ∑_{k=0}^{⌊(n-6)/6⌋} det(P_{k+1}) y^{3k+2} [U_{τ,n-6k-4}(y) + yU_{τ,n-6k-5}(y)].

It is easy to see that when n = 1, the only fixed point comes from the brick tabloid that has a single brick of size 1 which contains 1, and the label on cell 1 is -y. Thus U_{τ,1}(y) = -y.

For n ≥ 2, let O = (B,σ) be a fixed point of J_τ where B = (b_1, …, b_t) and σ = σ_1 ⋯ σ_n. First we show that 1 must be in the first cell of B. That is, if 1 = σ_c where c > 1, then σ_{c-1} > σ_c. We claim that whenever we have a descent σ_i > σ_{i+1} in σ, then σ_i and σ_{i+1} must be part of a τ-match in σ. That is, it is either the case that (i) there are bricks b_s and b_{s+1} such that σ_i is the last cell of b_s and σ_{i+1} is the first cell of b_{s+1} or (ii) there is a brick b_s that contains both σ_i and σ_{i+1}. In case (i), condition (b) of Lemma <ref> ensures that σ_i and σ_{i+1} must be part of a τ-match. In case (ii), we know that cell i is labeled with y. It follows from condition (a) of Lemma <ref> that it cannot be that either s = 1 so that b_s = b_1 or that s > 1 and last(b_{s-1}) < first(b_s), because those conditions force that σ is increasing in b_s.
Thus we must have that s > 1 and last(b_{s-1}) > first(b_s). Since (B,σ) is a fixed point of J_τ, it cannot be that there is a τ-match in σ which includes last(b_{s-1}) and first(b_s) that ends weakly to the left of σ_i because then cell i would satisfy Case I of our definition of J_τ and, hence, (B,σ) would not be a fixed point of J_τ. Thus the τ-match which includes last(b_{s-1}) and first(b_s) must involve σ_i and σ_{i+1}. However, there can be no τ-match that involves σ_{c-1} and σ_c since σ_c = 1 can only play the role of 1 in a τ-match and τ starts with 1. Thus we must have σ_1 = 1.

Next we claim that 2 must be in either cell 2 or cell 3 in O. For a contradiction, assume that 2 is in cell c for c > 3. Then once again σ_{c-1} > σ_c so that there must be a τ-match in σ that involves the two cells c-1 and c in (B,σ). However, since 2 is too far from 1 in σ, the only possible 142536-match that involves 2 must start from cell c where 2 plays the role of 1 in the match, so no τ-match in σ can involve both σ_{c-1} and σ_c. We then have two cases.

Case 1. 2 is in cell 2 of O. In this case, there are two possibilities, namely, either (i) 1 and 2 are both in the first brick b_1 of (B,σ) or (ii) brick b_1 is a single cell filled with 1 and 2 is in the first cell of the second brick b_2 of (B,σ). In either case, we know that 1 is not part of a τ-match in (B,σ). So if we remove cell 1 from (B,σ) and subtract 1 from the elements in the remaining cells, we will obtain a fixed point (B',σ') of J_τ in 𝒪_{τ,n-1}. Moreover, we can create a fixed point O = (B,σ) ∈ 𝒪_{τ,n} satisfying the three conditions of Lemma <ref> where σ_2 = 2 by starting with a fixed point (B',σ') ∈ 𝒪_{τ,n-1} of J_τ, where B' = (b_1', …, b_r') and σ' = σ_1' ⋯ σ_{n-1}', and then letting σ = 1 (σ_1'+1) ⋯ (σ_{n-1}'+1), and setting B = (1, b_1', …, b_r') or setting B = (1+b_1', …, b_r'). It follows that fixed points in Case 1 will contribute (1-y)U_{τ,n-1}(y) to U_{τ,n}(y).

Case 2. 2 is in cell 3 of O = (B,σ). Since there is no decrease within the first brick b_1 of O = (B,σ), it must be the case that 2 is in the first cell of brick b_2 and there must be a 142536-match that involves the cells of the first two bricks. Therefore, we know that brick b_2 has at least 4 cells. To analyze this case, it will be useful to picture O = (B,σ) as a 2-line array A(O) where the elements in the i-th column are σ_{2i-1} and σ_{2i}, reading from bottom to top. In A(O), imagine that we draw a directed arrow from the cell containing i to the cell containing i+1. Then it is easy to see that a τ-match corresponds to a block of points as pictured in Figure <ref>.

Now imagine that A(O) starts with a series of τ-matches starting at positions 1,3,5, …. We have pictured this situation at the top of Figure <ref>. Now consider the brick structure of O = (B,σ). Since the elements of b_1 must be increasing and σ_2 > σ_3, it must be the case that b_1 = 2 and b_2 ≥ 4. We claim that b_2 = 4 because if b_2 > 4, then the descent σ_6 > σ_7 would lie inside b_2. Thus cell 6 would be labeled with a y. The τ-match starting at cell 1 ends at cell 6 so that cell 6 would satisfy Case I of our definition of J_τ which contradicts the fact that O = (B,σ) is a fixed point of J_τ. Now the fact that σ_6 > σ_7 implies that b_3 ≥ 2 since there must be a τ-match that involves σ_6 and σ_7. Now if there is a τ-match starting at cell 7, then we can see that σ_8 > σ_9. It cannot be that σ_8 and σ_9 are both in b_3 because it would follow that cell 8 would be labeled with a y and the τ-match starting at cell 3 would end at cell 8. Thus cell 8 would be in Case I of our definition of J_τ which contradicts the fact that O = (B,σ) is a fixed point of J_τ. Thus it must be the case that b_3 = 2.
But the τ-match starting at cell 7 forces σ_8 > σ_9 so that there is a decrease between last(b_3) and first(b_4) which implies that there is a τ-match contained in the cells of b_3 and b_4, which then means that b_4 ≥ 4. Now if there is a τ-match starting at cell 9, then it must be the case that σ_12 > σ_13. Hence, it cannot be that b_4 > 4 since otherwise cell 12 would be labeled with a y. Since the τ-match starting at cell 7 ends at cell 12, cell 12 would then be in Case I of our definition of J_τ which contradicts the fact that O = (B,σ) is a fixed point of J_τ. Thus it must be the case that b_4 = 4. We can continue to reason in this way to conclude that if there are τ-matches starting at cells 1,3,7,9, …, 6k+1, 6k+3, then b_{2i-1} = 2 for i = 1, …, k+1 and b_{2i} = 4 for i = 1, …, k+1. Similarly, if there are τ-matches starting at cells 1,3,7,9, …, 6k+1 but no τ-match starting at cell 6k+3, then b_{2i-1} = 2 for i = 1, …, k+1, b_{2i} = 4 for i = 1, …, k, and b_{2k+2} ≥ 4.

Note that our arguments above did not use the fact that there were τ-matches starting at cells 5, 11, …. Indeed, these matches are not necessary to force the brick structure described above. For example, suppose that there were no τ-match starting at cell 5 but there were τ-matches starting at cell 7. We have pictured this situation on the second line of Figure <ref> where we have marked the position corresponding to cell 5 to indicate that there is no τ-match starting at cell 5. Then one can see from the diagram pictured in the second line of Figure <ref> that it must be the case that σ_6 < σ_9. It follows that if one looks at the requirements on σ to start with such a series of τ-matches, then σ must be a linear extension of the poset whose Hasse diagram is pictured at the bottom of Figure <ref>.

There are now two cases depending on where the sequence of τ-matches starting at positions 1,3,7,9, … ends.

Case 2.1. There are τ-matches in σ starting at positions 1,3,7,9, …, 6k+3, but there is no τ-match starting at position 6k+7. This situation is pictured in Figure <ref> in the case where k = 2. In this case, we claim that {σ_1, …, σ_{6k+8}} = {1, 2, …, 6k+8}. If not, then let i be the least element in {1, 2, …, 6k+8} - {σ_1, …, σ_{6k+8}}. The question then becomes for which j is σ_j = i. It is easy to see from the diagram at the top of Figure <ref> that σ_{6k+8} > σ_r for r = 1, …, 6k+7. This implies that σ_{6k+8} ≥ 6k+8. But since i ∈ {1, 2, …, 6k+8} - {σ_1, …, σ_{6k+8}}, it must be the case that σ_{6k+8} > 6k+8 ≥ i.

We claim that j cannot equal 6k+9. That is, if σ_{6k+9} = i, then σ_{6k+8} > σ_{6k+9}. It cannot be that σ_{6k+8} and σ_{6k+9} are in brick b_{2k+3} because then cell 6k+8 would be labeled with y and there is a τ-match contained in bricks b_{2k+2} and b_{2k+3} that ends before cell 6k+8 which means that cell 6k+8 satisfies Case I of our definition of J_τ which violates our assumption that (B,σ) is a fixed point of J_τ. If σ_{6k+9} starts brick b_{2k+4}, then brick b_{2k+3} must be of size 2 and there must be a τ-match contained in bricks b_{2k+3} and b_{2k+4} that involves σ_{6k+8} and σ_{6k+9}. But since σ_{6k+8} > σ_{6k+9}, that τ-match can only start at cell 6k+7 which violates our assumption in this case.

Next we claim that j cannot be ≥ 6k+10. That is, if j ≥ 6k+10, then both σ_{j-2} and σ_{j-1} are greater than σ_j = i. Thus σ_{j-1} and σ_j must be part of a τ-match in σ. But then the elements in the two cells before cell j are bigger than that in cell j which means that the only role that σ_j can play in a τ-match is 1. Thus there can be no τ-match that includes σ_{j-1} and σ_j.

Let α be the permutation that is obtained from σ by removing the elements 1, …, 6k+7 and subtracting 6k+7 from the remaining elements.
Let B' be the brick structure (b_{2k+3}-1, b_{2k+4}, …, b_t). Then it is easy to see that (B',α) is a fixed point of J_τ of size n-6k-7. Vice versa, suppose we start with a fixed point (B',α) of J_τ of size n-6k-7 where B' = (d_1, d_2, …, d_s). Then we can obtain a fixed point (B,σ) of size n which has τ-matches in σ starting at positions 1,3,7,9, …, 6k+3, but no τ-match starting at position 6k+7, by letting σ_1 ⋯ σ_{6k+7} be any permutation of 1, …, 6k+7 which is a linear extension of the poset whose Hasse diagram is pictured at the bottom of Figure <ref> and letting σ_{6k+8} ⋯ σ_n be the sequence that results by adding 6k+7 to each element of α. Then let B = (b_1, …, b_{2k+2}, d_1+1, d_2, …, d_s) where b_{2i+1} = 2 for i = 0, …, k and b_{2i} = 4 for i = 1, …, k+1. It follows that the contribution to U_{τ,n}(y) from the fixed points in Case 2.1 equals

∑_{k=0}^{⌊(n-8)/6⌋} G_{6k+7} y^{3k+3} U_{τ,n-6k-7}(y),

where G_{6k+7} is the number of linear extensions of the poset pictured at the bottom of Figure <ref> of size 6k+7.

Next we want to compute the number of linear extensions of G_{6k+7}. It is easy to see that the left-most two elements at the bottom of the Hasse diagram of G_{6k+7} must be the first two elements of the linear extension and the right-most element at the top of the Hasse diagram must be the largest element in any linear extension of G_{6k+7}. Thus the number of linear extensions of G̅_{6k+4}, which is the Hasse diagram of G_{6k+7} with those three elements removed, equals the number of linear extensions of G_{6k+7}. We have pictured the Hasse diagrams of G̅_4, G̅_10 and G̅_16 in Figure <ref>. Now let A_0 = 1 and let A_{k+1} be the number of linear extensions of G̅_{6k+4} for k ≥ 0. It is easy to see that A_1 = 2. There is a natural recursion satisfied by the A_k, namely, for k ≥ 1,

A_{k+1} = ∑_{j=0}^{k} C_{2+3j} A_{k-j},

where C_n = (1/(n+1)) \binom{2n}{n} is the n-th Catalan number. First, consider the number of linear extensions of the Hasse diagram of the poset D_n with n columns of the type pictured in Figure <ref>. It is easy to see that this is the number of standard tableaux of shape (n^2), which is well known to equal C_n. Next, if we look at the Hasse diagram of G̅_{6k+4}, it is easy to see that there is no relation that is forced between the elements in columns 3i for i = 1, …, k. Now suppose that we partition the set of linear extensions of G̅_{6k+4} by saying that the bottom element in column 3i is less than the top element in column 3i for i = 1, …, j and the top element of column 3j+3 is less than the bottom element of column 3j+3. Then we will have a situation as pictured in Figure <ref> in the case where k = 6 and j = 2. One can see that when one straightens out the resulting Hasse diagram, it starts with the Hasse diagram of D_{2+3j} and all those elements must be less than the elements in the top part of the Hasse diagram which is a copy of the Hasse diagram of G̅_{6(k-j-1)+4}.

Now consider the determinant of the n × n matrix M_n whose elements on the main diagonal are C_2, the elements on the j-th diagonal above the main diagonal are C_{2+3j} for j ≥ 1, the elements on the subdiagonal are -1, and the elements below the subdiagonal are 0. For example, we have pictured M_7 in Figure <ref>. It is then easy to see that det(M_1) = C_2 = 2. For n > 1, if we expand the determinant by minors about the first row, then we see that we have the recursion

det(M_k) = ∑_{j=0}^{k-1} C_{2+3j} det(M_{k-j-1}),

where we set det(M_0) = 1. For example, suppose that we expand the determinant of M_7 pictured in Figure <ref> about the element C_8 in the first row.
Then in the next two rows, we are forced to expand about the -1's. It is easy to see that the total sign of these expansions is always +1 so that in this case, we would get a contribution of C_8 det(M_4) to det(M_7). Thus it follows that A_n = det(M_n) for all n. Hence the contribution to U_{τ,n}(y) from the fixed points in Case 2.1 equals

∑_{k=0}^{⌊(n-8)/6⌋} det(M_{k+1}) y^{3k+3} U_{τ,n-6k-7}(y).

Case 2.2. There are τ-matches in σ starting at positions 1,3,7,9, …, 6k+1, but there is no τ-match starting at position 6k+3. This situation is pictured in Figure <ref> in the case where k = 3. In this case, we claim that {σ_1, …, σ_{6k+5}} = {1, 2, …, 6k+5}. If not, then let i be the least element in {1, 2, …, 6k+5} - {σ_1, …, σ_{6k+5}}. The question then becomes for which j is σ_j = i. It is easy to see from the diagram at the top of Figure <ref> that σ_{6k+6} > σ_r for r = 1, …, 6k+5 and that σ_{6k+5} > σ_r for r = 1, …, 6k+4. This implies that σ_{6k+5} ≥ 6k+5, but since i ∈ {1, 2, …, 6k+5} - {σ_1, …, σ_{6k+5}}, it follows that 6k+5 < σ_{6k+5} < σ_{6k+6}.

It cannot be that i = σ_{6k+7} because then σ_{6k+6} > σ_{6k+7}. Note that σ_{6k+3}, σ_{6k+4}, σ_{6k+5}, σ_{6k+6} are elements of brick b_{2k+2}. If σ_{6k+7} were also an element of brick b_{2k+2}, then cell 6k+6 would be marked with a y and there is a τ-match contained in bricks b_{2k+1} and b_{2k+2} that ends at cell 6k+6 so that we could apply Case I of the involution J_τ at cell 6k+6, which violates our assumption that (B,σ) is a fixed point of J_τ. If σ_{6k+7} starts brick b_{2k+3}, then there must be a τ-match that involves σ_{6k+6} and σ_{6k+7} and is contained in bricks b_{2k+2} and b_{2k+3}. Since we are assuming that there is no τ-match starting at cell 6k+3, it must be the case that there is a τ-match starting at cell 6k+5. But then we have the situation pictured in Figure <ref>. In Figure <ref>, the dark arrows are forced by the τ-matches starting at cells 6k+1 and 6k+5. However, the last two elements in brick b_{2k+2} are σ_{6k+5} and σ_{6k+6}, which are both greater than i. This means that the dotted arrow is forced which implies that there is a τ-match starting at cell 6k+3.

Finally, it cannot be the case that j > 6k+7, because then it must be the case that σ_{j-1} > σ_j so that σ_{j-1} and σ_j must be part of a τ-match in σ. But in this situation, the elements 1, …, i-1 lie in cells that are more than 2 cells away from the cell containing i. This means that in any τ-match in σ containing the element i, i can only play the role of 1 in that τ-match. Thus, there could not be a τ-match in σ containing σ_{j-1} and σ_j.

Next, consider the possible j such that σ_j = 6k+6. It cannot be that j > 6k+7, because then it must be the case that σ_{j-1} > σ_j so that σ_{j-1} and σ_j must be part of a τ-match in σ. But in this situation, the elements 1, …, 6k+5 lie in cells that are more than 2 cells away from the cell containing 6k+6. This means that in any τ-match in σ containing the element 6k+6, 6k+6 can only play the role of 1 in that τ-match. Thus there could not be a τ-match in σ containing σ_{j-1} and σ_j. It follows that σ_{6k+6} = 6k+6 or σ_{6k+7} = 6k+6.

Let α be the permutation that is obtained from σ by removing the elements 1, …, 6k+4, setting α_1 = 1, and letting α_2 ⋯ α_{n-(6k+4)} be the result of subtracting 6k+5 from σ_{6k+6} ⋯ σ_n. Let B' be the brick structure (b_{2k+2}-2, b_{2k+3}, …, b_t). Then it is easy to see that (B',α) is a fixed point of J_τ of size n-6k-4 that starts with a brick of size at least 2. Vice versa, suppose we start with a fixed point (B',α) of J_τ of size n-6k-4 that starts with a brick of size at least 2 where B' = (d_1, d_2, …, d_s).
Then we can obtain a fixed point (B,σ) of size n which has τ-matches in σ starting at positions 1,3,7,9, …, 6k+1, but no τ-match starting at position 6k+3, by letting σ_1 ⋯ σ_{6k+5} be any permutation of 1, …, 6k+5 which is a linear extension of the poset whose Hasse diagram is pictured at the bottom of Figure <ref> and letting σ_{6k+6} ⋯ σ_n be the sequence that results by adding 6k+5 to each element of α_2 ⋯ α_{n-(6k+4)}. We let B = (b_1, …, b_{2k+1}, d_1+2, d_2, …, d_s) where b_{2i+1} = 2 for i = 0, …, k and b_{2i} = 4 for i = 1, …, k.

Note that for any n, our arguments above show that the only fixed points (D,γ) of J_τ of size n, where D = (d_1, …, d_s) and γ = γ_1 ⋯ γ_n, which do not start with a brick of size at least 2 are the ones that start with a brick d_1 = 1 where γ_1 = 1 and γ_2 = 2. Clearly such fixed points are counted by -yU_{τ,n-1}(y) because d_1 would have weight -y and ((d_2, …, d_s), (γ_2-1)(γ_3-1) ⋯ (γ_n-1)) could be any fixed point of J_τ of size n-1. It follows that the sum of the weights of all fixed points of J_τ of size n which start with a brick of size at least 2 is equal to U_{τ,n}(y) - (-yU_{τ,n-1}(y)) = U_{τ,n}(y) + yU_{τ,n-1}(y). It follows that the contribution to U_{τ,n}(y) from the fixed points in Case 2.2 equals

-∑_{k=0}^{⌊(n-6)/6⌋} G_{6k+4} y^{3k+2} (U_{τ,n-6k-4}(y) + yU_{τ,n-6k-5}(y)),

where G_{6k+4} is the number of linear extensions of the poset pictured at the bottom of Figure <ref> of size 6k+4.

Next we want to compute the number of linear extensions of G_{6k+4}. It is easy to see that the left-most two elements at the bottom of the Hasse diagram of G_{6k+4} must be the first two elements of the linear extension. Thus the number of linear extensions of G̅_{6k+2}, which is the Hasse diagram of G_{6k+4} with those two elements removed, equals the number of linear extensions of G_{6k+4}. We have pictured the Hasse diagrams of G̅_2, G̅_8 and G̅_14 in Figure <ref>. Now let B_0 = 1 and let B_{k+1} be the number of linear extensions of G̅_{6k+2} for k ≥ 0. It is easy to see that B_1 = 1. Again there is a natural recursion satisfied by the B_k's, namely, for k ≥ 1,

B_{k+1} = C_{3k+1} + ∑_{j=0}^{k-1} C_{2+3j} B_{k-j},

where C_n = (1/(n+1)) \binom{2n}{n} is the n-th Catalan number. As in the case of the posets G̅_{6k+4}, there is no relation that is forced between the elements in columns 3i for i = 1, …, k. Now suppose that we partition the set of linear extensions of G̅_{6k+2} by saying that the bottom element in column 3i is less than the top element in column 3i for i = 1, …, j and the top element of column 3j+3 is less than the bottom element of column 3j+3. First, if j = k, then we will have a copy of D_{3k+1} which gives a contribution of C_{3k+1} to the number of linear extensions of G̅_{6k+2}. If j < k, then we will have a situation as pictured in Figure <ref> in the case where k = 6 and j = 2. One can see that when one straightens out the resulting Hasse diagram, one obtains a diagram that starts with the Hasse diagram of D_{2+3j} and all those elements must be less than the elements in the top part of the Hasse diagram which is a copy of the Hasse diagram of G̅_{6(k-j-1)+2}.

Let P_n be the matrix that is obtained from the matrix M_n by replacing the elements C_m in the last column by C_{m-1}. For example, we have pictured P_7 in Figure <ref>. It is then easy to see that det(P_1) = 1. For n > 1, if we expand the determinant by minors about the first row, then we see that we have the recursion

det(P_k) = C_{3k-2} + ∑_{j=0}^{k-2} C_{2+3j} det(P_{k-j-1}),

where we set det(P_0) = 1. For example, suppose that we expand the determinant of P_7 pictured in Figure <ref> about the element C_19 in the first row.
Then in the remaining rows, we are forced to expand about the -1's. It is easy to see that the total sign of these expansions is always +1 so that in this case, we would get a contribution of C_19 to det(P_7). Expanding the determinant about the other elements in the first row gives the remaining terms of the recursion just like it did in the expansion of the determinant of M_n. Thus it follows that B_n = det(P_n) for all n. Hence the contribution of fixed points of J_τ to U_{τ,n}(y) in Case 2.2 equals

-∑_{k=0}^{⌊(n-6)/6⌋} det(P_{k+1}) y^{3k+2} (U_{τ,n-6k-4}(y) + yU_{τ,n-6k-5}(y)).

Therefore, the recursion for U_{τ,n}(y) for τ = 142536 is as follows:

U_{τ,n}(y) = (1-y)U_{τ,n-1}(y) + ∑_{k=0}^{⌊(n-8)/6⌋} det(M_{k+1}) y^{3k+3} U_{τ,n-6k-7}(y) - ∑_{k=0}^{⌊(n-6)/6⌋} det(P_{k+1}) y^{3k+2} [U_{τ,n-6k-4}(y) + yU_{τ,n-6k-5}(y)].

In Table <ref>, we have computed U_{142536,n}(y) for n ≤ 14.

§ THE PROOF OF THEOREM <REF>

Let τ_a = τ_1 ⋯ τ_{2a} where τ_1 τ_3 ⋯ τ_{2a-1} = 1 2 ⋯ a and τ_2 τ_4 ⋯ τ_{2a} = (2a) (2a-1) ⋯ (a+1). If we picture τ_a in a 2-line array like we did in the last section, then we will get a diagram as pictured in Figure <ref>. The key property that τ_a has is that if σ = σ_1 ⋯ σ_{2m} is a permutation where we have marked some of the τ_a-matches by placing an x at the start of each marked match so that every element of σ is contained in some marked τ_a-match and any two consecutive marked τ_a-matches in σ share at least one element, then it must be the case that σ_1 σ_3 ⋯ σ_{2m-1} = 1 2 ⋯ m and σ_2 σ_4 ⋯ σ_{2m} = (2m) (2m-1) ⋯ (m+1). That is, it must be the case that σ = τ_m. This can easily be seen from the picture of overlapping τ_a-matches like the one pictured in Figure <ref> where a = 4 and m = 12. Note that in such a situation, we will in fact have τ_a-matches starting at positions 1, 3, 5, …, 2(m-a)+1 in σ.

We need to show that the polynomials U_{τ_a,n}(y) = ∑_{O ∈ 𝒪_{τ_a,n}, J_{τ_a}(O) = O} sgn(O) W(O) satisfy the following properties:
* U_{τ_a,1}(y) = -y, and
* for n ≥ 2,
U_{τ_a,n}(y) = (1-y)U_{τ_a,n-1}(y) - ∑_{k=0}^{⌊(n-2a)/(2a)⌋} \binom{n-(k+1)a-1}{(k+1)a-1} y^{(k+1)a-1} U_{τ_a,n-2(k+1)a+1}(y) + ∑_{k=0}^{⌊(n-2a-2)/(2a)⌋} \binom{n-(k+1)a-2}{(k+1)a} y^{(k+1)a} U_{τ_a,n-2(k+1)a-1}(y).

Again, it is easy to see that when n = 1, U_{τ_a,1}(y) = -y. For n ≥ 2, let O = (B,σ) be a fixed point of J_{τ_a} where B = (b_1, …, b_t) and σ = σ_1 ⋯ σ_n. By the same argument as in the previous sections, it must be the case that 1 is in the first cell of O and 2 must be in either cell 2 or cell 3 in O. Thus, we now have two cases.

Case 1. 2 is in cell 2 of O. Similar to Case 1 in the proof of Theorem <ref>, there are two possibilities, namely, either (i) 1 and 2 are both in the first brick b_1 of (B,σ) or (ii) brick b_1 is a single cell filled with 1 and 2 is in the first cell of the second brick b_2 of O. In either case, we can remove cell 1 from O and subtract 1 from the elements in the remaining cells to obtain a fixed point O' of J_{τ_a} in 𝒪_{τ_a,n-1}. So the fixed points in this case will contribute (1-y)U_{τ_a,n-1}(y) to U_{τ_a,n}(y).

Case 2. 2 is in cell 3 of O = (B,σ). In this case, σ_2 > σ_3 = 2. Since σ must be increasing in b_1, it follows that 2 is in the first cell of brick b_2 and there must be a τ_a-match in the cells of b_1 and b_2 which can only start at cell 1. Thus it must be the case that brick b_2 has at least 2a-2 cells. Again, we shall think of O = (B,σ) as a two-line array A(O) where column i consists of σ_{2i-1} and σ_{2i}, reading from bottom to top. Now imagine that A(O) starts with a series of τ_a-matches starting at positions 1,3,5, ….
Our observation above shows that if this sequence of consecutive τ_a-matches covers cells 1, …, 2k for some k, then in the two-line array A(O), all the entries in the bottom row of the first k columns are less than all the entries in the top row of the first k columns, the cells in the bottom row of the first k columns are increasing, reading from left to right, and the cells in the top row are increasing, reading from right to left.

Next we consider the possible brick structures of O = (B,σ). We claim that we are in one of two subcases:

Subcase (2.A) where there is a k ≥ 0 such that there are τ_a-matches in σ starting at cells 1, 3, 2a+1, 2a+3, …, 2(k-1)a+1, 2(k-1)a+3, 2ka+1, there is no τ_a-match in σ starting at cell 2ka+3, 2 = b_1 = b_3 = ⋯ = b_{2k-1}, 2a-2 = b_2 = b_4 = ⋯ = b_{2k}, and b_{2k+1} = 2 and b_{2k+2} ≥ 2a-2, or

Subcase (2.B) where there is a k ≥ 0 such that there are τ_a-matches in σ starting at cells 1, 3, 2a+1, 2a+3, …, 2(k-1)a+1, 2(k-1)a+3, 2ka+1, 2ka+3, there is no τ_a-match in σ starting at cell 2(k+1)a+1, 2 = b_1 = b_3 = ⋯ = b_{2k-1} = b_{2k+1}, 2a-2 = b_2 = b_4 = ⋯ = b_{2k+2}, and b_{2k+3} ≥ 2.

Subcase (2.A) is pictured at the top of Figure <ref> and Subcase (2.B) is pictured at the bottom of Figure <ref> in the case where a = 4 and k = 2. Note that by our remarks above, we also know the relative order of the elements involved in these τ_a-matches in σ, which is indicated by the poset whose Hasse diagram is pictured in Figure <ref>.

We can prove this by induction. That is, suppose k = 0 and we are in Subcase (2.A). Then there is a τ_a-match in σ starting at cell 1 but no τ_a-match in σ starting at cell 3. Our argument above shows that b_1 = 2 and b_2 ≥ 2a-2. Next suppose that k = 0 and we are in Subcase (2.B) so that there are τ_a-matches in σ starting in cells 1 and 3 but there is no τ_a-match in σ starting at cell 2a+1. Then we claim that b_2 = 2a-2. That is, in such a situation we would know that σ_{2a} > σ_{2a+1}. Thus, if b_2 > 2a-2, then cell 2a would be labeled with a y. The τ_a-match starting at cell 1 ends at cell 2a so that cell 2a would satisfy Case I of our definition of J_{τ_a} which contradicts the fact that O = (B,σ) is a fixed point of J_{τ_a}. Thus, brick b_3 must start at cell 2a+1. Now the fact that σ_{2a} > σ_{2a+1} implies that b_3 ≥ 2 since there must be a τ_a-match that involves σ_{2a} and σ_{2a+1} and lies in the cells of b_2 and b_3.

Now assume by induction that for k ≥ 1, there are τ_a-matches in σ starting at cells 1, 3, 2a+1, 2a+3, …, 2(k-1)a+1, 2(k-1)a+3, 2 = b_1 = b_3 = ⋯ = b_{2k-1}, 2a-2 = b_2 = b_4 = ⋯ = b_{2k-2}, and b_{2k} ≥ 2a-2. Suppose we are in Subcase (2.A) so that there is a τ_a-match starting at cell 2ka+1 but there is no τ_a-match starting at cell 2ka+3. Then we know that σ_{2ka} > σ_{2ka+1} due to the τ_a-match in σ starting at cell 2(k-1)a+3. It cannot be the case that b_{2k} > 2a-2 since then cells 2ka and 2ka+1 are contained in brick b_{2k} so that cell 2ka would be marked with a y. However, the τ_a-match starting at cell 2(k-1)a+1 ends at cell 2ka so that cell 2ka would satisfy Case I of our definition of J_{τ_a} which violates our assumption that (B,σ) is a fixed point of J_{τ_a}. This means that b_{2k} = 2a-2 and b_{2k+1} starts at cell 2ka+1. Since σ_{2ka} > σ_{2ka+1} due to the τ_a-match in σ starting at cell 2(k-1)a+3, we know that there must be a τ_a-match contained in the cells of b_{2k} and b_{2k+1} so that b_{2k+1} ≥ 2. But then because of the τ_a-match in σ starting at cell 2ka+1, we know that σ_{2ka+2} > σ_{2ka+3}.
It cannot be that cell 2ka+3 is in brick b_{2k+1} because then cell 2ka+2 would be marked with a y and the τ_a-match in σ starting at cell 2(k-1)a+3, which ends at cell 2ka+2, is contained in the bricks b_{2k} and b_{2k+1}, which means that cell 2ka+2 would satisfy Case I of our definition of J_{τ_a}, violating our assumption that (B,σ) is a fixed point of J_{τ_a}. Thus it must be the case that b_{2k+1} = 2 and brick b_{2k+2} starts at cell 2ka+3. But this means that there must be a τ_a-match in σ contained in the cells of b_{2k+1} and b_{2k+2} so that b_{2k+2} ≥ 2a-2.

Now if there is also a τ_a-match in σ starting at cell 2ka+3, then we claim that b_{2k+2} = 2a-2. That is, we know that σ_{2(k+1)a} > σ_{2(k+1)a+1}. It cannot be that b_{2k+2} > 2a-2 because then cell 2(k+1)a would be labeled with a y and the τ_a-match in σ starting at cell 2ka+1 ends at cell 2(k+1)a and is contained in the bricks b_{2k+1} and b_{2k+2} so that cell 2(k+1)a would satisfy Case I of our definition of J_{τ_a} which would violate our assumption that (B,σ) is a fixed point of J_{τ_a}. Thus b_{2k+2} = 2a-2. But then due to the τ_a-match in σ starting at cell 2ka+3, we know that σ_{2(k+1)a} > σ_{2(k+1)a+1}, which means that there must be a τ_a-match contained in bricks b_{2k+2} and b_{2k+3}. This means that b_{2k+3} ≥ 2. Thus we have two cases to consider.

Subcase (2.A). There is a k ≥ 0 such that there are τ_a-matches in σ starting at cells 1, 3, 2a+1, 2a+3, …, 2(k-1)a+1, 2(k-1)a+3, 2ka+1, there is no τ_a-match in σ starting at cell 2ka+3, 2 = b_1 = b_3 = ⋯ = b_{2k-1}, 2a-2 = b_2 = b_4 = ⋯ = b_{2k}, and b_{2k+1} = 2 and b_{2k+2} ≥ 2a-2.

In this case, we claim that {1, …, (k+1)a+1} = {σ_1, σ_3, …, σ_{2(k+1)a-1}, σ_{2(k+1)a}}. That is, if one considers the diagram at the top of Figure <ref>, then the elements in the bottom row are 1, 2, …, (k+1)a, reading from left to right, and the element at the top of column (k+1)a is equal to (k+1)a+1. If this is not the case, then let i = min({1, …, (k+1)a+1} - {σ_1, σ_3, …, σ_{2(k+1)a-1}, σ_{2(k+1)a}}). This means σ_{2(k+1)a} > i and, hence, one can see by the relative order of the elements in the first (k+1)a columns of A(O) that i cannot lie in the first (k+1)a columns. Then the question is for what j is σ_j = i.

First we claim that it cannot be that σ_{2(k+1)a+1} = i. That is, in such a situation, σ_{2(k+1)a} > σ_{2(k+1)a+1}. Now it cannot be that σ_{2(k+1)a} and σ_{2(k+1)a+1} lie in brick b_{2k+2} because then the τ_a-match in σ that starts in the first cell of b_{2k+1} ends at cell 2(k+1)a, which means that cell 2(k+1)a would be labeled with a y and satisfy Case I of our definition of J_{τ_a}, which would violate our assumption that (B,σ) is a fixed point of J_{τ_a}. Thus it must be the case that brick b_{2k+3} starts at cell 2(k+1)a+1. But then there must be a τ_a-match in σ contained in the cells of bricks b_{2k+2} and b_{2k+3} which would imply that there is a τ_a-match in σ starting at cell 2ka+3 which violates our assumption in this case.

Hence j > 2(k+1)a+1, which implies that both σ_{j-2} and σ_{j-1} are greater than σ_j = i. But then there could be no τ_a-match in σ which contains both σ_{j-1} and σ_j because the only role that i could play in a τ_a-match in σ would be 1 under those circumstances.
It follows that if we remove the elements in A(O) from the first (k+1)a-1 columns plus the bottom element of column (k+1)a, then (B',σ'), where B' = (b_{2k+2}-(2a-3), b_{2k+3}, …, b_t) and σ' = red(σ_{2(k+1)a} ⋯ σ_n), will be a fixed point of J_{τ_a} of size n-2(k+1)a+1. Note that in such a situation, we will have \binom{n-(k+1)a-1}{(k+1)a-1} ways to choose the elements of σ that lie in the top row of the first (k+1)a-1 columns of A(O). Note that the powers of y coming from the bricks b_1, …, b_{2k} are y^{ka} and the powers of y coming from bricks b_{2k+1} and b_{2k+2} are -y^{a-1}. It follows that the elements in Subcase (2.A) contribute

-∑_{k=0}^{⌊(n-2a)/(2a)⌋} \binom{n-(k+1)a-1}{(k+1)a-1} y^{(k+1)a-1} U_{τ_a,n-2(k+1)a+1}(y)

to U_{τ_a,n}(y).

Subcase (2.B). There is a k ≥ 0 such that there are τ_a-matches in σ starting at cells 1, 3, 2a+1, 2a+3, …, 2(k-1)a+1, 2(k-1)a+3, 2ka+1, 2ka+3, there is no τ_a-match in σ starting at cell 2(k+1)a+1, 2 = b_1 = b_3 = ⋯ = b_{2k-1} = b_{2k+1}, 2a-2 = b_2 = b_4 = ⋯ = b_{2k+2}, and b_{2k+3} ≥ 2.

In this case, we claim that {1, …, (k+1)a+2} = {σ_1, σ_3, …, σ_{2(k+1)a+1}, σ_{2(k+1)a+2}}. That is, if one considers the diagram at the bottom of Figure <ref>, then the elements in the bottom row are 1, 2, …, (k+1)a+1, reading from left to right, and the element at the top of column (k+1)a+1 is equal to (k+1)a+2. If this is not the case, then let i = min({1, …, (k+1)a+2} - {σ_1, σ_3, …, σ_{2(k+1)a+1}, σ_{2(k+1)a+2}}). This means σ_{2(k+1)a+2} > i and, hence, one can see by the relative order of the elements in the first (k+1)a+1 columns of A(O) that i cannot lie in the first (k+1)a+1 columns. Then the question is for what j is σ_j = i.

First we claim that it cannot be that σ_{2(k+1)a+3} = i. That is, in such a situation, σ_{2(k+1)a+2} > σ_{2(k+1)a+3}. Now it cannot be that σ_{2(k+1)a+2} and σ_{2(k+1)a+3} lie in brick b_{2k+3} because then the τ_a-match in σ that starts in the first cell of b_{2k+2} ends at cell 2(k+1)a+2, which means that cell 2(k+1)a+2 would be labeled with a y and satisfy Case I of our definition of J_{τ_a}, which would violate our assumption that (B,σ) is a fixed point of J_{τ_a}. Thus it must be the case that b_{2k+3} = 2 and that brick b_{2k+4} starts at cell 2(k+1)a+3. But then there must be a τ_a-match in σ contained in the cells of bricks b_{2k+3} and b_{2k+4} which would imply that there is a τ_a-match in σ starting at cell 2(k+1)a+1 which violates our assumption in this case.

Hence j > 2(k+1)a+3, which implies that both σ_{j-2} and σ_{j-1} are greater than σ_j = i. But then there could be no τ_a-match in σ which contains both σ_{j-1} and σ_j because the only role that i could play in a τ_a-match in σ would be 1 under those circumstances.

It follows that if we remove the elements in A(O) from the first (k+1)a columns plus the bottom element of column (k+1)a+1, then (B',σ'), where B' = (b_{2k+3}-1, b_{2k+4}, …, b_t) and σ' = red(σ_{2(k+1)a+2} ⋯ σ_n), will be a fixed point of J_{τ_a} of size n-2(k+1)a-1. Note that in such a situation, we will have \binom{n-(k+1)a-2}{(k+1)a} ways to choose the elements of σ that lie in the top row of the first (k+1)a columns of A(O). Note that the powers of y coming from the bricks b_1, …, b_{2k+2} are y^{(k+1)a}. It follows that the elements in Subcase (2.B) contribute

∑_{k=0}^{⌊(n-2a-2)/(2a)⌋} \binom{n-(k+1)a-2}{(k+1)a} y^{(k+1)a} U_{τ_a,n-2(k+1)a-1}(y)

to U_{τ_a,n}(y).

Therefore, the recursion for the polynomials U_{τ_a,n}(y) is given by

U_{τ_a,n}(y) = (1-y)U_{τ_a,n-1}(y) - ∑_{k=0}^{⌊(n-2a)/(2a)⌋} \binom{n-(k+1)a-1}{(k+1)a-1} y^{(k+1)a-1} U_{τ_a,n-2(k+1)a+1}(y) + ∑_{k=0}^{⌊(n-2a-2)/(2a)⌋} \binom{n-(k+1)a-2}{(k+1)a} y^{(k+1)a} U_{τ_a,n-2(k+1)a-1}(y).

This concludes the proof of Theorem <ref>.
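As a quick sanity check on the recursion just derived, the following Python sketch (our own illustration, not part of the paper) computes U_{τ_a,n}(y) by brute force from the series identity U_{τ_a}(t,y) = 1/NM_{τ_a}(t,1,y) and compares it with the right-hand side of the recursion for a = 2 (so τ_2 = 1423) and small n. It reuses U_polys and y from the earlier sketches.

from math import comb
from sympy import expand, simplify

def tau_a(a):
    """The pattern tau_a: odd positions 1..a, even positions 2a..(a+1)."""
    tau = [0] * (2 * a)
    tau[0::2] = range(1, a + 1)
    tau[1::2] = range(2 * a, a, -1)
    return tuple(tau)

def rhs(U, n, a):
    """Right-hand side of the recursion for U_{tau_a,n}(y), n >= 2."""
    val = (1 - y) * U[n - 1]
    for k in range((n - 2 * a) // (2 * a) + 1):
        val -= comb(n - (k + 1) * a - 1, (k + 1) * a - 1) \
               * y**((k + 1) * a - 1) * U[n - 2 * (k + 1) * a + 1]
    for k in range((n - 2 * a - 2) // (2 * a) + 1):
        val += comb(n - (k + 1) * a - 2, (k + 1) * a) \
               * y**((k + 1) * a) * U[n - 2 * (k + 1) * a - 1]
    return expand(val)

if __name__ == '__main__':
    a = 2
    U = U_polys([tau_a(a)], 7)        # brute force, as in Section 3
    for n in range(2, 8):
        assert simplify(U[n] - rhs(U, n, a)) == 0
    print('recursion verified for tau_2 = 1423, n <= 7')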
[AAM] R.E.L. Aldred, M.D. Atkinson, and D.J. McCaughan, Avoiding consecutive patterns in permutations, Adv. in Applied Math. 45 (3) (2010), 449-461.
[BR] Q.T. Bach and J.B. Remmel, Generating functions for descents over permutations which avoid sets of consecutive patterns, Australasian Journal of Combinatorics 64 (2016), 194-231.
[BR2] Q.T. Bach and J.B. Remmel, Descent c-Wilf equivalence, to appear in Discrete Mathematics and Theoretical Computer Science.
[B1] A.M. Baxter, Refining enumeration schemes to count according to inversion number, Pure Mathematics and Applications 21 (2) (2010), 136-160.
[B2] A.M. Baxter, Refining enumeration schemes to count according to permutation statistics, Electronic J. Combinatorics 21 (2) (2014).
[DK] V. Dotsenko and A. Khoroshkin, Anick-type resolutions and consecutive pattern avoidance, arXiv:1002.2761v1 (2010).
[DR] A. Duane and J. Remmel, Minimal overlapping patterns in colored permutations, Electronic J. Combinatorics 18 (2) (2011).
[Eg1] Ö. Eğecioğlu and J.B. Remmel, Brick tabloids and the connection matrices between bases of symmetric functions, Discrete Appl. Math. 34 (1991), no. 1-3, 107-120, Combinatorics and theoretical computer science (Washington, DC, 1989).
[EN] S. Elizalde and M. Noy, Consecutive patterns in permutations, Adv. in Appl. Math. 30 (2003), no. 1-2, 110-125, Formal power series and algebraic combinatorics (Scottsdale, AZ, 2001).
[EN2] S. Elizalde and M. Noy, Clusters, generating functions and asymptotics for consecutive patterns in permutations, Adv. in Appl. Math. 49 (2012), 351-374.
[EKP] R. Ehrenborg, S. Kitaev, and P. Perry, A spectral approach to consecutive pattern-avoiding permutations, J. of Combinatorics 2 (2011), 305-353.
[GJ] I.P. Goulden and D.M. Jackson, Combinatorial Enumeration, A Wiley-Interscience Series in Discrete Mathematics, John Wiley & Sons Inc., New York, (1983).
[JR1] M. Jones and J.B. Remmel, Pattern matching in the cycle structures of permutations, Pure Math. and Applications 22 (2011), 173-208.
[JR] M. Jones and J.B. Remmel, A reciprocity approach to computing generating functions for permutations with no pattern matches, Discrete Mathematics and Theoretical Computer Science, DMTCS Proceedings, 23rd International Conference on Formal Power Series and Algebraic Combinatorics (FPSAC 2011), 119 (2011), 551-562.
[JR2] M. Jones and J. Remmel, A reciprocity method for computing generating functions over the set of permutations with no consecutive occurrences of τ, Discrete Mathematics 313 (23) (2013), 2712-2729.
[JR3] M. Jones and J. Remmel, Generating functions for the number of permutations with no consecutive occurrences of 1p23⋯(p-1) or 13⋯(p-1)2p, to appear in Pure Mathematics and Applications.
[Kit1] S. Kitaev, Partially ordered generalized patterns, Discrete Math. 298 (2005), 212-229.
[Kitbook] S. Kitaev, Patterns in permutations and words, Springer-Verlag, 2011.
[MenRem] A. Mendes and J.B. Remmel, Permutations and words counted by consecutive patterns, Adv. Appl. Math. 37 (4) (2006), 443-480.
[oeis] N.J.A. Sloane, The on-line encyclopedia of integer sequences, published electronically at http://www.research.att.com/njas/sequences/.
[Stanley] R.P. Stanley, Enumerative Combinatorics, vol. 2, Cambridge Studies in Advanced Mathematics 62, Cambridge University Press, (1999).
Department of Physics, Shanghai Normal University, Shanghai 200234, China College of Science, China Three Gorges University, Yichang 443002, China Centre for Quantum Technologies, National University of Singapore, 2 Science Drive 3, Singapore 117542 xlfeng@shnu.edu.cn Department of Physics, Shanghai Normal University, Shanghai 200234, China Centre for Quantum Technologies, National University of Singapore, 2 Science Drive 3, Singapore 117542 Department of Physics, National University of Singapore, 2 Science Drive 3, Singapore 117542 In this work, the absorption spectrum of a two-level ion in a linear Paul trap is investigated. The ion is supposed to be driven by two orthogonal laser beams: the one along the axis of the trap acts as the control light beam, the other as the probe beam. When the frequency of the control laser is tuned to the first red sideband of the ionic transition, the coupling between the internal states of the ion and the vibrational mode turns out to be a Jaynes-Cummings (JC) Hamiltonian, which together with the coupling between the probe beam and the two-level ion constructs a Λ-type three-level structure. In this case a transparency window may appear in the absorption spectrum of the probe light, which is induced by the ionic vibration and is very similar to the cavity induced transparency [1996 Opt. Commun. 126, 230-235]. On the other hand, when the frequency of the control laser is tuned to the first blue sideband of the ionic transition, the two-level ion and the vibrational mode are governed by an anti-Jaynes-Cummings (anti-JC) Hamiltonian, and the total system including the probe beam forms a V-type three-level structure. In this case Autler-Townes splitting is found in the absorption spectrum. Ionic vibration induced transparency and Autler-Townes splitting C. H. Oh December 30, 2023 =============================================================== § INTRODUCTION Quantum interferences, which may occur in many quantum processes along alternative pathways, play a very significant role in quantum mechanics. The superpositions of the probability amplitudes in different pathways give rise to phenomena analogous to constructive and destructive interference between classical waves. In quantum optics many valuable applications of quantum interferences, such as coherent population trapping <cit.>, lasing without inversion <cit.> and electromagnetically induced transparency (EIT) <cit.>, have been examined. As for EIT, it is the quantum interference that renders an originally opaque atomic medium transparent to a weak probe light in a narrow spectral window with the help of a strong control laser beam. Electromagnetically induced transparency has been studied extensively and generalized in different ways; several EIT-like phenomena have also been predicted and some of them have been observed experimentally. For instance, Rice and Brecha predicted that, in cavity QED where a single-mode cavity contains a two-level atom, a hole in the absorption spectrum of the atom may emerge at line center for a weak probe light; this phenomenon is exactly due to the quantum interference between two different transition paths induced by the cavity field, and it is called cavity induced transparency (CIT) <cit.>. It was shown that the vacuum Rabi splitting in cavity QED can even induce the transparency of a probe light in the Λ-type three-level system, called vacuum induced transparency <cit.>.
In a cavity-optomechanical system <cit.>, optomechanically induced transparency was realized in experiment <cit.>: an optical light, tuned to a sideband transition of a micro-optomechanical system, acts as the control field, and the intracavity field acts as the probe field; such a system forms a Λ-type three-level structure, and the presence of the control field can thus induce the transparency of the probe field. Electromagnetically induced transparency and EIT-like phenomena have many potential applications in controlling the propagation properties of the medium for light, such as the absorption coefficient, refractive index, propagation speed and nonlinearity <cit.>, and in quantum information processing, such as quantum information memory <cit.>.

Besides, a phenomenon similar to EIT, known as Autler-Townes splitting (ATS) <cit.>, also displays a dip in the absorption spectrum of a weak probe field in a quantum system appropriately coupled to a strong driving field. Unlike EIT, however, ATS is attributed not to destructive interference but to the driving-field-induced shift of the transition frequency <cit.>. Abi-Salloum analyzed three-level systems, Λ, V and two ladder systems with upper- and lower-level driving respectively, and found that EIT mainly appears in Λ and upper-driven ladder three-level systems and ATS in V and lower-driven ladder three-level systems <cit.>. Anisimov et al. proposed an objective method for discerning ATS from EIT <cit.>.

On the other hand, ion traps have been developed into a state-of-the-art technique for solving problems in quantum mechanics, quantum optics, quantum information processing, etc. For instance, the famous Jaynes-Cummings (JC) model and various generalized JC models were realized experimentally for two-level ions and the phonons of the ionic vibration <cit.>; the controlled-NOT operation proposed by Cirac and Zoller <cit.> was realized in experiment as well <cit.>. Moreover, very recently, high-fidelity trapped-ion-based quantum logic gates <cit.> and those with multi-element qubits <cit.> were demonstrated; and Shor's algorithm <cit.> and the quantum simulation of lattice gauge theories <cit.> were realized as a first step towards a real quantum computer.

As trapped ions can provide us a system governed by a JC or anti-JC Hamiltonian just as a cavity QED system does, one may naturally ask whether one can resort to the ionic vibration to realize the transparency of a probe light. In this paper, we give a positive answer to this question. Our results show that when the control laser light is tuned to the first red sideband of the ionic transition (corresponding to the JC model), a transparency window in the absorption spectrum of the probe light emerges; we refer to such a phenomenon as ionic vibration induced transparency (VIT). And when the control laser light is tuned to the first blue sideband of the ionic transition (corresponding to the anti-JC model), Autler-Townes splitting emerges, which also displays a dip (or reduction, hole) in the absorption spectrum.

The rest of this paper is organized as follows. In Sec. 2, we describe the theoretical model of our scheme and the Hamiltonian for the driven trapped ion in the Lamb-Dicke regime. In Sec. 3 we investigate the VIT when the frequency of the control light is tuned to the first red sideband of the ionic transition; in Sec. 4 we show the ATS when the frequency of the control light is tuned to the first blue sideband of the ionic transition. Finally, we end with discussion and conclusion in Sec. 5.
5.

§ THEORETICAL MODEL

The system we consider here is a single two-level ion confined in a linear Paul trap, where the radial confinement is assumed to be much stronger than that along the axial direction; the motion in the radial direction can thus be ignored <cit.>, and one considers only the center-of-mass mode of the ionic vibration. We assume the ion in the Paul trap is driven by two orthogonal laser beams: one along the axial (or longitudinal) direction of the trap, the other along the radial (or transverse) direction. We further assume the longitudinal laser beam is a traveling wave with frequency ω_L, acting as the control light. The transverse laser beam is a weak light with a tunable frequency ω_P, acting as the probe field.

As the motion of the ion is mainly in the longitudinal direction and the transverse motion can almost be ignored, the Hamiltonian for this system is given by <cit.>

H = ħ/2 ω_a σ_z + ħν(a^†a + 1/2) + ħ/2 Ω(σ_+ + σ_-)×[e^iη(a^†+a)-iω_Lt + e^-iη(a^†+a)+iω_Lt] + iħε(σ_-e^iω_Pt - σ_+e^-iω_Pt),

where σ_z = |e⟩⟨e| - |g⟩⟨g|, σ_+ = |e⟩⟨g|, and σ_- = |g⟩⟨e|, with |e⟩ and |g⟩ being the excited and ground states of the ion respectively; a^† and a are the creation and annihilation operators for the center-of-mass motion of the trapped ion; Ω (ε) is the Rabi frequency of the longitudinal (transverse) laser field; and η = k_L√(ħ/2mν) is the Lamb-Dicke parameter, with k_L = ω_L/c being the wave vector of the longitudinal laser field.

We suppose the trapped ion is constrained to the Lamb-Dicke regime, where the Lamb-Dicke parameter meets the condition η≪1. The Hamiltonian H can then be approximated by expanding to first order in η,

H ≈ ħ/2 ω_a σ_z + ħν(a^†a + 1/2) + ħ/2 Ω(σ_+ + σ_-)×[e^-iω_Lt(1 + iη a + iη a^†) + e^iω_Lt(1 - iη a - iη a^†)] + iħε(σ_-e^iω_Pt - σ_+e^-iω_Pt).

In the following we investigate two different cases, in which the frequency of the longitudinal laser beam, ω_L, is tuned to the first red sideband or the first blue sideband of the ionic internal transition frequency ω_a, respectively.

In the red-detuning case, the longitudinal laser beam is tuned to the first red sideband of the atomic transition, so the frequency of the control light satisfies ω_L = ω_a - ν. We apply a unitary transformation to the Hamiltonian H to deal with the counter-rotating terms in Eq. (2),

H^' = iħ ∂U_R^†/∂t U_R + U_R^†HU_R,

where U_R = exp{-i[1/2 ω_P σ_z + (ω_P - ω_L)a^†a]t}. The Hamiltonian of the system can then be simplified by discarding the rapidly oscillating terms (taking the rotating-wave approximation), and we finally obtain

H^' = H_JC + iħε(σ_- - σ_+),

where Δ = ω_a - ω_P is the detuning between the ionic transition and the probe field, and H_JC is the JC Hamiltonian taking the form

H_JC = ħ/2 Δσ_z + ħΔ a^†a + iħ/2 ηΩ(σ_+a - σ_-a^†).

In the blue-detuning case, the frequency of the longitudinal laser beam is set to ω_L = ω_a + ν, that is, the longitudinal laser is tuned to the first blue sideband of the atomic transition. Here we apply the following unitary transformation to the Hamiltonian H (Eq. (2)),

H^'' = iħ ∂U_B^†/∂t U_B + U_B^†HU_B,

where U_B = exp{-i[1/2 ω_P σ_z + (ω_L - ω_P)a^†a]t}. Similar to the red-detuning case, we obtain an anti-JC Hamiltonian by utilizing the rotating-wave approximation,

H^'' = H_AJC + iħε(σ_- - σ_+),

and the anti-JC Hamiltonian takes the form

H_AJC = ħ/2 Δσ_z - ħΔ a^†a + iħ/2 ηΩ(σ_+a^† - σ_-a).
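To make the truncated model used below concrete, the following minimal sketch (Python with NumPy; all names and parameter values are our own placeholders, with ħ set to 1, and this is not tied to any particular experiment) builds H_JC and H_AJC in a Fock space cut off at one phonon and checks that both are Hermitian.

import numpy as np

N = 2                                        # phonon Fock states kept: |0>, |1>
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # annihilation operator (truncated)
sz = np.array([[1, 0], [0, -1]])             # |e><e| - |g><g|  (basis: |e>, |g>)
sp = np.array([[0, 1], [0, 0]])              # sigma_+ = |e><g|
sm = sp.T                                    # sigma_- = |g><e|
I_ph, I_ion = np.eye(N), np.eye(2)

Delta, eta, Omega = 0.5, 0.05, 20.0          # placeholder values

# JC Hamiltonian (red-detuning frame), as in the equation for H_JC above:
H_JC = (0.5 * Delta * np.kron(I_ph, sz)
        + Delta * np.kron(a.T @ a, I_ion)
        + 0.5j * eta * Omega * (np.kron(a, sp) - np.kron(a.T, sm)))

# Anti-JC Hamiltonian (blue-detuning frame), as in the equation for H_AJC above:
H_AJC = (0.5 * Delta * np.kron(I_ph, sz)
         - Delta * np.kron(a.T @ a, I_ion)
         + 0.5j * eta * Omega * (np.kron(a.T, sp) - np.kron(a, sm)))

assert np.allclose(H_JC, H_JC.conj().T) and np.allclose(H_AJC, H_AJC.conj().T)

In this truncated basis the JC coupling connects |0e⟩ and |1g⟩, while the anti-JC coupling connects |0g⟩ and |1e⟩, which is exactly the level structure invoked below.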
If we concentrate on the case where the motion of the ion is nearly confined to its ground state and the probe light is very weak, only the zero- and one-phonon states of the ionic vibration need to be taken into account. Following Ref. <cit.>, the total states of the system in the red-detuning case are then spanned by {|0g⟩, |0e⟩, |1g⟩}, where |0g⟩ ≡ |0⟩⊗|g⟩ and so on; here the numerical index, 0 or 1, indicates the phonon number state. The energy level structure of these three states is sketched in Fig. 1(a). Similarly, in the blue-detuning case the total states of the system are spanned by {|0g⟩, |0e⟩, |1e⟩}. The energy level structure in this case is sketched in Fig. 1(b).

§ VIT IN RED-DETUNING CASE

Now let us consider the spontaneous emission of the ionic excited state and the heating effect of the vibrational motion induced by coupling to the environment. We suppose the interaction between the ionic internal states and their reservoir is weak, and so is the interaction between the vibrational mode and its reservoir. One can thus adopt the Born and Markov approximations to deal with the spontaneous emission and the heating effect, and the master equation for such a system takes the form

ρ̇ = -i/ħ[H^', ρ] + κ(n+1)(2aρa^† - a^†aρ - ρa^†a) + κn(2a^†ρa - aa^†ρ - ρaa^†) + γ/2(2σ_-ρσ_+ - σ_+σ_-ρ - ρσ_+σ_-),

where κ is the heating rate of the vibrational motion, n is the average thermal phonon number, and γ is the spontaneous emission rate; here we have assumed that the ionic excited state couples to the vacuum reservoir of the electromagnetic field. As supposed above, the vibration of the ion is nearly confined to its ground state, so the average thermal phonon number n is almost zero.

The elements of the density matrix in the states {|0g⟩, |0e⟩, |1g⟩} take the following form according to the master equation (11):

ρ̇_0g;0g = γρ_0e;0e + 2κρ_1g;1g + ε(ρ_0e;0g + ρ_0g;0e),

ρ̇_0g;0e = (iΔ - γ/2)ρ_0g;0e + ηΩ/2 ρ_0g;1g - ε(ρ_0g;0g - ρ_0e;0e),

ρ̇_0g;1g = (iΔ - κ)ρ_0g;1g - ηΩ/2 ρ_0g;0e + ερ_0e;1g,

ρ̇_0e;0e = -γρ_0e;0e + ηΩ/2(ρ_0e;1g + ρ_1g;0e) - ε(ρ_0g;0e + ρ_0e;0g),

ρ̇_0e;1g = -(κ + γ/2)ρ_0e;1g - ερ_0g;1g + ηΩ/2(ρ_1g;1g - ρ_0e;0e),

ρ̇_1g;1g = -2κρ_1g;1g - ηΩ/2(ρ_0e;1g + ρ_1g;0e).

We suppose both the internal state and the vibrational mode of the ion are initially in the ground state, |0g⟩, that is, ρ_0g;0g(0) = 1, ρ_0e;0e(0) = ρ_1g;1g(0) = ρ_0e;1g(0) = 0. In order to examine the properties of the refraction and the absorption for the probe light, we adopt the complex susceptibility, which is given by χ = χ^' + iχ^'' ∝ (ρ_0g;0e/iε) <cit.>, where the real part χ^' stands for the index of refraction of the medium and the imaginary part χ^'' is proportional to the absorption coefficient. Hence our task is to solve the equations for the elements of the density matrix in order to obtain ρ_0g;0e. For simplicity, in the following we focus only on the steady-state solution, obtained by setting the first derivatives of the density-matrix elements with respect to time to zero. We finally get the steady-state solution for ρ_0g;0e, which yields the information about the absorption coefficient χ^'' and the dispersion coefficient χ^' of a weak probe field:

ρ_0g;0e = ε(iΔ - κ)/[(iΔ - γ/2)(iΔ - κ) + (ηΩ/2)^2].

The numerical results for Im[ρ_0g;0e/iε] are given in Fig. 2, which shows that VIT exists for the probe light when its frequency is close to the ionic transition frequency.
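Spectra of this kind can be reproduced directly from the closed-form steady-state coherences. The sketch below (Python with NumPy; the parameter values are placeholders of our own choosing, not those used for the figures) evaluates the absorption Im[ρ_0g;0e/iε] from the red-detuning solution above, together with its blue-detuning counterpart derived in the next section, so both the VIT window and the ATS dip can be scanned versus the detuning Δ.

import numpy as np

def coh_red(Delta, eps, eta, Omega, gamma, kappa):
    # steady-state rho_{0g;0e} of the red-detuning (VIT) case
    return eps * (1j*Delta - kappa) / (
        (1j*Delta - gamma/2) * (1j*Delta - kappa) + (eta*Omega/2)**2)

def coh_blue(Delta, eps, eta, Omega, gamma, kappa):
    # steady-state rho_{0g;0e} of the blue-detuning (ATS) case (next section)
    return eps * (1j*Delta - 3*kappa - gamma) / (
        (1j*Delta - 2*kappa - gamma/2) * (1j*Delta - 3*kappa - gamma)
        + (eta*Omega/2)**2)

gamma = 1.0                              # rates in units of gamma
kappa, eta, eps = 0.01, 0.05, 1e-3       # placeholder values
Delta = np.linspace(-4, 4, 801) * gamma  # probe detuning scan

for Omega in (20.0, 40.0):               # control Rabi frequency
    absorption = np.imag(coh_red(Delta, eps, eta, Omega, gamma, kappa) / (1j*eps))
    print(Omega, absorption[len(Delta)//2])  # on-resonance value: depth of the dip

With Ω = 0 the on-resonance absorption takes the bare Lorentzian value 2/γ, while for (ηΩ/2)^2 ≫ γκ it collapses towards κ/(ηΩ/2)^2, which is the transparency window discussed next.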
Figure 2(a) indicates that the transparency window becomes wider and deeper as Ω increases while the other parameters are unchanged. Figure 2(b) shows that as the heating rate κ increases, the transparency window becomes shallower, which indicates the heating effect makes the system more opaque.

§ ATS IN BLUE-DETUNING CASE

The master equation for the blue-detuning case can be derived in a way similar to the red-detuning case, and it takes the following form:

ρ̇ = -i/ħ[H^'', ρ] + κ(n+1)(2a^†ρa - aa^†ρ - ρaa^†) + κn(2aρa^† - a^†aρ - ρa^†a) + γ/2(2σ_-ρσ_+ - σ_+σ_-ρ - ρσ_+σ_-).

Because the ionic vibration is supposed to be confined to its ground state, the average thermal phonon number n is almost zero and the motion of the ion is mostly in the zero- or one-phonon state. According to the master equation, the elements of the density matrix in the states {|0g⟩, |0e⟩, |1e⟩} are of the form:

ρ̇_0g;0g = -2κρ_0g;0g - ηΩ/2(ρ_0g;1e + ρ_1e;0g) + ε(ρ_0e;0g + ρ_0g;0e) + γρ_0e;0e,

ρ̇_0g;0e = (iΔ - 2κ - γ/2)ρ_0g;0e - ηΩ/2 ρ_1e;0e - ε(ρ_0g;0g - ρ_0e;0e),

ρ̇_1e;0g = -(3κ + γ/2)ρ_1e;0g + ερ_1e;0e + ηΩ/2(ρ_0g;0g - ρ_1e;1e),

ρ̇_1e;0e = (iΔ - 3κ - γ)ρ_1e;0e + ηΩ/2 ρ_0g;0e - ερ_1e;0g,

ρ̇_0e;0e = -(2κ + γ)ρ_0e;0e - ε(ρ_0g;0e + ρ_0e;0g),

ρ̇_1e;1e = -(4κ + γ)ρ_1e;1e + 2κρ_0e;0e + ηΩ/2(ρ_0g;1e + ρ_1e;0g).

In the same way, the susceptibility is given by χ = χ^' + iχ^'' ∝ (ρ_0g;0e/iε), where χ^' and χ^'' are related to the refraction of the medium and the absorption coefficient, respectively. As done in Sec. III, the steady-state solution for ρ_0g;0e can be derived by setting the derivatives of the elements of the density matrix in Eqs. (20-25) to zero, with the initial condition ρ_0g;0g(0) = 1, ρ_0e;0e(0) = ρ_1e;1e(0) = ρ_1e;0g(0) = 0. Thus the solution for ρ_0g;0e is

ρ_0g;0e = ε(iΔ - 3κ - γ)/[(iΔ - 2κ - γ/2)(iΔ - 3κ - γ) + (ηΩ/2)^2].

In Fig. 3 we plot the numerical result of Im[ρ_0g;0e/iε] as a function of Δ/γ. Similar to the red-detuning case, a dip emerges in the absorption spectrum of the probe light. The dip becomes deeper and wider as Ω increases while the other parameters are unchanged.

§ DISCUSSION AND CONCLUSION

The absorption spectra of the probe light in both the red-detuning and blue-detuning cases have been investigated, and a dip in the spectrum can emerge in both cases. They differ as follows: in the red-detuning case, the energy level configuration is of the Λ-type three-level structure and the dip in the absorption spectrum exhibits the properties of EIT, that is, a narrow and deep dip can appear even when the driving light is not so strong; in the blue-detuning case, the energy level configuration takes the V-type three-level structure and the dip exhibits the properties of ATS, whose appearance requires a stronger driving light, and the dip is either narrow but shallow or deep but wide, i.e., the dip cannot be narrow and deep at the same time.

Our proposal about the VIT may be verified experimentally. On the one hand, the techniques for ion traps have been utilized to realize much more complicated quantum processes <cit.> and quantum logic gates <cit.>, as mentioned in Sec. I; such techniques pave the way for the VIT and ATS presented here. On the other hand, vacuum induced transparency in a cavity <cit.> indicates that the transparency of light can be achieved with several or even a single atom; thus our proposal should be realizable experimentally.

To summarize, in the present work we have investigated ionic vibration induced transparency and Autler-Townes splitting in a linear Paul trap.
When the control light is tuned to the first red sideband of the ionic transition, the VIT emerges, and it is very similar to the CIT <cit.>. When the control light is tuned to the first blue sideband of the ionic transition, the ATS emerges via the anti-JC Hamiltonian. We find in both cases that the dip in the absorption spectrum becomes wider and deeper as the Rabi frequency of the control light increases.

§ ACKNOWLEDGEMENT

This work is supported by the Natural Science Foundation of Shanghai (Grant No. 15ZR1430600) and the National Natural Science Foundation of China under Grant Nos. 61475168, 11674231, 11574179 and 11074079. XLF is sponsored by the Shanghai Gaofeng & Gaoyuan Project for University Academic Program Development.

CPT G. Alzetta, A. Gozzini, L. Moi, and G. Orriols, An experimental method for the observation of r.f. transitions and laser beat resonances in oriented Na vapour, Nuovo Cimento Soc. Ital. Fis. B 36, 5 (1976).

CPT2 R. M. Whitley and C. R. Stroud, Jr., Double optical resonance, Phys. Rev. A 14, 1498 (1976).

LWI S. E. Harris, Lasers without inversion: Interference of lifetime-broadened resonances, Phys. Rev. Lett. 62, 1033 (1989).

LWI2 M. O. Scully, S.-Y. Zhu, and A. Gavrielides, Degenerate quantum-beat laser: Lasing without inversion and inversion without lasing, Phys. Rev. Lett. 62, 2813 (1989).

EIT1 S. E. Harris, J. E. Field, and A. Imamoğlu, Nonlinear optical processes using electromagnetically induced transparency, Phys. Rev. Lett. 64, 1107 (1990).

EIT2 M. Fleischhauer, A. Imamoglu, and J. P. Marangos, Electromagnetically induced transparency: Optics in coherent media, Rev. Mod. Phys. 77, 633 (2005).

EIT3 K.-J. Boller, A. Imamoğlu, and S. E. Harris, Observation of electromagnetically induced transparency, Phys. Rev. Lett. 66, 2593 (1991).

CIT P. R. Rice and R. J. Brecha, Cavity induced transparency, Opt. Commun. 126, 230 (1996).

VaIT0 J. E. Field, Vacuum-Rabi-splitting-induced transparency, Phys. Rev. A 47, 5064 (1993).

VaIT H. Tanji-Suzuki, W. Chen, R. Landig, J. Simon, and V. Vuletić, Vacuum-induced transparency, Science 333, 1266 (2011).

COM M. Aspelmeyer, T. J. Kippenberg, and F. Marquardt, Cavity optomechanics, Rev. Mod. Phys. 86, 1391 (2014).

OMIT1 A. Schliesser, Optomechanically induced transparency, Science 330, 1520 (2010).

OMIT2 P.-C. Ma, J.-Q. Zhang, Y. X., M. Feng, and Z.-M. Zhang, Tunable double optomechanically induced transparency in an optomechanical system, Phys. Rev. A 90, 043825 (2014).

Lukin M. D. Lukin, Colloquium: Trapping and manipulating photon states in atomic ensembles, Rev. Mod. Phys. 75, 457 (2003).

Polariton M. Fleischhauer and M. D. Lukin, Dark-state polaritons in electromagnetically induced transparency, Phys. Rev. Lett. 84, 5094 (2000).

ATS1 S. H. Autler and C. H. Townes, Stark effect in rapidly varying fields, Phys. Rev. 100, 703 (1955).

ATS2 R. Shimano and M. Kuwata-Gonokami, Observation of Autler-Townes splitting of biexcitons in CuCl, Phys. Rev. Lett. 72, 530 (1994).

ATS3 S. Novikov, J. E. Robinson, Z. K. Keane, et al., Autler-Townes splitting in a three-dimensional transmon superconducting qubit, Phys. Rev. B 88, 060503(R) (2013).

EITATS T. Y. Abi-Salloum, Electromagnetically induced transparency and Autler-Townes splitting: Two similar but distinct phenomena in two categories of three-level atomic systems, Phys. Rev. A 81, 053836 (2010).

EA1 P. M. Anisimov, J. P. Dowling, and B. C. Sanders, Objectively discerning Autler-Townes splitting from electromagnetically induced transparency, Phys. Rev. Lett. 107, 163604 (2011).

RMP-ion D. Leibfried, R. Blatt, C. Monroe, and D.
Wineland, Quantum dynamics of single trapped ions, Rev. Mod. Phys. 75, 281 (2003).

Cirac-Zoller J. I. Cirac and P. Zoller, Quantum computations with cold trapped ions, Phys. Rev. Lett. 74, 4091 (1995).

Cirac-ZollerEXP F. Schmidt-Kaler, H. Häffner, M. Riebe et al., Realization of the Cirac–Zoller controlled-NOT quantum gate, Nature 422, 408 (2003).

iontrapEXP C. J. Ballance, T. P. Harty, N. M. Linke, M. A. Sepiol, and D. M. Lucas, High-fidelity quantum logic gates using trapped-ion hyperfine qubits, Phys. Rev. Lett. 117, 060504 (2016).

Wineland2 J. Gaebler, T. Tan, Y. Lin, Y. Wan, R. Bowler, A. Keith, S. Glancy, K. Coakley, E. Knill, D. Leibfried, and D. Wineland, High-fidelity universal gate set for ^9Be^+ ion qubits, Phys. Rev. Lett. 117, 060505 (2016).

Wineland1 T. R. Tan, J. P. Gaebler, Y. Lin, Y. Wan, R. Bowler, D. Leibfried, and D. J. Wineland, Multi-element logic gates for trapped-ion qubits, Nature 528, 380 (2015).

Blatt1 T. Monz, D. Nigg, E. A. Martinez, M. F. Brandl, P. Schindler, R. Rines, S. X. Wang, I. L. Chuang, and R. Blatt, Realization of a scalable Shor algorithm, Science 351, 1068 (2016).

Blatt2 E. A. Martinez, C. A. Muschik, P. Schindler, D. Nigg, A. Erhard, M. Heyl, P. Hauke, M. Dalmonte, T. Monz, P. Zoller, and R. Blatt, Real-time dynamics of lattice gauge theories with a few-qubit quantum computer, Nature 534, 516 (2016).

Scully M. O. Scully and M. S. Zubairy, Quantum Optics (Cambridge University Press, 1997), Chap. 7, p. 226.

Meystre P. Meystre and M. Sargent III, Elements of Quantum Optics (Springer, 2007), Chap. 9, p. 245.
Tars: Timeliness-aware Adaptive Replica Selection for Key-Value Stores

Wanchun Jiang, Liyuan Fang, Haiming Xie, Xiangqian Zhou, Jianxin Wang

School of Information Science and Engineering, Central South University, Changsha, Hunan, China 410083

Email: jiangwc@csu.edu.cn

================================================================

In current large-scale distributed key-value stores, a single end-user request may lead to key-value accesses across tens or hundreds of servers. The tail latency of these key-value accesses is crucial to the user experience and greatly impacts the revenue. To cut the tail latency, it is crucial for clients to choose the fastest replica server for each key-value access whenever possible. Aware of the challenges of time-varying performance across servers and of herd behaviors, the adaptive replica selection scheme C3 was proposed recently. In C3, feedback from individual servers is brought into replica ranking to reflect the time-varying performance of servers, and a distributed rate control and backpressure mechanism is invented. Despite C3's good performance, we reveal the timeliness issue of C3, which has large impacts on both the replica ranking and the rate control, and propose the Tars (timeliness-aware adaptive replica selection) scheme. Following the same framework as C3, Tars improves the replica ranking by taking the timeliness of the feedback information into consideration, and also revises the rate control of C3. Simulation results confirm that Tars outperforms C3.

Replica Selection, Rate Control, Key-Value Stores

§ INTRODUCTION

In current large-scale distributed key-value store systems, data is partitioned into small pieces, replicated and distributed across servers for parallel access and scalability. Consequently, a single end-user request may need key-value accesses from tens or hundreds of servers <cit.>. The tail latency of these key-value accesses determines the response time of the end-user request, which is directly associated with the user experience and the revenue <cit.>. Nevertheless, because the performance of servers is time-varying <cit.>, the tail latency is hard to guarantee, and may become unexpectedly long under certain conditions. A recent study shows that the 99^th percentile latency can be one order of magnitude larger than the median latency <cit.>, indicating that there is large room to cut the tail latency of key-value accesses.

To cut the tail latency, the replica selection scheme, which chooses the fastest replica server for each key-value access at clients as far as possible, is crucial <cit.>. Many other methods, including duplicating or reissuing requests <cit.> for small tail latency, can also benefit from a good replica selection scheme. However, the replica selection schemes of current classic key-value stores are kept very simple for efficiency. For example, OpenStack Swift just reads from a random server and retries in case of failures. HBase relies on HDFS, which chooses the physically closest replica server <cit.>. Riak uses an external load balancer such as Nginx <cit.>, which employs the Least-Outstanding Requests (LOR) strategy: the client chooses the server to which it has sent the fewest outstanding requests. MongoDB selects the replica server with the smallest network latency <cit.>.
Cassandra employs the dynamic snitching strategy, which considers the history of read latencies and I/O load <cit.>. Obviously, none of these methods takes the time-varying performance of servers into full consideration. Hence, they can hardly ensure the choice of the fastest replica server.

Beyond the time-varying performance of servers, the design of a replica selection scheme faces the following challenges. First, as all the clients independently choose the fastest server, they may concurrently access the fastest server, leading to great degradation of its performance. The same behavior will subsequently be repeated on a different fast server. Therefore, this kind of herd behavior should be avoided by the replica selection algorithm. Second, the replica selection scheme should be simple enough in terms of both computation and coordination. Aware of these challenges, the adaptive replica selection scheme C3 was proposed recently <cit.>. C3 piggybacks the queue-size of waiting keys and the service time from the servers to guide the replica ranking at clients, and introduces both the Cubic rate control algorithm <cit.> and a backpressure mechanism to adapt the sending rate of keys at the clients to the observed receipt capacity of servers. In this way, C3 can adapt to the time-varying service rate across servers and avoid the herd behavior <cit.>. The great benefit of these innovations, i.e., introducing feedback and the rate control and backpressure mechanism into the replica selection scheme, is confirmed by both experiments with Amazon EC2 and at-scale simulations.

In this paper, we reveal the timeliness issue of C3, which has large impacts on both the replica ranking and the rate control. First, in the replica ranking of C3, when the network delay is ignored, the server with minimal Q_s/μ_s is the best candidate to cut the tail latency, where Q_s denotes the queue-size of waiting keys at the server and μ_s stands for the service rate of that server. But our reproduction of the C3 simulations shows that the estimation accuracy of Q_s is poor in C3, especially when the concurrency compensation term n*OS_s takes effect. Detailed analysis reveals that it is the poor timeliness of the feedback information that leads to the poor estimation accuracy, and that the term n*OS_s cannot properly reflect the degree of concurrency. Second, due to the timeliness of feedback information, congestion control algorithms for large delay are expected in key-value stores. This may be why the Cubic rate control algorithm is utilized by C3. But the goal of rate control in C3 is to adapt the sending rate of keys to a server s, sRate_s, to the reception rate of returned values, rRate_s, from the server s. This is different from the goal of CUBIC, which adapts the sending rates of all clients to the total service capacity of the server. Obviously, as the load of a server s is decided by all these clients instead of a single one, rRate_s cannot reflect the total service capacity of server s. Therefore, the goal of rate control in C3 should be revised.

Motivated by these observations, we propose the timeliness-aware adaptive replica selection (Tars) scheme, improving both the replica ranking and the rate control of C3 in this paper. Tars follows the same framework as C3, and accordingly is simple enough for implementation. Different from C3, Tars piggybacks the incoming rate of keys λ_s and the service rate μ_s from servers, and takes the timeliness of feedback information into consideration.
In replica ranking, Tars develops a scoring method that works without feedback information when the timeliness of the feedback information is poor. When the feedback information is fresh, Tars estimates the queue-size more accurately with the help of the feedback information λ_s and μ_s. Moreover, Tars revises the goal of the rate control in C3, making it consistent with the goal of the congestion control algorithms for the Internet <cit.>. Although the timeliness issue is not totally addressed, Tars outperforms C3 with these improvements, as confirmed by the simulations based on the open source code of C3. In sum, we make the following contributions in this paper:

* We reveal the timeliness issue of the framework developed by C3, and the drawbacks of C3 on replica ranking and rate control.

* To address these issues, we propose the Tars scheme, which considers the timeliness of feedback information in replica ranking and revises the goal of rate control. Simulation confirms the advantages of Tars over C3.

The rest of this paper is organized as follows: Section II introduces the background, and the motivation behind this work is presented in Section III. Subsequently, Section IV describes the design of the Tars scheme, and Section V evaluates Tars with simulations based on the open source code of C3. Finally, Section VI concludes this paper.

§ BACKGROUND

In the key-value store, when a web server receives an end-user request, it typically generates tens or hundreds of keys, and needs to access the corresponding values from different servers. The web server is thus also the client of the key-value store in the following, as shown in <ref>. For each key, the corresponding value is typically replicated and distributed across different servers. When there is a key to send, the client can find the corresponding replica servers via consistent hashing, and selects one replica server to send the key to for the key-value access. Obviously, to cut the tail latency of key-value accesses, the fastest server is desired in the replica selection for each key at the client. On the other hand, a server can receive keys from different clients, and its service rate for keys is time-varying. When the server is busy, newly arriving keys are put into the waiting queue. After a key is served, the corresponding value is returned to the client.

It is hard to ensure the choice of the fastest server for every key such that the corresponding value is returned as soon as possible. One reason is that the service times of keys are time-varying, as the performance of a server is influenced by many factors <cit.>. The other reason is that the number of waiting keys at a server is unknown, due to the large degree of concurrency in key-value access. In other words, to know which server is the fastest, we not only need to obtain the network latency, but also have to capture the waiting time and the service time of keys at the server. Furthermore, the herd behavior, where the fast servers are preferred by most of the clients and suffer great performance degradation due to the accompanying concurrent access, should be avoided.

Aware of these challenges, C3 suggests that each server monitor the queue-size of its waiting keys and its service time, and piggyback this information to the client when a value is returned, as shown in <ref>. The feedback information is utilized for both the replica ranking and the rate control in C3. Briefly, on the reception of a returned value, the client reads the feedback information and adjusts the RL based on it via the rate control algorithm.
When there is a key to be sent, the client computes the score of each replica, ranks the replicas based on the scores via the RS scheduler, and then sequentially inquires the states of the RLs corresponding to these replicas. If the current sending rate is within an RL, the corresponding replica is chosen to send the key and the inquiry stops. Otherwise, the RL corresponding to the next replica in the ranking is inquired. If the current sending rate is not within any of the RLs, the backpressure mechanism is triggered and the key is put into the backlog queue until there is at least one server within the RL again.

In the replica ranking of C3, the replica server with the smallest expected waiting time q̅_̅s̅*T̅_̅s̅ is preferred, where T̅_̅s̅ is the EWMA of the feedback service time T_s of a key and q̅_̅s̅ is the queue-size estimation of the waiting keys. q̅_̅s̅ is defined as follows:

q̅_̅s̅ ≜ 1 + q_s + n*os_s

Here q_s is the EWMA of the feedback queue-size Q^f_s, n is the number of clients, and os_s is the number of outstanding keys whose values have not yet been returned. In equation (<ref>), the term n*os_s is considered as the concurrency compensation <cit.>.

The specific scoring function used for the replica ranking of C3 is as follows:

Ψ_s = R̅_̅s̅ - T̅_̅s̅ + q̅_̅s̅^3*T̅_̅s̅

where R̅_̅s̅ is the Exponentially Weighted Moving Average (EWMA) of the response times witnessed by the client, and thus R̅_̅s̅-T̅_̅s̅ is considered as the delay. Moreover, the term q̅_̅s̅^3*T̅_̅s̅ is the replacement of q̅_̅s̅*T̅_̅s̅ in order to penalize long queues in Eq. (<ref>), and the mechanism is named Cubic replica selection in C3. The replica server with the smallest Ψ_s is selected by the RS scheduler when a key is going to be sent.

The rate control and backpressure mechanism is as follows. As shown in <ref>, a client maintains a Rate Limiter (RL) for each server to limit the number of keys sent to the server within a specified time interval δ, named sRate_s. A key will not be sent to a server when the corresponding rate is limited. If the rates of all the replica servers of a key are limited, the key will be put into a backlog queue until the rate limitation of a replica server is released. The detailed rate control algorithm is borrowed from CUBIC <cit.>. Let rRate_s be the number of values received from a server in a δ interval. sRate_s is increased according to the following cubic function when sRate_s < rRate_s:

sRate_s → γ*(Δ T - (β*R_0/γ)^1/3)^3 + R_0

wherein R_0 is the recorded sRate_s before the previous rate-decrease, Δ T is the elapsed time since the previous rate-decrease event, and γ is a constant coefficient. When sRate_s > rRate_s, and after a 2*δ hysteresis period following a rate increase, sRate_s is decreased to β*sRate_s, where β is a positive constant smaller than 1. The hysteresis period 2*δ is enforced for the measurement of rRate_s after a rate increase. The rate adjustment is done on the receipt of each returned value, aiming to adapt sRate_s to rRate_s, but the rate adjustment result only takes effect when there are keys to be sent.

With the cooperation of the replica ranking method and the rate control and backpressure mechanism, C3 can adapt to the time-varying service time across servers and avoid the herd behavior, and accordingly achieve high throughput and low tail latency, as confirmed by experiments and simulations in <cit.>.
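To make the above mechanism concrete, the following minimal sketch (in Python; function and constant names are ours, the constants are placeholders, and this is a paraphrase of the description above rather than the actual C3 implementation) puts together C3's queue-size estimate of Eq. (1), the scoring function of Eq. (2), and the cubic rate increase of Eq. (3).

GAMMA, BETA = 0.000004, 0.2        # cubic coefficients; delta-T in ms

def c3_queue_estimate(q_s, os_s, n_clients):
    # Eq. (1): EWMA feedback queue-size plus the n*os_s concurrency term.
    return 1 + q_s + n_clients * os_s

def c3_score(R_bar, T_bar, q_bar):
    # Eq. (2): estimated delay plus a cubically penalized queueing term.
    return (R_bar - T_bar) + (q_bar ** 3) * T_bar

def cubic_increase(R0, dT):
    # Eq. (3): the inflection point sits at dT = (BETA*R0/GAMMA)^(1/3),
    # where the rate returns to its value R0 before the last decrease.
    K = (BETA * R0 / GAMMA) ** (1.0 / 3.0)
    return GAMMA * (dT - K) ** 3 + R0

def rank_replicas(replicas, stats):
    # stats[s] = (R_bar, T_bar, q_bar); the smallest score ranks first.
    return sorted(replicas, key=lambda s: c3_score(*stats[s]))

With GAMMA = 0.000004 and Δ T in milliseconds, cubic_increase grows slowly in a roughly 100 ms saddle region around R_0, which matches the configuration used later in the evaluation section.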
§ MOTIVATION

Although C3 makes great innovations in bringing feedback into the replica ranking and developing the rate control and backpressure mechanism, the detailed replica ranking method and rate control algorithm can be further improved. Specifically, we find the timeliness issue of C3, and the drawbacks of C3 on the estimation of the queue-size in the replica ranking and on the goal of rate control. For the convenience of reading, we summarize the key notations used in this paper in Table <ref>.

§.§ Timeliness of Feedback

The feedback information plays a key role in the above framework of replica selection developed by C3. However, we find that the timeliness of the feedback information may frequently be poor. More specifically, the feedback information is delayed for a propagation time τ_d before it arrives at the client, and there is also a time interval τ_w between the reception of the feedback information and the utilization of this feedback information for the current replica selection. In fact, we find the value of τ_w can vary in a large range, due to the following reasons. First, after a client receives feedback information from a server, it may not send keys to this server for a long time, either because this server does not belong to the replica group of the following keys sent by this client, or because this server is not selected due to its poor performance. In this condition, the feedback information cannot be renewed timely. Second, even if the client sends a key to this server right after receiving the feedback information from it, the feedback information will only be renewed when the value of this key is returned. Obviously, in this case the value of τ_w is larger than the latency of this key-value access. As the 99^th percentile latency of key-value accesses can be one order of magnitude larger than the median latency <cit.>, the value of τ_w can also change in a very large range.

To exhibit the timeliness of the feedback information, we reproduce the simulations of C3 (see part A of section V for the detailed simulation configuration), and collect the values of τ_w before the sending of each key. [600000 values are collected. After the CDF is computed, we present only 5% of the data points to reduce the size of the figure, without changing the shape of the curves.] The cumulative distribution function of τ_w is shown in <ref>. Consistent with the above insights, τ_w has a very large probability to become as large as hundreds of milliseconds, especially when the server utilization is low, while the network latency τ_d is only in the order of several milliseconds. Therefore, the timeliness of feedback information in the above framework is poor. This may also be the reason why the replica selection algorithms in current classic key-value stores do not rely heavily on feedback information. Subsequently, we focus on the impacts of the timeliness of feedback information on the replica ranking and rate control of C3.

§.§ Replica Ranking

Due to the poor timeliness of feedback information, the estimation accuracy of the queue-size of the waiting keys and of the service time, both of which are crucial for the replica ranking of C3, is poor. Specifically, as shown in <ref>, we randomly choose a server s and show its queue-size of waiting keys Q_s at each time when the scoring is executed at clients, as well as all of the feedback Q^f_s received from server s at clients, the os_s, and the estimation q̅_̅s̅ of the queue-size of server s, in a random simulation time interval.
There is a large difference among the piggybacked queue-size Q^f_s, the estimation q̅_̅s̅ and the real queue-size Q_s. The large degree of concurrency in key-value access is considered one of the main reasons for this phenomenon, and accordingly the term n*os_s is utilized as the concurrency compensation in the computation of the estimated queue-size q̅_s, as presented in C3 <cit.>. However, the term n*os_s has not helped to improve the estimation accuracy of the queue-size, as illustrated in <ref>.

In fact, dividing the data of <ref> into two subfigures with a threshold of 100 ms on τ_w, we show that it is the poor timeliness of feedback information that leads to the poor estimation accuracy of the queue-size. Specifically, as shown in <ref>, the difference among the real queue-size Q_s, its estimation q̅_s and the feedback queue-size Q^f_s is small when τ_w ≤ 100 ms, except for the condition that os_s is nonzero. When the value of τ_w becomes of the order of hundreds of milliseconds, the real queue-size Q_s can change greatly during such a large time interval, and thus cannot be estimated based on the old feedback information. Therefore, when τ_w is large, replica selection methods independent of feedback information are needed. Similarly, the timeliness of the feedback service rate of servers may also frequently become poor.

Furthermore, when τ_w is small, the queue-size may still change a lot due to the large degree of concurrency in key-value access. The term n*os_s cannot properly represent the degree of concurrency, as the degree of concurrency will be constrained by the rate control algorithm. Hence, it is not reasonable to assign the weight n to os_s. In fact, we find the term n*os_s is helpful in simulation, not because it compensates for the impact of concurrency and makes the queue-size estimation better, but because the corresponding server should not be chosen before the outstanding keys are served and the feedback information is piggybacked and renewed. To improve the queue-size estimation in this condition, we suggest piggybacking some better variables as the feedback information besides Q^f_s and T_s.

§.§ Rate Control

We also find that the timeliness of feedback information has great impact on the rate control of C3. Although the rate adjustment is executed immediately after the feedback information is received from a server, this rate adjustment does not make sense if the client does not send any key to this server for a relatively long time interval. This is much different from the congestion control of the Internet, which assumes there are always data to send. Even if the client sends keys to this server right after the rate adjustment, i.e., the rate adjustment results take effect on time, the congestion control algorithm faces a forward time delay τ_w, which denotes the timeliness of feedback information. Note that the value of τ_w can change in a very large range, i.e., from several milliseconds to hundreds of milliseconds. This kind of delay has great impact on the stability of rate control algorithms. This may be why C3 adopts the CUBIC algorithm, which is designed for networks with large bandwidth-delay product.

Moreover, as clients may send or not send keys to a server at any time, the number of senders (clients) of each server also varies in a large range. Because the feedback information is only piggybacked in the returned value, the server can only notify one client for rate adjustment after the service of one key.
Note that the feedback information may not take effect timely, as discussed above. Moreover, a server needs to ask many clients for rate adjustment in order to reduce or increase its incoming rate of keys once. Therefore, a large amount of time is needed if a server wants to adjust the incoming rate of keys even once. This is also different from the traditional congestion control of the Internet.

Although the distributed rate control is inspired by the congestion control of the Internet, the goal of rate control in C3 is not suitable for key-value stores. Specifically, in C3, rRate_s is used to represent the perceived performance of a server s, and the goal is to adapt sRate_s to rRate_s at clients. The benefit is that no feedback is needed, because rRate_s can be independently measured at the client. However, rRate_s can only reflect the service capacity of server s allocated to this client, while the service capacity of server s is competed for by many clients, as it accepts keys from many different clients. Hence rRate_s may not be able to reflect the total service capacity of the server. This is different from the CUBIC algorithm for the Internet, which adapts the sending rates of all clients to the total service capacity of the server. In CUBIC, the total service capacity of the server is reflected by whether the buffer overflows. Hence, the goal of rate control in C3 should be revised. Moreover, although CUBIC is borrowed, it focuses on high speed instead of large latency; the Cubic rate control function would therefore not be the most suitable one for key-value stores with such large delays. A rate control algorithm designed for large latency is more suitable.

In a word, we reveal the timeliness issue of the replica selection framework developed by C3, and the drawbacks of C3 on the replica ranking and the goal of rate control.

§ DESIGN OF TARS

Motivated by the above insights, we design the Tars scheme, which follows the same framework as C3, but improves the replica ranking and rate control methods. The specific improvements are as follows.

§.§ Timeliness-aware Replica Ranking

The procedure of replica ranking in Tars is the same as that of C3, illustrated in <ref>, but Tars has different feedback information and scoring methods.

Feedback Information In contrast to C3, where the queue length Q^f_s and the service time T_s are piggybacked, Tars utilizes the following feedback information: the queue length Q^f_s, the incoming rate of keys λ_s, the service rate μ_s and the time τ^s_w that the key stayed at the server. Obviously, τ^s_w is the sum of the service time T_s and the queuing time of the key at the server. Note that μ_s is different from T^-1_s when the server can concurrently process several keys, as discussed in part A of section V. T_s is never used again; it is replaced by τ^s_w and μ_s in Tars.

Timeliness of Feedback As discussed in section III, the timeliness of feedback is represented by τ = τ_d + τ_w. Obviously, the duplex network delay can be computed as τ_d = R_s - τ^s_w, where R_s is the response time witnessed by the client, without EWMA. Moreover, the time interval τ_w starts when a client receives a returned value and the feedback information is extracted. The end of the time interval τ_w is the current time, when a new key is going to be sent based on the replica ranking utilizing this feedback information.
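The bookkeeping just described is simple; the following sketch (Python; class and variable names are ours, and timestamps are assumed to be in milliseconds) records one feedback tuple per server and derives τ_d and τ_w from it at scoring time.

import time

def now_ms():
    return time.time() * 1000.0

class FeedbackRecord:
    # Per-server record of the feedback piggybacked by Tars.
    def __init__(self):
        self.Qf = 0.0          # queue length Q^f_s
        self.lam = 0.0         # EWMA incoming key rate lambda_s
        self.mu = 1.0          # EWMA service rate mu_s
        self.tau_sw = 0.0      # time the key stayed at the server
        self.received_at = None

    def on_value_returned(self, Qf, lam, mu, tau_sw):
        self.Qf, self.lam, self.mu, self.tau_sw = Qf, lam, mu, tau_sw
        self.received_at = now_ms()

    def timeliness(self, R_s):
        # tau_d = R_s - tau^s_w is the duplex network delay;
        # tau_w is the elapsed time since this feedback was received.
        tau_d = R_s - self.tau_sw
        tau_w = now_ms() - self.received_at
        return tau_d, tau_w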
Because τ_d is only of the order of several milliseconds while τ_w can become as large as hundreds of milliseconds, Tars mainly uses τ_w to represent the timeliness of feedback information. When the timeliness of feedback information is poor, Tars uses a scoring method independent of feedback information. Conversely, when the feedback is fresh, Tars estimates the queue-size of waiting keys and the service rate more accurately, and employs a scoring method similar to C3. Referring to the dynamic snitch mechanism of Cassandra, 100 ms is chosen as the boundary between the two scoring methods.

Queue-size Estimation When τ_w ≤ 100 ms, the scoring method based on queue-size estimation is adopted in Tars. Specifically, Tars assumes that both λ_s and μ_s change little during the time interval τ_d in this condition, and then estimates the queue-size with the following approximation:

Q^f_s + (λ_s - μ_s)*τ_d ≈ Q_s

where Q_s is the real queue-size of waiting keys at server s. Note that τ_w is not involved in (<ref>), because the rates λ_s and μ_s may change over such a relatively large time interval due to the large degree of concurrency. Obviously, equation (<ref>) also finds it hard to accurately estimate the real queue-size. But compare equation (<ref>) to equation (<ref>), where the queue-size estimation of C3 becomes 1+q_s when the term os_s is not taken into consideration: the term (λ_s-μ_s)*τ_d can be considered as the concurrency compensation, and equation (<ref>) can be a better queue-size estimation method than 1+q_s. In addition, similar to C3, the term n*os_s is also added in the queue-size estimation of Tars, based on the intuitive viewpoint that "the replica server is not preferred when there are already keys sent to this server but without returned values", instead of being considered as the concurrency compensation. In total, the queue-size estimation for server s in Tars is

q̅_̅s̅ = Q^f_s + (R_s - τ^s_w)(λ_s - μ_s) + n*os_s

Note that, different from C3, all variables are utilized directly without EWMAs in equation (<ref>), except λ_s and μ_s, because the EWMAs bring in some even staler feedback information.

Reproducing the simulations of C3, we randomly choose a client and a server, and compare the estimation accuracy of C3 and Tars. As shown in <ref>, Tars outperforms C3.

Scoring with Feedback When τ_w ≤ 100 ms, the replica ranking of Tars uses the following score based on the queue-size estimation (<ref>):

Ψ_s = R_s - τ^s_w + q̅_̅s̅^3/μ_s

Comparing equations (<ref>) and (<ref>), we can find that the difference between the scoring methods of C3 and Tars is threefold.

* First, the term T_s is replaced by τ^s_w, i.e., the waiting time of the key at the server is not counted as part of the access latency term in Tars, because q̅_̅s̅^3/μ_s stands for it.

* Second, the queue-size estimation methods differ from each other, as Tars takes the timeliness of feedback information into consideration.

* Third, as the server can concurrently process several keys, the service rate is measured independently in Tars, instead of using the reciprocal T^-1_s of the service time.

Scoring without Feedback When τ_w > 100 ms, the feedback information becomes useless as time elapses, and Tars uses the following scoring method without feedback for this condition. Obviously, τ_w > 100 ms indicates that the client has not sent any keys to server s for a long time. Let f_s be the number of times that the replica server s is not selected during the time interval τ_w, recorded by the client.
When os_s = 0 and f_s = 0, no key has been sent to the replica group that server s belongs to for the long time τ_w, due to the traffic pattern; the client then tends to send the current key to server s. When os_s = 0 and f_s > 6, the replica server s has not been selected many times during the long time τ_w, so we send a key to this replica server to probe whether its performance has recovered. Otherwise, Tars uses the same queue-size estimation method (<ref>) as C3, because we do not have any more information.

Putting everything together, we obtain the detailed scoring method of Tars utilized in replica ranking before sending keys, as shown in Algorithm <ref>.

§.§ Rate Control

As discussed in section III, the goal of the rate control in Tars is changed to adapting the sending rates of clients to the service rate of servers. It means that in Tars the sending rate of a client is decreased or increased based on whether the server is saturated or not.

Rate Decrease The saturation state of a server, or its service capacity, can be reflected by whether the queue-size Q^f_s is larger than a predefined value, i.e., whether there is a "buffer overflow". Different from C3, where sRate_s is decreased when sRate_s > rRate_s, Tars decreases sRate_s when the queue-size Q^f_s exceeds a predefined value B = 5, corresponding to the packet drops resulting from buffer overflows in the congestion control of the Internet. The same as in CUBIC and C3, the multiplicative rate decrease method is employed here, i.e., sRate_s ← β*sRate_s, where β is a fixed coefficient smaller than 1.

Rate Increase In contrast to the Cubic congestion control of the Internet, where the sending rate is increased periodically after a rate decrease, Tars does not increase sRate_s whenever sRate_s ≥ rRate_s. This is because rRate_s reflects the real sending rate of the client to server s, and sRate_s is the boundary of the rate limiter for server s. When sRate_s ≥ rRate_s, all the keys can be sent without rate limiting, and thus it is meaningless to further increase the value of sRate_s in this condition. Therefore, sRate_s is only increased when it is smaller than rRate_s in Tars.

Putting the above viewpoints together, we obtain the detailed rate control algorithm of Tars, as shown in Algorithm <ref> and sketched in code below. In fact, the rate control algorithm <ref> is almost the same as that of C3. The major difference is that the judgement condition for rate decrease (step <ref>) is replaced by step <ref>. Another improvement made by Tars is in step 7, which ensures that the target value R_0 for rate increase never reaches the lower bound value of sRate_s. In addition, step 1 and step 2 are newly added in Tars.
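The following sketch (Python; names and the handling of R_0 are ours, reusing cubic_increase and the FeedbackRecord fields from the earlier sketches, with the 100 ms freshness boundary and B = 5 taken from the text) illustrates the scoring of Algorithm 1 and the revised rate control of Algorithm 2; it is a paraphrase of the description above, not the actual implementation.

BETA, B, FRESH_MS, TRY_SERVER = 0.2, 5, 100.0, float('-inf')

def tars_score(R_s, fb, os_s, n, f_s, tau_w):
    if tau_w <= FRESH_MS:
        # Eqs. (5)-(6): fresh feedback, timeliness-compensated estimate.
        q_bar = fb.Qf + (R_s - fb.tau_sw) * (fb.lam - fb.mu) + n * os_s
        q_bar = max(q_bar, 0.0)          # guard against negative estimates (ours)
        return (R_s - fb.tau_sw) + q_bar ** 3 / fb.mu
    if os_s == 0 and f_s == 0:
        return TRY_SERVER                # idle replica group: favor server s
    if os_s == 0 and f_s > 6:
        return TRY_SERVER                # long unselected: probe for recovery
    q_bar = 1 + fb.Qf + n * os_s         # stale feedback: C3-style Eq. (1)
    return (R_s - fb.tau_sw) + q_bar ** 3 / fb.mu

def tars_rate_update(state, Qf, sRate, rRate):
    if Qf > B:                           # "buffer overflow": decrease
        state.R0 = max(state.R0, sRate)  # keep the increase target above sRate
        return BETA * sRate
    if sRate < rRate:                    # increase only while actually limiting
        return cubic_increase(state.R0, state.elapsed_ms_since_decrease)
    return sRate                         # otherwise leave the limiter alone

Here TRY_SERVER is simply the smallest possible score, forcing the ranking to place server s first so that a probe key is sent to it.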
§.§ Discussion

Compared to C3, Tars utilizes the same framework and similar replica ranking and rate control methods. Hence, like C3, Tars is simple and implementable, can avoid the herd behavior, and is adaptive to the time-varying performance across servers.

In reality, because of the large degree of concurrency and the poor timeliness of feedback, it is hard to accurately estimate the queue-size of waiting keys at servers, especially when τ_w > 100 ms. Note that when q̅_̅s̅ is always set to 0, both C3 and Tars degenerate to the replica selection scheme where the server with the smallest network latency is chosen. Obviously, there is a larger probability of obtaining a smaller estimation error when q̅_̅s̅ is set to the value of the feedback queue-size Q^f_s, compared with letting q̅_̅s̅ = 0. Moreover, we believe the queue-size estimation equation (<ref>) is better than equation (<ref>) when τ_w ≤ 100 ms.

Beyond its goal, the rate control algorithm for key-value stores also suffers from the timeliness issue, as discussed in section III. Therefore, there is a chance to improve the rate control algorithm for key-value stores. In this paper, we just revise the goal of rate control in C3, and leave the improvement of the rate control algorithm as further work. Even with this small modification, the rate control of Tars becomes better than that of C3, as confirmed by the simulation results in section V.

The distribution of τ_w is impacted by several factors; the most intuitive ones are as follows. First, the larger the workload, the larger the probability that τ_w is of small values, as shown in <ref>. In addition, the larger the number of clients, the smaller the probability that τ_w is of small values, as the time interval for a client to receive feedback information becomes large.

§ EVALUATION

§.§ Implementation and Setup

Setup We implement Tars based on the open source code of C3 <cit.>. As in C3, the workload generators create keys at a set of clients according to a Poisson arrival process to mimic the arrival of user requests at web servers <cit.>. These keys are sent to a set of servers; for each key, a server is chosen from its 3 replica servers according to the replica selection algorithm at the client. Each server maintains a FIFO queue for waiting keys, but can serve a tunable number (4 by default) of requests in parallel. The service time of each key is drawn from an exponential distribution with a mean service time T_s = 4 ms, as in <cit.>. The time-varying performance fluctuation of servers is simulated by a bimodal distribution <cit.> as follows: each server sets its mean service rate either to T^-1_s or to D*T^-1_s with uniform probability every fluctuation interval T ms, where D is a range parameter with default value 3. The arrival rate of keys corresponds to 70% (high utilization scenario, used by default) or 45% (low utilization scenario) of the average service rate of the system.

Service Rate Different from C3, we mainly modify the feedback information, the replica ranking and the rate control. Specifically, we revise the measurement method of the service rate in C3. In the code of C3, the service time of one key is returned directly and its reciprocal is considered as the service rate. But each server serves keys in parallel to model the concurrent processing of a multicore computer, so the macroscopic service rate of a server is larger than the reciprocal of the service time of one key. Therefore, to measure the service rate, we count the number of keys served during the service time of one key and piggyback it in the returned value of this key-value access. Note that the service time may be so small that no key is served within it; in this condition, we take the number of keys served in two consecutive service times into consideration. A similar method is used to measure the incoming rate λ_s of keys at the server. Note that λ_s and μ_s are always measured within the same time interval.

EWMAs In C3, the EWMAs of the feedback values are utilized at clients in place of the original ones. However, as consecutive feedbacks of a server are sent to different clients, there may be a great difference between the old feedbacks and the fresh one. Hence, Tars utilizes the feedback directly, except that the EWMAs of λ_s and μ_s are computed at the server before they are piggybacked.
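As one way to realize the rate measurement and server-side EWMA smoothing just described, the sketch below (Python; names and the EWMA weight are ours) counts completions over one key's service interval and merges intervals when none completed; the same meter, fed with arrivals instead of completions, yields λ_s.

class RateMeter:
    # Server-side rate measurement: completions (or arrivals) per interval,
    # smoothed by an EWMA before being piggybacked to clients.
    def __init__(self, alpha=0.9):
        self.alpha = alpha              # EWMA weight (placeholder value)
        self.rate = 0.0
        self.count, self.window = 0, 0.0

    def record(self, events, interval_ms):
        self.count += events
        self.window += interval_ms
        if self.count == 0:             # interval too short: merge with the next
            return self.rate
        sample = self.count / self.window
        self.rate = self.alpha * self.rate + (1.0 - self.alpha) * sample
        self.count, self.window = 0, 0.0
        return self.rate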
Configuration The configurations and metrics are the same as in C3. 200 workload generators, 50 servers and 150 clients are used by default. The one-way network latency is 250 μs. The parameters of the Cubic function are β = 0.2 and γ = 0.000004, such that the saddle region of the Cubic function is 100 ms; the unit of Δ T is ms and s_max = 10. The 99^th percentile latency is computed by taking the average of 5 repeated experiments, where different random seeds are set and 600,000 keys are generated in each run. Unless declared explicitly, the high utilization scenario with T = 500 ms is used.

Comparatives We mainly compare Tars to C3, as well as to the following Oracle strategy. With the Oracle strategy, each client is assumed to have perfect knowledge of the instantaneous value Q_s/μ_s at each replica server. Note that the Oracle strategy may be composed with the rate control methods of C3 or Tars, named ORA_c and ORA_r respectively. For a more detailed comparison, we also compose the timeliness-aware replica ranking of Tars with the rate control of C3 as one of the comparatives, named TRR.

§.§ Simulation Results

In all the following simulation results, the 99^th percentile latencies of C3 are almost the same as those in Fig. 14 and Fig. 15 of <cit.>. This can serve as evidence for the correctness of our implementation.

Impacts of Time-varying Service Rate As both C3 and Tars are designed to be adaptive to the time-varying performance across servers, we first evaluate Tars with time-varying service rates. As the fluctuation interval T of the average service time of servers changes from 500 ms to 10 ms, the 99^th percentile latencies are shown in <ref>. With the same rate control and backpressure mechanism of C3 but different replica ranking methods, the 99^th percentile latencies of the schemes satisfy ORA_c ≪ TRR < C3. This indicates that the tail latency can be cut greatly with perfect knowledge of the queue-size and the service time, but the queue-size estimation of both C3 and Tars is not very good, as discussed in part C of section IV. Still, the timeliness-aware replica ranking method of Tars is a little better than that of C3, as illustrated in <ref>. Note that the difference among the 99^th percentile latencies becomes significant when the time interval T is a large value like 500 ms, i.e., when the average service time of servers does not change frequently. When T = 10 ms, i.e., when the average service time of servers changes frequently, the feedback information becomes stale very fast. Correspondingly, the replica ranking based on feedback information becomes poor, and the rate control cannot adapt to the rapid change of service capacity, in both Tars and C3. Therefore, the difference between Tars and C3 is small with T = 10 ms.

Similarly, with the same replica ranking method but different goals of rate control, the 99^th percentile latencies of the schemes satisfy ORA_r < ORA_c and Tars < TRR. This indicates the rate control method of Tars is a little better than that of C3, thanks to the revised goal of rate control. Especially when T = 500 ms, the rate control method of Tars is helpful when it cooperates with the ORA strategy.

Finally, combining the timeliness-aware replica ranking and the revised goal of rate control, Tars always outperforms C3, as shown in <ref>.

Latency To compare the performance of C3 and Tars in detail, we also illustrate the 50^th percentile latencies, the 95^th percentile latencies and the 99.9^th percentile latencies in <ref>, when T = 500 ms.
Under all of these metrics, Tars outperforms C3, and the advantage of Tars becomes most significant for the 99.9^th percentile latency. In fact, the CDF of the latencies of all key-value accesses illustrates the advantage of Tars over C3 even better, as shown in <ref>.

Impacts of the Number of Clients Subsequently, we increase the number of clients to n = 300 under the default high utilization scenario. The corresponding 99^th percentile latencies are shown in <ref>. As discussed in part C of section IV, τ_w has a smaller probability of taking small values in this condition. This conclusion is confirmed by <ref>, where the cumulative distribution function of τ_w with n = 300 is presented, similar to <ref>. When τ_w is often of large values, the queue-size estimation becomes worse and the rate adjustment result has to wait for a longer time before it takes effect. Therefore, the 99^th percentile latencies illustrated in <ref> become larger than those in <ref>, respectively, but they have the same variation tendency with the change of the time interval T. Moreover, in these conditions, Tars also outperforms C3.

Impacts of the Server Utilization Next, we repeat the above simulations under the low utilization scenario, where the arrival rate matches a 45% server utilization. The 99^th percentile latencies are shown in <ref>. Compared with the above simulation results, the 99^th percentile latencies of both Tars and C3 are barely influenced by the change of the fluctuation interval of the average service time under the low utilization scenario. Once a server becomes slow according to the time-varying performance model, it is unlikely to be chosen by Tars and C3, as the other, fast servers are unlikely to be saturated in this situation. Consequently, this slow server contributes little to the 99^th percentile latencies. On the other hand, similar to the above results, the 99^th percentile latencies increase with the number of clients in <ref>. In addition, Tars outperforms C3 in <ref>, especially when the number of clients becomes n = 300.

Impacts of Skewed Demands As many realistic workloads are skewed in practice <cit.>, we evaluate Tars under skewed client demands. Specifically, we let 20% or 50% of the clients generate 80% of the total keys towards the servers, respectively. The 99^th percentile latencies are shown in <ref> and <ref>, respectively. Consistent with the above simulation results, Tars outperforms C3 under both of these skewed-demand scenarios.

In summary, Tars outperforms C3 under all kinds of conditions. The advantage of Tars over C3 is not very significant, because Tars is designed based on C3 with only a few modifications, and Tars is also unable to totally address the timeliness issue of the framework developed in C3.

§ CONCLUSION AND FURTHER WORK

Nowadays, it is crucial to select the fastest replica server via the replica selection scheme, such that the tail latency of key-value accesses is reduced. To address the challenges of the time-varying performance across servers and the herd behavior, the adaptive replica selection scheme C3 was proposed recently. Despite the innovations of bringing feedback into replica ranking and developing the rate control and backpressure mechanism, and the good performance of C3, we find drawbacks of C3 with respect to its poor queue-size estimation and unsuitable goal of rate control, and reveal the timeliness issue of the framework developed by C3.
These insights motivate us to further develop the Tars scheme, improving the replica ranking by taking the timeliness of feedback information into account and revising the goal of rate control. Evaluation results based on the open source code of C3 confirm the good performance of Tars against C3. Further work includes, but is not limited to, evaluating Tars in real experiments, fully addressing the timeliness issue of the framework developed by C3, and improving the rate control algorithm for key-value stores. § ACKNOWLEDGEMENT The authors gratefully acknowledge the anonymous reviewers for their constructive comments. This work is supported in part by the National Natural Science Foundation of China (NSFC) under Grant Nos. 60971102 and 60932003, the National Basic Research Program of China (973 Program) under Grant Nos. 2009CB320504 and 2012CB315803, and the National Science and Technology Major Project of China (NSTMP) under Grant No. 2011ZX03002-002-02. Dynamo G. Decandia, D. Hastorun, M. Jampani, G. Kakulapati, A. Lakshman, A. Pilchin, S. Sivasubramanian, P. Vosshall, and W. Vogels, Dynamo: Amazon's Highly Available Key-value Store, In Proc. of the SOSP, 2007. bing V. Jalaparti, P. Bodik, S. Kandula, I. Menache, M. Rybalkin, and C. Yan, Speeding up Distributed Request-Response Workflows, In Proc. of the SIGCOMM, 2013. facebook R. Nishtala, H. Fugal, S. Grimm, M. Kwiatkowski, H. Lee, H. C. Li, R. McElroy, M. Paleczny, D. Peek, P. Saab, D. Stafford, T. Tung, and V. Venkataramani, Scaling Memcache at Facebook, In Proc. of the NSDI, 2013. latency S. M. Rumble, D. Ongaro, R. Stutsman, M. Rosenblum, and J. K. Ousterhout, It's Time for Low Latency, In Proc. of the HotOS, 2011. AtScale J. Dean and L. A. Barroso, The Tail at Scale, Communications of the ACM, Volume 56:74-80, 2013. revenue J. Brutlag, Speed Matters, <http://googleresearch.blogspot.com/2009/06/speed-matters.html>, 2009. redundancy A. Vulimiri, P. B. Godfrey, R. Mittal, J. Sherry, S. Ratnasamy, and S. Shenker, Low Latency via Redundancy, In Proc. of the CoNEXT, 2013. CosTLO Z. Wu, C. Yu, and H. V. Madhyastha, CosTLO: Cost-Effective Redundancy for Lower Latency Variance on Cloud Storage Services, In Proc. of the NSDI, 2015. HDFS D. Borthakur, The Hadoop Distributed File System: Architecture and Design, Hadoop Project Website, 11(11):1-10, 2007. Riak Riak Load Balancing and Proxy Configuration, <http://docs.basho.com/riak/1.4.0/cookbooks/Load-Balancing-and-Proxy-Configuration/>, 2014. Cassandra Cassandra Documentation, <http://www.datastax.com/documentation/cassandra/2.0>, 2014. C3 L. Suresh, M. Canini, S. Schmid, and A. Feldmann, C3: Cutting Tail Latency in Cloud Data Stores via Adaptive Replica Selection, In Proc. of the NSDI, 2015. Cubic S. Ha, I. Rhee, and L. Xu, CUBIC: A New TCP-Friendly High-Speed TCP Variant, SIGOPS Oper. Syst. Rev., 42(5), 2008. tcp V. Jacobson, Congestion Avoidance and Control, In Proc. of the SIGCOMM, 1988. server-model J. Schad, J. Dittrich, and J.-A. Quiané-Ruiz, Runtime Measurements in the Cloud: Observing, Analyzing, and Reducing Variance, VLDB Endowment, 3(1-2), 2010. time-varying M. Kambadur, T. Moseley, R. Hank, and M. A. Kim, Measuring Interference Between Live Datacenter Applications, In Proc. of the SC, 2012. congestion K. Ousterhout, R. Rasti, S. Ratnasamy, S. Shenker, and B. Chun, Making Sense of Performance in Data Analytics Frameworks, In Proc. of the NSDI, 2015. mongodb K. Bogdanov, M. Peon-Quirós, G. Q.
Maguire Jr., and D. Kostić, The Nearest Replica Can Be Farther Than You Think, In Proc. of the SoCC, 2015. skew B. Atikoglu, Y. Xu, E. Frachtenberg, S. Jiang, and M. Paleczny, Workload Analysis of a Large-scale Key-value Store, In Proc. of the SIGMETRICS, 2012. burst S. Kandula, S. Sengupta, A. Greenberg, and P. Patel, The Nature of Datacenter Traffic: Measurements and Analysis, In Proc. of the IMC, 2009. RoCEE D. Cohen, T. Talpey, A. Kanevsky, U. Cummings, M. Krause, R. Recio, D. Crupnicoff, L. Dickman, and P. Grun, Remote Direct Memory Access over the Converged Enhanced Ethernet Fabric: Evaluating the Options, In Proc. of the High Performance Interconnects, 2009. sliding U. Itkis, Control Systems of Variable Structure, Keter Publishing House, Jerusalem, 1976. ode1 E. A. Coddington and N. Levinson, Theory of Ordinary Differential Equations, McGraw-Hill, 1975. nonlinear Derek P. Atherton, Nonlinear Control Engineering, Van Nostrand Reinhold Company, 1982.
http://arxiv.org/abs/1702.08172v1
{ "authors": [ "Wanchun Jiang", "Liyuan Fang", "Haiming Xie", "Xiangqian Zhou", "Jianxin Wang" ], "categories": [ "cs.DC" ], "primary_category": "cs.DC", "published": "20170227080223", "title": "Tars: Timeliness-aware Adaptive Replica Selection for Key-Value Stores" }
Yugo Nakayama (Graduate School of Pure and Applied Sciences, University of Tsukuba, Ibaraki, Japan), Kazuyoshi Yata (Institute of Mathematics, University of Tsukuba, Ibaraki, Japan), and Makoto Aoshima (corresponding author; Institute of Mathematics, University of Tsukuba, Ibaraki 305-8571, Japan; Fax: +81-29-853-6501; aoshima@math.tsukuba.ac.jp). In this paper, we consider asymptotic properties of the support vector machine (SVM) in high-dimension, low-sample-size (HDLSS) settings. We show that the hard-margin linear SVM holds a consistency property in which misclassification rates tend to zero as the dimension goes to infinity under certain severe conditions. We show that the SVM is very biased in HDLSS settings and that its performance is directly affected by the bias. In order to overcome such difficulties, we propose a bias-corrected SVM (BC-SVM). We show that the BC-SVM gives preferable performances in HDLSS settings. We also discuss the SVMs in multiclass HDLSS settings. Finally, we check the performance of the classifiers in actual data analyses. Keywords: Distance-based classifier; HDLSS; Imbalanced data; Large p small n; Multiclass classification. MSC: primary 62H30; secondary 62G20. § INTRODUCTION High-dimension, low-sample-size (HDLSS) data situations occur in many areas of modern science such as genetic microarrays, medical imaging, text recognition, finance, chemometrics, and so on. Suppose we have two independent d-variate populations, π_i, i=1,2, having an unknown mean vector μ_i and unknown covariance matrix Σ_i (≥ O). We assume that tr(Σ_i)/d ∈ (0,∞) as d→∞ for i=1,2. Here, for a function f(·), "f(d) ∈ (0,∞) as d→∞" implies lim inf_{d→∞} f(d)>0 and lim sup_{d→∞} f(d)<∞. Let Δ=‖μ_1-μ_2‖^2, where ‖·‖ denotes the Euclidean norm. We assume that lim sup_{d→∞} Δ/d<∞. We have independent and identically distributed (i.i.d.) observations, x_{i1},...,x_{in_i}, from each π_i. We assume n_i≥2, i=1,2. Let x_0 be an observation vector of an individual belonging to one of the two populations. We assume that x_0 and the x_{ij}s are independent. Let N=n_1+n_2. In the HDLSS context, <cit.>, <cit.> and <cit.> considered distance-weighted classifiers. <cit.>, <cit.> and <cit.> considered distance-based classifiers. In particular, <cit.> gave the misclassification-rate-adjusted classifier for multiclass, high-dimensional data in which misclassification rates are no more than specified thresholds. On the other hand, <cit.> considered geometric classifiers based on a geometric representation of HDLSS data. <cit.> considered a classifier based on the maximal data piling direction. <cit.> considered quadratic classifiers in general and discussed asymptotic properties and optimality of the classifiers under high-dimension, non-sparse settings. In particular, <cit.> showed that the misclassification rates tend to 0 as d increases, i.e., e(i)→0 as d→∞ for i=1,2, under non-sparsity such as Δ→∞ as d→∞, where e(i) denotes the error rate of misclassifying an individual from π_i into the other class. We call (<ref>) "the consistency property".
We note that a linear classifier can give such a preferable performance under non-sparsity. Also, such non-sparse situations often appear in real high-dimensional data. See <cit.> for the details. Hence, in this paper, we focus on linear classifiers. In the field of machine learning, there are many studies about classification in the context of supervised learning. A typical method is the support vector machine (SVM). The SVM has versatility and effectiveness for both low-dimensional and high-dimensional data. See <cit.>, <cit.>, <cit.>, <cit.> and <cit.> for the details. Even though the SVM is quite popular, its asymptotic properties seem not to have been studied sufficiently. In this paper, we investigate asymptotic properties of the SVM for HDLSS data. Now, let us use the following toy examples to see the performance of the hard-margin linear SVM given by (<ref>). We set N=20 and d=2^s, s=5,...,11. Independent pseudo-random observations were generated from π_i: N_d(μ_i, Σ_i), i=1,2. We set μ_1=0 and μ_2=(1/3,...,1/3)^T, so that Δ=d/9. We considered three cases: (a) (n_1,n_2)=(10,10) and Σ_1=Σ_2=I_d; (b) (n_1,n_2)=(6,14) and Σ_1=Σ_2=I_d; and (c) (n_1,n_2)=(10,10), Σ_1=0.6 I_d and Σ_2=1.4 I_d, where I_d denotes the d-dimensional identity matrix. Note that Δ>|tr(Σ_1)/n_1-tr(Σ_2)/n_2| for (a) to (c). Then, from Theorem 1 in <cit.>, the classifier should hold (<ref>) for (a) to (c). We repeated the experiment 2000 times to check whether the classifier classifies x_0∈π_i correctly and defined P_ir=0 (or 1) accordingly for each π_i (i=1,2). We calculated the error rates, e(i)=∑_{r=1}^{2000} P_ir/2000, i=1,2. Also, we calculated the average error rate, e={e(1)+e(2)}/2. Their standard deviations are less than 0.0112 from the fact that Var{e(i)}=e(i){1-e(i)}/2000 ≤ 1/8000. In Figure <ref>, we plotted e(1), e(2) and e for (a) to (c). We observe that the SVM gives a good performance as d increases for (a). Contrary to expectations, it gives undesirable performances for both (b) and (c). The error rates become small as d increases; however, e(1) and e(2) are quite unbalanced. We discuss some theoretical reasons in Section 2.2. In this paper, we investigate the SVM in the HDLSS context. In Section 2, we show that the SVM holds (<ref>) under certain severe conditions. We show that the SVM is very biased in HDLSS settings and that its performance is directly affected by the bias. In order to overcome such difficulties, we propose a bias-corrected SVM (BC-SVM) in Section 3. We show that the BC-SVM improves the SVM even when the n_i's or Σ_i's are unbalanced as in (b) or (c) in Figure 1. In Section 4, we check the performance of the BC-SVM by numerical simulations and use the BC-SVM in actual data analyses. In Section 5, we discuss multiclass SVMs in HDLSS settings. § SVM IN HDLSS SETTINGS In this section, we give asymptotic properties of the SVM in HDLSS settings. Since HDLSS data are linearly separable by a hyperplane, we consider the hard-margin linear SVM. §.§ Hard-margin linear SVM We consider the following linear classifier: y(x)=w^T x+b, where w is a weight vector and b is an intercept term. Let us write (x_1,...,x_N)=(x_{11},...,x_{1n_1}, x_{21},...,x_{2n_2}). Let t_j=-1 for j=1,...,n_1 and t_j=1 for j=n_1+1,...,N. The hard-margin SVM is defined by maximizing the smallest distance of all observations to the separating hyperplane.
The optimization problem of the SVM can be written as follows: min_{w,b} (1/2)‖w‖^2 subject to t_j(w^T x_j+b) ≥ 1, j=1,...,N. A Lagrangian formulation is given by L(w,b;α)=(1/2)‖w‖^2-∑_{j=1}^N α_j{t_j(w^T x_j+b)-1}, where α=(α_1,...,α_N)^T and the α_j's are Lagrange multipliers. By differentiating the Lagrangian formulation with respect to w and b, we obtain the following conditions: w=∑_{j=1}^N α_j t_j x_j and ∑_{j=1}^N α_j t_j=0. After substituting them into L(w,b;α), we obtain the dual form: L(α)=∑_{j=1}^N α_j-(1/2)∑_{j=1}^N∑_{k=1}^N α_jα_k t_jt_k x_j^T x_k. The optimization problem can be transformed into the following: max_α L(α) subject to α_j≥0, j=1,...,N, and ∑_{j=1}^N α_j t_j=0. Let us write α̂=(α̂_1,...,α̂_N)^T=argmax_α L(α). There exist some x_j's satisfying t_j y(x_j)=1 (i.e., α̂_j≠0). Such x_j's are called support vectors. Let Ŝ={j | α̂_j≠0, j=1,...,N} and N_Ŝ=#Ŝ, where #A denotes the number of elements in a set A. The intercept term is given by b̂=(1/N_Ŝ)∑_{j∈Ŝ}(t_j-∑_{k∈Ŝ} α̂_k t_k x_j^T x_k). Then, the linear classifier in (<ref>) is defined by ŷ(x)=∑_{k∈Ŝ} α̂_k t_k x_k^T x+b̂. Finally, in the SVM, one classifies x_0 into π_1 if ŷ(x_0)<0 and into π_2 otherwise. See <cit.> for the details. §.§ Asymptotic properties of the SVM in the HDLSS context In this section, we consider the case when d→∞ while N is fixed. We assume the following assumptions: (A-i) Var(‖x_{ik}-μ_i‖^2)/Δ^2→0 as d→∞ for i=1,2; (A-ii) tr(Σ_i^2)/Δ^2→0 as d→∞ for i=1,2. Note that Var(‖x_{ik}-μ_i‖^2)=2tr(Σ_i^2) when π_i is Gaussian, so that (A-i) and (A-ii) are equivalent when the π_i's are Gaussian. Lemma 1. Under (<ref>), it holds that as d→∞ L(α)=∑_{j=1}^N α_j-(Δ/8)(∑_{j=1}^N α_j)^2{1+o_p(1)}-(1/2){tr(Σ_1)∑_{j=1}^{n_1} α_j^2+tr(Σ_2)∑_{j=n_1+1}^N α_j^2}. Let δ=tr(Σ_1)/n_1+tr(Σ_2)/n_2 and Δ_*=Δ+δ. Under the constraint that ∑_{j=1}^N α_j=C for a given positive constant C, we can claim that max_α[-(1/2){tr(Σ_1)∑_{j=1}^{n_1} α_j^2+tr(Σ_2)∑_{j=n_1+1}^N α_j^2}]=-(C^2/8)δ when α_1=⋯=α_{n_1}=C/(2n_1) and α_{n_1+1}=⋯=α_N=C/(2n_2) under (<ref>). Then, by noting that lim inf_{d→∞} tr(Σ_i)/(Δ n_i)>0 for i=1,2, from Lemma 1 it holds that max_α L(α)=-(Δ_*/8){C-(4+o_p(1))/Δ_*}^2{1+o_p(1)}+(2+o_p(1))/Δ_* for given C(>0). Hence, by choosing C≈4/Δ_*, we have the maximum of L(α) asymptotically. Lemma 2. It holds that as d→∞ α̂_j=2{1+o_p(1)}/(Δ_* n_1) for j=1,...,n_1, and α̂_j=2{1+o_p(1)}/(Δ_* n_2) for j=n_1+1,...,N. Furthermore, it holds that as d→∞ ŷ(x_0)=(-1)^i Δ/Δ_*+{tr(Σ_1)/n_1-tr(Σ_2)/n_2}/Δ_*+o_p(Δ/Δ_*) when x_0∈π_i, i=1,2. From Lemma 2, all the data points are support vectors under (A-i) and (A-ii) in the HDLSS context. <cit.> called this phenomenon the "data piling". See Sections 1 and 2 in <cit.> for the details. Let κ=tr(Σ_1)/n_1-tr(Σ_2)/n_2. From Lemma 2, it holds that as d→∞ (Δ_*/Δ)ŷ(x_0)=(-1)^i+κ/Δ+o_p(1) when x_0∈π_i, i=1,2. Hence, "κ/Δ" is the bias term of the (normalized) SVM. We consider the following assumption: (A-iii) lim sup_{d→∞}|κ|/Δ<1. Theorem 1. Under (A-i) to (A-iii), the SVM holds (<ref>). Corollary 1. Under (A-i) and (A-ii), the SVM holds the following properties: e(1)→1 and e(2)→0 as d→∞ if lim inf_{d→∞} κ/Δ>1; and e(1)→0 and e(2)→1 as d→∞ if lim sup_{d→∞} κ/Δ<-1.
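To make the dual problem above concrete, the following is a minimal sketch, not taken from the paper, of solving it numerically with a generic quadratic-programming solver. It assumes the cvxopt package and that the data are linearly separable (as HDLSS data are); the tolerance `tol` used to identify support vectors is our choice.

import numpy as np
from cvxopt import matrix, solvers

solvers.options['show_progress'] = False

def hard_margin_svm(X, t, tol=1e-6):
    # X: (N, d) data matrix; t: (N,) labels in {-1, +1}.
    N = X.shape[0]
    K = X @ X.T                                  # Gram matrix x_j^T x_k
    P = matrix(np.outer(t, t) * K)               # quadratic term of -L(alpha)
    q = matrix(-np.ones(N))                      # maximize sum of alpha_j
    G = matrix(-np.eye(N)); h = matrix(np.zeros(N))   # alpha_j >= 0
    A = matrix(t.reshape(1, -1).astype(float)); b = matrix(0.0)  # sum alpha_j t_j = 0
    alpha = np.ravel(solvers.qp(P, q, G, h, A, b)['x'])
    S = alpha > tol                              # support vector set S-hat
    # Intercept b-hat, averaging over the support vectors as in the text.
    b_hat = np.mean(t[S] - K[np.ix_(S, S)] @ (alpha[S] * t[S]))
    return alpha, b_hat, S

def svm_decision(x0, X, t, alpha, b_hat, S):
    # y-hat(x0) = sum_{k in S} alpha_k t_k x_k^T x0 + b-hat.
    return (alpha[S] * t[S]) @ (X[S] @ x0) + b_hat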
For the SVM, <cit.> and <cit.> also showed (<ref>) and the results in Corollary 1 under different conditions. We emphasize that (A-i), (A-ii) and (A-iii) are milder than their conditions. Moreover, we can evaluate the bias of the SVM by using (<ref>). We expect from (<ref>) that, for sufficiently large d, e(1) and e(2) for the SVM become small and that e(1) (or e(2)) is larger than e(2) (or e(1)) if κ/Δ>0 (or κ/Δ<0). Actually, in Figure 1, we observe that e(1) is larger than e(2) for (b), in which κ/Δ=6/7, and e(2) is larger than e(1) for (c), in which κ/Δ=-18/25. As for (a), in which κ=0, the SVM gives a preferable performance. §.§ Asymptotic properties of the SVM when both d and N tend to infinity In this section, we give asymptotic properties of the SVM when both d,N→∞ while N/d→0. One may consider N=O(log d), for example. We assume the following assumptions: (A-i') N Var(‖x_{ik}-μ_i‖^2)/Δ^2→0 as d,N→∞ for i=1,2; (A-ii') N^2 tr(Σ_i^2)/Δ^2→0 as d,N→∞ for i=1,2; (A-iv) lim inf_{d,N→∞} tr(Σ_i)/(Δ n_i)>0 for i=1,2. Note that Δ^2/tr(Σ_i^2)=O(d) from the facts that lim sup_{d→∞} Δ/d<∞ and tr(Σ_i)/d∈(0,∞) as d→∞ for i=1,2. Thus N=o(d^{1/2}) when (A-ii') is met. Lemma 3. Under (A-i'), (A-ii') and (A-iv), it holds that as d,N→∞ ŷ(x_0)=(-1)^i Δ/Δ_*+κ/Δ_*+o_p(Δ/Δ_*) when x_0∈π_i, i=1,2. Corollary 2. Under (A-i'), (A-ii') and (A-iv), the SVM holds the following properties: e(1)→0 and e(2)→0 if lim sup_{d,N→∞}|κ|/Δ<1; e(1)→1 and e(2)→0 if lim inf_{d,N→∞} κ/Δ>1; and e(1)→0 and e(2)→1 if lim sup_{d,N→∞} κ/Δ<-1. § BIAS-CORRECTED SVM As discussed in Section 2.2, if lim inf_{d→∞}|κ|/Δ>0, the SVM gives an undesirable performance. From Corollary 1, if lim inf_{d→∞}|κ|/Δ>1, one should not use the SVM. In order to overcome such difficulties, we consider a bias correction of the SVM. We estimate μ_i and Σ_i by x̄_{in_i}=∑_{j=1}^{n_i} x_{ij}/n_i and S_{in_i}=∑_{j=1}^{n_i}(x_{ij}-x̄_{in_i})(x_{ij}-x̄_{in_i})^T/(n_i-1). We estimate Δ_* by Δ̂_*=‖x̄_{1n_1}-x̄_{2n_2}‖^2. Note that E(Δ̂_*)=Δ_*. Let κ̂=tr(S_{1n_1})/n_1-tr(S_{2n_2})/n_2. Note that E(κ̂)=κ. First, we consider the case when d→∞ while N is fixed. Lemma 4. Under (A-i) and (A-ii), it holds that as d→∞ κ̂/Δ̂_*=κ/Δ_*+o_p(Δ/Δ_*). Now, we define the bias-corrected SVM (BC-SVM) by ŷ_BC(x_0)=ŷ(x_0)-κ̂/Δ̂_*, where ŷ(x_0) is given by (<ref>). In the BC-SVM, one classifies x_0 into π_1 if ŷ_BC(x_0)<0 and into π_2 otherwise. By combining (<ref>) with Lemma 4, under (A-i) and (A-ii), it holds that as d→∞ (Δ_*/Δ)ŷ_BC(x_0)=(-1)^i+o_p(1) when x_0∈π_i, i=1,2. Theorem 2. Under (A-i) and (A-ii), the BC-SVM holds (<ref>). One should note that the BC-SVM has the consistency property without (A-iii). <cit.> considered a different bias correction for the SVM. They showed the consistency property under conditions stricter than (A-i) and (A-ii). <cit.> considered the distance-based classifier as follows: one classifies an individual into π_1 if y_AY(x_0)<0 and into π_2 otherwise, where y_AY(x_0)={x_0-(x̄_{1n_1}+x̄_{2n_2})/2}^T(x̄_{2n_2}-x̄_{1n_1})-tr(S_{1n_1})/(2n_1)+tr(S_{2n_2})/(2n_2). Then, from Theorem 1 in <cit.>, under (A-ii), it holds that as d→∞ (2/Δ)y_AY(x_0)=(-1)^i+o_p(1) when x_0∈π_i, i=1,2. When both d,N→∞, we have the following result. Corollary 3. Under (A-i'), (A-ii') and (A-iv), it holds for the BC-SVM that e(i)→0 as d,N→∞ for i=1,2.
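A minimal sketch of the bias correction defining ŷ_BC above: estimate κ and Δ_* by the plug-in quantities κ̂ = tr(S_{1n_1})/n_1 - tr(S_{2n_2})/n_2 and Δ̂_* = ‖x̄_{1n_1} - x̄_{2n_2}‖^2, and subtract their ratio from the SVM output. Here `svm_decision` denotes any function returning ŷ(x_0), e.g., the one sketched in Section 2.1.

import numpy as np

def bias_correction_term(X1, X2):
    # kappa-hat / Delta-star-hat from the two training samples (rows = obs).
    n1, n2 = X1.shape[0], X2.shape[0]
    # Sum of per-coordinate sample variances equals tr(S_{i n_i}).
    kappa_hat = X1.var(axis=0, ddof=1).sum() / n1 \
              - X2.var(axis=0, ddof=1).sum() / n2
    delta_star_hat = np.sum((X1.mean(axis=0) - X2.mean(axis=0)) ** 2)
    return kappa_hat / delta_star_hat

def bc_svm_classify(x0, svm_decision, X1, X2):
    # Class pi_1 if the corrected score is negative, else pi_2 (as in the text).
    y_bc = svm_decision(x0) - bias_correction_term(X1, X2)
    return 1 if y_bc < 0 else 2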
§ PERFORMANCES OF BIAS-CORRECTED SVM In this section, we check the performance of the BC-SVM both in numerical simulations and in actual data analyses. §.§ Simulations First, we checked the performance of the BC-SVM by using the toy examples in Figure 1. Similar to Section 1, we calculated the error rates, e(1), e(2) and e, by 2000 replications and plotted the results in Figure 2. We overlaid e(1), e(2) and e for the SVM, borrowed from Figure 1. As expected theoretically, we observe that the BC-SVM gives preferable performances even for (b) and (c), in which lim inf_{d→∞}|κ|/Δ>0. Next, we compared the performance of the BC-SVM with the SVM in complex settings. We set μ_1=0, Σ_1=B(0.3^{|i-j|^{1/3}})B and Σ_2=B(0.4^{|i-j|^{1/3}})B, where B=diag({0.5+1/(d+1)}^{1/2},...,{0.5+d/(d+1)}^{1/2}). Note that tr(Σ_1)=tr(Σ_2)=d. We considered two cases: μ_2=(1,...,1,0,...,0,-1,...,-1)^T (=μ_α(t), say), whose first t/2 elements are 1 and last t/2 elements are -1, for a positive even number t; and μ_2=(t^{1/2}/2, t^{1/2}/2, 0,...,0, -t^{1/2}/2, -t^{1/2}/2)^T (=μ_β(t), say), whose first two elements are t^{1/2}/2 and last two elements are -t^{1/2}/2, for a positive number t. Note that Δ=t both for μ_α(t) and μ_β(t). We generated x_{ij}-μ_i, i=1,2; j=1,2,..., independently either from (I) N_d(0, Σ_i), i=1,2, or (II) a d-variate t-distribution, t_d(Σ_i, 10), i=1,2, with mean zero, covariance matrix Σ_i and 10 degrees of freedom. Note that (A-i) holds under (A-ii) for (I). Let d_*=2⌈d^{2/3}/2⌉, where ⌈x⌉ denotes the smallest integer ≥ x. We considered four cases: (d) μ_2=μ_α(d_*), (n_1,n_2)=(5,25) and d=2^s, s=6,...,12, for (I); (e) μ_2=μ_α(d_*), d=1000 and (n_1,n_2)=(4s,8s), s=1,...,7, for (II); (f) d=1000, (n_1,n_2)=(10,20) and μ_2=μ_α(2^s), s=1,...,7, for (II); and (g) d=1000, (n_1,n_2)=(10,20) and μ_2=μ_β(2^s), s=1,...,7, for (II). Note that Δ=d_*=o(d) and that (A-ii) holds for (d) and (e) from the fact that tr(Σ_i^2)=O(d), i=1,2. Also, note that (A-i) holds for (d). However, (A-i) does not hold for (e), and (A-iii) does not hold for either (d) or (e). For (f) and (g), we note that Δ=2^s, s=1,...,7. In particular, (g) is a sparse case such that only four elements of μ_1-μ_2 are nonzero. Similar to Section 1, we calculated the error rates, e(1), e(2) and e, by 2000 replications and plotted the results in Figure 3. We observe that the SVM gives quite bad performances for (d) in Figure 3. The main reason must be the bias term in the SVM. Note that κ/Δ→∞ as d→∞ for (d). Thus e(1) becomes close to 1 as d increases. See Corollary 1 for the details. Also, the SVM gives bad performances for (e) to (g) when the n_i's are small or Δ is small. This is because κ/Δ becomes large when the n_i's are small or Δ is small. On the other hand, from Figures 2 and 3, the BC-SVM gives adequate performances even when the n_i's and Σ_i's are unbalanced. The BC-SVM also gives a better performance than the SVM even when Δ is small (or the case is sparse). §.§ Examples: Microarray data sets First, we used colon cancer data with 2000 (=d) genes given by <cit.>, which consist of π_1: colon tumor (40 samples) and π_2: normal colon (22 samples). We set n_1=n_2=10. We randomly split the data sets from (π_1,π_2) into training data sets of sizes (n_1,n_2) and test data sets of sizes (40-n_1, 22-n_2).
We constructed the BC-SVM and the SVM by using the training data sets. We checked the accuracy by using the test data set for each π_i and denoted the misclassification rates by e(1)_r and e(2)_r. We repeated this procedure 100 times and obtained e(1)_r and e(2)_r, r=1,...,100, for both the BC-SVM and the SVM. We obtained the average misclassification rates as e(1)(=∑_{r=1}^{100} e(1)_r/100)=0.16, e(2)(=∑_{r=1}^{100} e(2)_r/100)=0.166 and e(={e(1)+e(2)}/2)=0.163 for the BC-SVM, and e(1)=0.158, e(2)=0.161 and e=0.159 for the SVM. By using all the samples, we considered estimating κ/Δ. We set m_1=40 and m_2=22. From Section 3.1 in <cit.>, an unbiased estimator of Δ is given by Δ̂_{(m)}=‖x̄_{1m_1}-x̄_{2m_2}‖^2-tr(S_{1m_1})/m_1-tr(S_{2m_2})/m_2. We estimated κ/Δ by {tr(S_{1m_1})/n_1-tr(S_{2m_2})/n_2}/Δ̂_{(m)} and obtained the estimate 0.003 for the 62 samples. In view of (<ref>), we expect the BC-SVM to be asymptotically equivalent to the SVM in such cases. We estimated (tr(Σ_1)/Δ, tr(Σ_2)/Δ) by (tr(S_{1m_1})/Δ̂_{(m)}, tr(S_{2m_2})/Δ̂_{(m)})=(3.99, 3.959). It is difficult to estimate the standard deviation of the average misclassification rate. However, by noting that Var{e(i)}^{1/2} < Var{e(i)_r}^{1/2}=[e(i){1-e(i)}/(m_i-n_i)]^{1/2}, one may take, as an upper bound of the standard deviation for e(i), s_u(i)=[e(i){1-e(i)}/(m_i-n_i)]^{1/2}, and {∑_{i=1}^2 s_u(i)^2/2}^{1/2} (=s_u, say) for e. For the BC-SVM, s_u(1)=0.067, s_u(2)=0.107 and s_u=0.089. We summarize the results for various n_i's in Table 1. Next, we used leukemia data with 7129 (=d) genes given by <cit.>, which consist of π_1: ALL (47 (=m_1) samples) and π_2: AML (25 (=m_2) samples). We applied the BC-SVM and the SVM to the leukemia data and summarized the results in Table 2. When n_1≠n_2, |κ/Δ| becomes large since (tr(S_{1m_1})/Δ̂_{(m)}, tr(S_{2m_2})/Δ̂_{(m)})=(2.693, 2.785). As expected theoretically, we observe that the BC-SVM gives adequate performances compared with the SVM when |κ/Δ| is not small. Finally, we used myeloma data with 12625 (=d) genes given by <cit.>, which consist of π_1: patients without bone lesions (36 (=m_1) samples) and π_2: patients with bone lesions (137 (=m_2) samples). We applied the BC-SVM and the SVM to the myeloma data and summarized the results in Table 3. When n_1 and n_2 are unbalanced, the SVM gives a very bad performance. This is because Δ in such cases is not sufficiently large, since (tr(Σ_1)/Δ, tr(Σ_2)/Δ)≈(tr(S_{1m_1})/Δ̂_{(m)}, tr(S_{2m_2})/Δ̂_{(m)})=(33.69, 33.53), so that κ/Δ becomes too large when n_1≠n_2. Especially when κ/Δ>1, e(1) of the SVM is too large. See Corollary 1 for the details. The BC-SVM also does not give a low error rate for these data because Δ is not sufficiently large. However, the BC-SVM gives adequate performances compared with the SVM, especially when κ/Δ>1. Throughout Sections 3 and 4, we recommend using the BC-SVM rather than the SVM for high-dimensional data. § MULTICLASS SVMS In this section, we consider multiclass SVMs in HDLSS settings. We have i.i.d. observations, x_{i1},...,x_{in_i}, from each π_i (i=1,...,g), where g≥3 and π_i has a d-dimensional distribution with an unknown mean vector μ_i and unknown covariance matrix Σ_i (≥ O). We assume n_i≥2, i=1,...,g. Let Δ_{ij}=‖μ_i-μ_j‖^2 for i,j=1,...,g; i≠j.
We assume that tr(Σ_i)/d∈(0,∞) as d→∞ for i=1,...,g, and lim sup_{d→∞} Δ_{ij}/d<∞ for i,j=1,...,g; i≠j. We consider the one-versus-one approach (the max-wins rule). See <cit.> and <cit.> for the details. Let N_g=∑_{i=1}^g n_i. First, we consider the case when d→∞ while N_g is fixed. We consider the following assumptions: (B-i) max_{l=i,j} Var(‖x_{lk}-μ_l‖^2)/Δ_{ij}^2→0 as d→∞ for i,j=1,...,g; i≠j; (B-ii) max_{l=i,j} tr(Σ_l^2)/Δ_{ij}^2→0 as d→∞ for i,j=1,...,g; i≠j. Let κ_{ij}=tr(Σ_i)/n_i-tr(Σ_j)/n_j for i,j=1,...,g; i≠j. We consider the following condition: (B-iii) lim sup_{d→∞}|κ_{ij}|/Δ_{ij}<1 for i,j=1,...,g; i≠j. From Theorem 1, for the one-versus-one approach by (<ref>), we have the following result: under (B-i) to (B-iii), it holds for the multiclass SVM that e(i)→0 as d→∞ for i=1,...,g. From Theorem 2, for the one-versus-one approach by (<ref>), we have the following result: under (B-i) and (B-ii), the multiclass BC-SVM holds (<ref>). Note that the BC-SVM satisfies the consistency property without (B-iii). Thus we recommend using the BC-SVM in multiclass HDLSS settings. Next, we consider the case when both d,N_g→∞ while N_g/d→0. Similar to Section 2.3 and Corollary 3, the multiclass SVMs have the consistency property under some regularity conditions. We checked the performance of the multiclass SVMs by using leukemia data with 12582 (=d) genes given by <cit.>, which consist of π_1: ALL (24 (=m_1) samples), π_2: MLL (20 (=m_2) samples) and π_3: AML (28 (=m_3) samples). We applied the multiclass BC-SVM and SVM to the leukemia data and summarized the results in Table 4. We had (tr(S_{1m_1})/Δ̂_{12(m)}, tr(S_{2m_2})/Δ̂_{12(m)})=(2.724, 3.213), (tr(S_{1m_1})/Δ̂_{13(m)}, tr(S_{3m_3})/Δ̂_{13(m)})=(0.738, 0.9) and (tr(S_{2m_2})/Δ̂_{23(m)}, tr(S_{3m_3})/Δ̂_{23(m)})=(1.533, 1.585), where Δ̂_{ij(m)}=‖x̄_{im_i}-x̄_{jm_j}‖^2-tr(S_{im_i})/m_i-tr(S_{jm_j})/m_j, which is an unbiased estimator of Δ_{ij}. Thus |κ_{ij}/Δ_{ij}| must become large when n_i≠n_j. Actually, the multiclass BC-SVM gives adequate performances in all the cases. § APPENDIX Proof of Lemma 1. Throughout, let μ=μ_1-μ_2 and μ_*=(μ_1+μ_2)/2. Under (A-ii), we have that as d→∞ μ^TΣ_iμ/Δ^2 ≤ tr(Σ_i^2)^{1/2}/Δ=o(1), i=1,2. Then, by using Chebyshev's inequality, for any τ>0, under (A-ii), we have that P(|(x_j-μ_*)^T(x_k-μ_*)-Δ/4|≥τΔ) ≤ (τΔ)^{-2} E[{(x_j-μ_*)^T(x_k-μ_*)-Δ/4}^2] = O[{tr(Σ_1^2)+μ^TΣ_1μ}/Δ^2]=o(1) for 1≤j<k≤n_1; P(|(x_j-μ_*)^T(x_k-μ_*)-Δ/4|≥τΔ)=O[{tr(Σ_2^2)+μ^TΣ_2μ}/Δ^2]=o(1) for n_1+1≤j<k≤N; and P(|(x_j-μ_*)^T(x_k-μ_*)+Δ/4|≥τΔ)=O[{tr(Σ_1Σ_2)+μ^T(Σ_1+Σ_2)μ}/Δ^2]=o(1) for j=1,...,n_1 and k=n_1+1,...,N, from the fact that tr(Σ_1Σ_2)≤{tr(Σ_1^2)tr(Σ_2^2)}^{1/2}. From (<ref>), for any τ>0, we have that P(|‖x_j-μ_*‖^2-Δ/4-tr(Σ_1)|≥τΔ)=O[{Var(‖x_{1j}-μ_1‖^2)+μ^TΣ_1μ}/Δ^2]=o(1) for j=1,...,n_1; and P(|‖x_j-μ_*‖^2-Δ/4-tr(Σ_2)|≥τΔ)=o(1) for j=n_1+1,...,N, under (A-i) and (A-ii). Here, subject to (<ref>), we can write (<ref>) as L(α)=∑_{j=1}^N α_j-(1/2)∑_{j=1}^N∑_{k=1}^N α_jα_k t_jt_k (x_j-μ_*)^T(x_k-μ_*). Then, by noting that α_j≥0 for all j subject to (<ref>), from (<ref>) and (<ref>), we have that L(α)=∑_{j=1}^N α_j-(Δ/8)(∑_{j=1}^N α_j)^2-(1/2){tr(Σ_1)∑_{j=1}^{n_1} α_j^2+tr(Σ_2)∑_{j=n_1+1}^N α_j^2}+o_p{Δ(∑_{j=1}^N α_j)^2} subject to (<ref>) under (A-i) and (A-ii).
It concludes the result. Proof of Lemma 2. By combining Lemma 1 with (<ref>) and (<ref>), we can claim the first result. When Ŝ={1,...,N}, by noting that ∑_{j=1}^N α̂_j t_j=0, we have that ŷ(x_0)=∑_{j=1}^N α̂_j t_j (x_j-μ_*)^T(x_0-μ_*)+∑_{j=1}^N α̂_j t_j (x_j-μ_*)^Tμ_*+b̂ = ∑_{j=1}^N α̂_j t_j (x_j-μ_*)^T(x_0-μ_*)+(n_2-n_1)/N-(1/N)∑_{j=1}^N∑_{k=1}^N α̂_k t_k (x_j-μ_*)^T(x_k-μ_*). From the first result of Lemma 2, (<ref>) and (<ref>), we have that as d→∞ (n_2-n_1)/N-(1/N)∑_{j=1}^N∑_{k=1}^N α̂_k t_k (x_j-μ_*)^T(x_k-μ_*) = (n_2-n_1)/N+(n_1-n_2)Δ/(Δ_* N)+2{tr(Σ_1)-tr(Σ_2)}/(Δ_* N)+o_p(Δ/Δ_*) = {(n_2-n_1)/N}(δ/Δ_*)+2{tr(Σ_1)-tr(Σ_2)}/(Δ_* N)+o_p(Δ/Δ_*) = {tr(Σ_1)/n_1-tr(Σ_2)/n_2}/Δ_*+o_p(Δ/Δ_*) under (A-i) and (A-ii). Similar to (<ref>), under (A-ii), we obtain that (x_j-μ_*)^T(x_0-μ_*)/Δ=(-1)^{i+1}/4+o_p(1) for j=1,...,n_1, and (x_j-μ_*)^T(x_0-μ_*)/Δ=(-1)^i/4+o_p(1) for j=n_1+1,...,N, when x_0∈π_i (i=1,2). Then, from the first result of Lemma 2, under (A-i) and (A-ii), it holds that ∑_{j=1}^N α̂_j t_j (x_j-μ_*)^T(x_0-μ_*)=(-1)^i Δ/Δ_*+o_p(Δ/Δ_*) when x_0∈π_i for i=1,2. By combining (<ref>) with (<ref>) and (<ref>), we can conclude the second result. Proofs of Theorem 1 and Corollary 1. By using (<ref>), the results are obtained straightforwardly. Proof of Lemma 3. Similar to (<ref>), under (A-ii'), from (<ref>), we have that as d,N→∞ ∑_{1≤j<k≤n_1} P(|(x_j-μ_1)^T(x_k-μ_1)|≥τΔ)=O{n_1^2 tr(Σ_1^2)/Δ^2}=o(1); ∑_{n_1+1≤j<k≤N} P(|(x_j-μ_2)^T(x_k-μ_2)|≥τΔ)=O{n_2^2 tr(Σ_2^2)/Δ^2}=o(1); ∑_{j=1}^{n_1}∑_{k=n_1+1}^N P(|(x_j-μ_1)^T(x_k-μ_2)|≥τΔ)=O{n_1 n_2 tr(Σ_1Σ_2)/Δ^2}=o(1); ∑_{j=1}^{n_1} P(|(x_j-μ_1)^Tμ|≥τΔ)=O(n_1 μ^TΣ_1μ/Δ^2)=O{n_1 tr(Σ_1^2)^{1/2}/Δ}=o(1); and ∑_{j=n_1+1}^N P(|(x_j-μ_2)^Tμ|≥τΔ)=O{n_2 tr(Σ_2^2)^{1/2}/Δ}=o(1) for any τ>0. Then, under (A-ii'), we have that (x_j-μ_*)^T(x_k-μ_*)=Δ{1+o_p(1)}/4 for 1≤j<k≤n_1; (x_j-μ_*)^T(x_k-μ_*)=Δ{1+o_p(1)}/4 for n_1+1≤j<k≤N; and (x_j-μ_*)^T(x_k-μ_*)=-Δ{1+o_p(1)}/4 for j=1,...,n_1 and k=n_1+1,...,N. On the other hand, for any τ>0, we have that ∑_{j=1}^{n_1} P(|‖x_j-μ_*‖^2-Δ/4-tr(Σ_1)|≥τΔ)=O[{n_1 Var(‖x_{1j}-μ_1‖^2)+n_1 μ^TΣ_1μ}/Δ^2]=o(1) and ∑_{j=n_1+1}^N P(|‖x_j-μ_*‖^2-Δ/4-tr(Σ_2)|≥τΔ)=o(1) under (A-i') and (A-ii') as d,N→∞, so that ‖x_j-μ_*‖^2=Δ{1+o_p(1)}/4+tr(Σ_1) for j=1,...,n_1, and ‖x_j-μ_*‖^2=Δ{1+o_p(1)}/4+tr(Σ_2) for j=n_1+1,...,N. Then, by combining (<ref>) with (<ref>) and (<ref>), we have (<ref>) as d,N→∞, subject to (<ref>) under (A-i') and (A-ii'). Similar to the proof of Lemma 2, by noting (A-iv), we can conclude the result. Proof of Lemma 4. We have that Δ̂_*-Δ_* = ∑_{i=1}^2∑_{j=1}^{n_i}{‖x_{ij}-μ_i‖^2-tr(Σ_i)}/n_i^2 + ∑_{i=1}^2∑_{j≠k}^{n_i}(x_{ij}-μ_i)^T(x_{ik}-μ_i)/n_i^2 + ∑_{i=1}^2 (-1)^{i+1} 2μ^T(x̄_{in_i}-μ_i) - 2(x̄_{1n_1}-μ_1)^T(x̄_{2n_2}-μ_2). Note that E[{‖x_{ij}-μ_i‖^2-tr(Σ_i)}^2]=o(Δ^2) as d→∞ under (A-i) for all i,j. Also, note that E[{μ^T(x̄_{in_i}-μ_i)}^2]=μ^TΣ_iμ/n_i ≤ Δ tr(Σ_i^2)^{1/2}/n_i=o(Δ^2/n_i) as d→∞ under (A-ii) for i=1,2. Then, from (<ref>), we can claim that E{(Δ̂_*-Δ_*)^2}=o(Δ^2) under (A-i) and (A-ii), so that Δ̂_*=Δ_*+o_p(Δ). On the other hand, we have that tr(S_{in_i})-tr(Σ_i)=∑_{j=1}^{n_i}{‖x_{ij}-μ_i‖^2-tr(Σ_i)}/n_i - ∑_{j≠k}^{n_i}(x_{ij}-μ_i)^T(x_{ik}-μ_i)/{n_i(n_i-1)}. Then, similarly to Δ̂_*, we can claim that tr(S_{in_i})=tr(Σ_i)+o_p(Δ) for i=1,2, under (A-i) and (A-ii), so that κ̂=κ+o_p(Δ). Hence, by noting that |κ|/Δ_*≤1, we can claim the result. Proof of Theorem 2. By using (<ref>), the result is obtained straightforwardly. Proof of Corollary 3. From Lemma 3, we have (<ref>) as d,N→∞ under (A-i'), (A-ii') and (A-iv). We note that Lemma 4 holds even when d,N→∞. Hence, from (<ref>) and Lemma 4, we can claim the results. Proofs of the multiclass results. By using Theorems 1 and 2, the results are obtained straightforwardly. § ACKNOWLEDGEMENTS Research of the second author was partially supported by Grant-in-Aid for Young Scientists (B), Japan Society for the Promotion of Science (JSPS), under Contract Number 26800078.
Research of the third author was partially supported by Grants-in-Aid for Scientific Research (A) and Challenging Exploratory Research, JSPS, under Contract Numbers 15H01678 and 26540010. [Ahn and Marron (2010)]AM10 Ahn, J., Marron, J.S., 2010. The maximal data piling direction for discrimination. Biometrika 97, 254-259. [Alon et al. (1999)]A99 Alon, U., Barkai, N., Notterman, D.A., Gish, K., Ybarra, S., Mack, D., Levine, A.J., 1999. Broad patterns of gene expression revealed by clustering analysis of tumor and normal colon tissues probed by oligonucleotide arrays. Proc. Natl. Acad. Sci. USA 96, 6745-6750. [Aoshima and Yata (2011)]AY11 Aoshima, M., Yata, K., 2011. Two-stage procedures for high-dimensional data. Sequential Anal. (Editor's special invited paper) 30, 356-399. [Aoshima and Yata (2014)]AY14 Aoshima, M., Yata, K., 2014. A distance-based, misclassification rate adjusted classifier for multiclass, high-dimensional data. Ann. Inst. Statist. Math. 66, 983-1010. [Aoshima and Yata (2015a)]AY15a Aoshima, M., Yata, K., 2015a. Geometric classifier for multiclass, high-dimensional data. Sequential Anal. 34, 279-294. [Aoshima and Yata (2015b)]AY15b Aoshima, M., Yata, K., 2015b. High-dimensional quadratic classifiers in non-sparse settings. arXiv:1503.04549. [Armstrong et al. (2002)]A02 Armstrong, S.A., Staunton, J.E., Silverman, L.B., Pieters, R., den Boer, M.L., Minden, M.D., Sallan, S.E., Lander, E.S., Golub, T.R., Korsmeyer, S.J., 2002. MLL translocations specify a distinct gene expression profile that distinguishes a unique leukemia. Nature Genetics 30, 41-47. [Bishop (2006)]B06 Bishop, C.M., 2006. Pattern Recognition and Machine Learning. Springer, New York. [Chan and Hall (2009)]CH09 Chan, Y.-B., Hall, P., 2009. Scale adjustments for classifiers in high-dimensional, low sample size settings. Biometrika 96, 469-478. [Friedman (1996)]F96 Friedman, J., 1996. Another approach to polychotomous classification. Technical report, Stanford University. [Golub et al. (1999)]G99 Golub, T.R., Slonim, D.K., Tamayo, P., Huard, C., Gaasenbeek, M., Mesirov, J.P., Coller, H., Loh, M.L., Downing, J.R., Caligiuri, M.A., Bloomfield, C.D., Lander, E.S., 1999. Molecular classification of cancer: class discovery and class prediction by gene expression monitoring. Science 286, 531-537. [Hall et al. (2005)]H05 Hall, P., Marron, J.S., Neeman, A., 2005. Geometric representation of high dimension, low sample size data. J. R. Statist. Soc. B 67, 427-444. [Hall et al. (2008)]H08 Hall, P., Pittelkow, Y., Ghosh, M., 2008. Theoretical measures of relative performance of classifiers for high dimensional data with small sample sizes. J. R. Statist. Soc. B 70, 159-173. [Hastie et al. (2009)]H09 Hastie, T., Tibshirani, R., Friedman, J., 2009. The Elements of Statistical Learning: Data Mining, Inference, and Prediction (second ed.). Springer, New York. [Marron et al. (2007)]M07 Marron, J.S., Todd, M.J., Ahn, J., 2007. Distance-weighted discrimination. J. Amer. Statist. Assoc. 102, 1267-1271. [Qiao et al. (2010)]Q10 Qiao, X., Zhang, H.H., Liu, Y., Todd, M.J., Marron, J.S., 2010. Weighted distance weighted discrimination and its asymptotic properties. J. Amer. Statist. Assoc. 105, 401-414. [Qiao and Zhang (2015)]QZ15 Qiao, X., Zhang, L., 2015. Flexible high-dimensional classification machines and their asymptotic properties. J. Mach. Learn. Res.
16, 1547-1572. [Schölkopf and Smola (2002)]SS02 Schölkopf, B., Smola, A.J., 2002. Learning with Kernels. MIT Press, Cambridge. [Tian et al. (2003)]T03 Tian, E., Zhan, F., Walker, R., Rasmussen, E., Ma, Y., Barlogie, B., Shaughnessy, J.D. Jr., 2003. The role of the Wnt-signaling antagonist DKK1 in the development of osteolytic lesions in multiple myeloma. N. Engl. J. Med. 349, 2483-2494. [Vapnik (2000)]V00 Vapnik, V.N., 2000. The Nature of Statistical Learning Theory (second ed.). Springer, New York.
http://arxiv.org/abs/1702.08019v1
{ "authors": [ "Yugo Nakayama", "Kazuyoshi Yata", "Makoto Aoshima" ], "categories": [ "stat.ML", "cs.LG", "62H30, 62G20" ], "primary_category": "stat.ML", "published": "20170226103839", "title": "Support vector machine and its bias correction in high-dimension, low-sample-size settings" }
Achievement and Friends: Key Factors of Player Retention Vary Across Player Levels in Online Multiplayer Games Kunwoo Park^*    Meeyoung Cha^**    Haewoon Kwak^†    Kuan-Ta Chen^‡ ^*Graduate School of Web Science Technology, School of Computing, KAIST, South Korea ^**Graduate School of Culture Technology, KAIST, South Korea ^†Qatar Computing Research Institute, Hamad Bin Khalifa University, Qatar ^‡Academia Sinica, Taiwan {kw.park,meeyoungcha}@kaist.ac.kr   haewoon@acm.org   ktchen@iis.sinica.edu.tw ============================================== Retaining players over an extended period of time is a long-standing challenge in the game industry. Significant effort has been devoted to understanding what motivates players to enjoy games. While individuals may have varying reasons to play or abandon a game at different stages within the game, previous studies have looked at the retention problem from a snapshot view. This study, by analyzing the in-game logs of 51,104 distinct individuals in an online multiplayer game, uniquely offers a multifaceted view of the retention problem over the players' virtual life phases. We find that the key indicators of longevity change with the game level. Achievement features are important for players from the initial to the advanced phases, yet social features become the most predictive of longevity once players reach the highest level offered by the game. These findings have theoretical and practical implications for designing online games that are adaptive to meeting players' needs. § INTRODUCTION Player retention is a critical and long-running quest in the online game industry. What makes players stay happy in a game and follow through its scenario? What makes them continue the game even after having reached the highest level offered? To answer these questions, researchers have studied the motivations of game players for over a decade <cit.>. Studies based on theoretical investigations, user surveys, and log data analyses have identified several factors that are critical to retention. For example, players are known to find enjoyment in games from completing missions, being empowered through growth and level-ups, forming communities, competing against other players, discovering plots and characters, and more. Previous studies have tried to group these motivating factors and measure their relative strengths in retaining players. Researchers have found that players can be grouped into a small set of clusters based on their game motivations, such as action-social (i.e., players who enjoy fast-paced scenarios with player interaction), mastery-achievement (i.e., players who indicate interests in narrative, expression, and world exploration), and immersion-creativity (i.e., players who appeal to strategic game plays, taking on challenges, and becoming powerful). Game designers carefully implement reward mechanisms for each motivation type throughout game scenarios to meet the needs of different players. Existing work has assumed that the relationship between players and motivations is rigid (i.e., it does not change over time) and independent of the players' virtual life phases.
This study brings a multifaceted aspect to this important question by examining retention over various phases of the individual lifetime. We assume that one's potential and capacity to enjoy a game change over time, and hence the need and the ability to achieve higher levels quickly and to socialize within games for cooperative play must differ for each individual. By observing the in-game behavior logs of real game players throughout various phases, this paper sets out to answer the following research questions: * For each phase within an online multiplayer game, what are the characteristics of players who achieve the next higher levels and get retained? * Why do some individuals continue to play even after having reached the max level? We utilize logs gathered from one of the oldest massively multiplayer online role-playing games (MMORPGs) in the world, Fairyland Online in Taiwan. We gained access to the complete set of actions of 51,104 individuals, describing their achievement logs (quests and level-ups), financial logs (gaining wealth), as well as social logs (chats among players). Myriads of action logs on tens of thousands of individuals who ultimately achieved different levels and played the game for different amounts of time allow us to design a natural experiment on the lifetime retention problem. We identify the factors contributing to game longevity through detailed log analysis and make the following observations: * Achievement features are important for players during the initial to advanced phases; players who are achievement-oriented and gather large amounts of rare items and virtual money are more likely to be retained and succeed in achieving the next levels. * Achievement-related traits, however, are no longer as important for players who reach the max level. Social features become the most predictive of success and longevity beyond this point. * Having strong social relationships (measured by the number of friends) is a good indicator of player retention, and their effect remains significant throughout the virtual life phases of players. Our findings have theoretical and practical implications for studying and designing online games. The finding that a player's needs vary over one's virtual life trajectory should be carefully addressed in further research and game design. In particular, the finding on the longevity of max-level players is new. This finding is particularly important as their behaviors have not been studied much, even though expert players are valuable to the user ecosystem. § RELATED WORKS Since Bartle <cit.> defined the four-type player taxonomy based on motivation in text-based games, there have been numerous efforts to understand why people join and continue to play online games. Among them are Yee's findings on three motivational components (achievement, social, and immersion), based on factor analysis of survey results on Bartle's player types. This study also identified that motivations can vary across different demographics. While general MMO players are found to be achievement-oriented <cit.>, females were more likely to play online games to build social relationships with other players. Players' motivation has been studied for decades partly because of its ultimate connection to player retention.
Among recent findings, Debeauvais et al. <cit.> asked World of Warcraft players about their motivation for play and game usage patterns through questionnaire surveys and found that socially-motivated players are more likely to discontinue games, while achievement-oriented players tend to continue. Borbora et al. <cit.> built a prediction model of player motivation from log data. From data mining experiments using player activity logs from EverQuest II, they found that achievement is a dominant motivation for predicting player churn (i.e., the opposite of player retention). The above studies consistently report that achievement is a major motivation for retention in online games. On the other hand, some studies found social activity to be more important for retention. Based on log data from EverQuest II, a study showed that social influences from peers help predict player retention better <cit.>. Most recently, another group of researchers observed that game interactions such as interacting with toxic players can have negative impacts on retention in League of Legends <cit.>. As cyberbullying has been considered one of the factors that make players annoyed, fatigued, and even leave the game <cit.>, there have been many efforts to define, detect, and prevent toxic playing in online games <cit.>. However, in this work, we do not investigate the effect of cyberbullying on player engagement due to the limitations of our dataset. While many studies have contributed to understanding player retention, many of the findings have been drawn from a snapshot view, aggregating players by demographic features without considering how they grow over time within a game. Like human life itself, players face different challenges and engage in specific actions depending on their levels. For example, Ducheneaut et al. <cit.> observed that online game players are more likely to play alone at an early stage but become socially active at higher levels. Players need to collaborate with one another to defeat strong monsters or complete difficult quests as their levels rise. Moreover, players enjoy an entirely different in-game experience once they achieve the maximum level, as they become socially active without consuming much game content <cit.>. This means that the factors leading to higher levels or being retained may differ across the entire player lifetime within online games. However, little attention has been paid to the characteristics of churners across different player phases. To the best of our knowledge, only one study, by Shores et al. <cit.>, investigated how indicators of player retention compare between new-joiners and experts in a MOBA (multiplayer online battle arena) game, and there is room for improvement. First, churn types can be examined for more than two groups. Player behaviors continuously change with level, and hence it is more natural to observe the whole picture of player life trajectories. Second, comprehensive data provide richer views. While that study relied on add-ons to gather data, the kinds of data that can be gathered externally are limited. Utilizing in-game logs provides a full picture of player behaviors that might be important for predicting retention. Third, the studied game is of a specific type that does not capture the growth of players naturally. A MOBA game consists of repeated matches in which team formation is important <cit.>, whereas an MMORPG allows characters to explore and grow as individuals.
§ DATASET Fairyland Online is one of the longest-serviced MMORPGs, played in Taiwan and other nearby countries since its launch in 2003. As depicted in Figure <ref>, the game is set in a virtual world based on fairy tales. Players can create their own avatars by choosing a race among human, elf, and dwarf and a gender of either female or male. In the virtual world, players explore their kingdoms, complete quests by fighting monsters, and form social relationships with one another. Every action in the game is recorded in the game servers with accurate timestamps. Thanks to Larger Network Technologies, which serviced Fairyland Online, we gained access to the log data describing all actions that have been performed in the game. Fairyland Online servers log three different types of datasets. Firstly, there are logs related to achievement experience points (e.g., learning skills, completing quests). When a player gains enough experience points, his or her level increases. Secondly, there is a group of actions related to gaining or losing wealth in the virtual world (e.g., buying or selling items, earning or spending money). Game items can be retrieved by defeating monsters or by purchasing them with virtual money. The last type of logs concerns chats among players. There are four different channels for chatting: Say, Whisper, Family, and Party. Say is a public channel through which a player can communicate with multiple companions. Messages on the Say channel are broadcast; thus, if someone writes a message through the Say channel, anyone in virtual proximity can see it. Whisper is a private channel between two players. Because the Whisper channel is private, no one except the speakers and receivers can overhear those messages. Family is a dedicated channel for players who belong to the same 'family', which is equivalent to what is called a 'guild' in other MMORPGs <cit.>. Party is the mode of communication for short-term groups. For privacy concerns, the chat contents themselves were encrypted. The three kinds of datasets we received covered different time spans. We consolidated them to find a common overlapping period, over which we gain a full view of the achievement, financial, and social activities within the game. The overlapping portion covered nearly 60 million activity instances logged for 51,104 game players. We refer to this as the final complete dataset in this paper and describe its statistics in Table <ref>. The table also shows the unique number of game players and record instances logged over the three original datasets. Given the timestamps of actions, we may infer when players accessed the game throughout a day and a week. The 24-hour plot shown in Figure <ref>(a) depicts that players show strong diurnal patterns, both in the entire logs and in the consolidated period. The game was played more during late night, with a peak between 8 pm and 10 pm. Disproportionately fewer players logged on during early morning times. In the morning, the peak is around 11 am (before lunchtime); then an increasing number of players join in the afternoon and evening hours. Differences between the entire log and the consolidated dataset are marginal, indicating that the final dataset we study is representative of the entire log in terms of temporal patterns. Figure <ref>(b) presents the normalized daily access pattern across the data, again for the entire log and for the consolidated dataset. We find the game is played 1.4–1.5 times more frequently on weekends than during weekdays.
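As an illustration, the access profiles described above can be derived from timestamped action logs roughly as follows. This is a minimal sketch; the (one row per logged action, 'timestamp' column) schema is an assumption made for exposition, not the exact format of the game logs.

import pandas as pd

def access_profiles(log):
    # log: DataFrame with one row per logged action and a 'timestamp' column.
    ts = pd.to_datetime(log["timestamp"])
    # Normalized 24-hour access profile, as in the diurnal plot.
    hourly = ts.dt.hour.value_counts(normalize=True).sort_index()
    # Per-day access rate on weekends vs. weekdays, as in the weekly plot.
    is_weekend = ts.dt.dayofweek >= 5            # Saturday=5, Sunday=6
    weekend_rate = is_weekend.sum() / 2.0        # 2 weekend days per week
    weekday_rate = (~is_weekend).sum() / 5.0     # 5 weekdays per week
    return hourly, weekend_rate / weekday_rate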
Later, we will investigate such detailed temporal features of players (e.g., weekend- vs. weekday-oriented play, most active time of day) in predicting player retention. Note that the temporal patterns seen here have also appeared in other game studies <cit.>, which suggests that the studied Fairyland Online shares commonalities with other representative MMORPGs. § METHODOLOGY §.§ Phase Definition The key objective of this research is to learn what keeps a player in a game over various phases (and in particular toward the very end) of his or her virtual life. To answer this question, we start with an arbitrary grouping of one's lifetime. In this work, we do not consider the very first phases of a game (i.e., beginners), which is a specific subset of our problem. We focus on players who have spent enough time to become accustomed to the rules of the game and determine what factors positively or negatively contributed to reaching the next level. Players in Fairyland Online can have a level between the lowest of 1 and the highest of 50. Among such players, we select target users to represent each phase of the online game by one quantity: the observable level range for each player during the consolidated period (i.e., levels 10–15, 20–25, 30–35, 40–45, and 45–50). Players belonging to the 10–15 level group must have achieved a level of at least 15 and have their traces since level 10 visible in the consolidated period. Table <ref> describes the five representative phases of virtual life that we examine in this paper. Where the exact division among groups lies is of less importance in this work. Rather, we are more interested in finding trends that become more prominent among advanced, long-term game players. For each phase, we define success differently. The first four phases, 1–4 in Table <ref>, allow us to examine whether each player successfully achieves the next five levels. For Phase 1 (i.e., levels 10–15), we consider players who ultimately reach level 20 as successful and otherwise as unsuccessful. For Phase 5, success is defined as whether the game is able to retain a given player. We decided that a player is churned when he or she becomes inactive for 90 consecutive days over the next 270 days. To test the validity of the inactivity threshold used to decide user churn, we varied the number of days from 30 to 180 and obtained results similar to those in Table <ref>, which will be presented in the next section. Out of the 51,104 individuals in the consolidated dataset, we only consider as target players those whose observable levels are within the ranges described by Phases 1–5. We also ensure that each player has at least 31 days observable in the log after the end of the observation level range. For instance, for Phase 1, we ensure that individuals had at least 31 days in the log after they reached level 15. This gives ample time for them to meet the success criteria (i.e., achieving level 20 for Phase 1). The buffer time of 31 days was determined from the log analysis. We investigated how long it takes to achieve the higher level for each phase listed in the table for a subset of 91 players who joined and achieved the maximum level during the consolidated data period.
From Phase 1 to Phase 5, we identified 3818, 1739, 1370, 674, and 221 players meeting the above criteria, respectively. Note that a player can belong to multiple groups so long as he or she meets the criteria.The last column of Table <ref> displays the probability of success for each phase, where the success criterion is also listed in the table itself. The fraction of players who success in a given phase is the highest among Phase 3 players (i.e., individuals who were in level 30–35) and is the lowest for Phase 5 players (i.e., individuals who reach the highest level offered). Nonetheless, the success probability or the retention rate is considerably stable, remaining over 0.4 throughout the five phases.§.§ Studied Features We utilize a total of 16 features for predicting user retention. The features are divided into three major categories based on their characteristics: temporal, achievement-related, and social. §.§.§ Temporal Category The temporal features describe when individuals played the game. Temporal patterns not only reveal how often a user plays the game (e.g., every day vs. once a week) but also reveal certain demographic traits. For example, play time can be used to infer which players are likely students (e.g., peak immediately after the school hours) or which players likely work regular hours (e.g., playtime starts only after the typical business hours). We extract two features like below:* Frequent hours (morning, working_hour, evening, and night_owl): To capture playing patterns, we measured how frequently a player accesses the game with the following time blocks. We define 4 variables that represent specific playing patterns: morning from 6 am to 9 am, working hours from 9 am to 6 pm, evening from 6 pm to midnight, and night owl from midnight to 6 am. We conducted vector normalization on those variables to remove the effects of the number of engaged days. * Weekday vs weekends (weekends): the fraction of playtime contributed from weekends. §.§.§ Achievement-Related Category Many studies have reported the importance of achievement as a goal in player retention <cit.>. To measure its effect, we utilize the following features related to in-game achievements.* Possessed item count (item): the total count of owned items measured by subtracting the number of item-losing logs from item-gaining logs. This quantity is a proxy of in-game achievements.* Rare items on hand (rare_item): Owning rare items can be more important achievement than general item counts. To decide which items are rare, we approximated chances of getting an item by counting the frequency over the whole item frequencies measured from . Then, we considered items appeared with the probability lower than 0.01 to be rare items. We measured the number of rare items on hand in the same manner as we did to count for items on hand.* Amount of money on hand (money): the amount of virtual money that each player has, calculated by the differences between money-gaining logs and money-losing logs. * Level of difficulty (difficulty): the level of difficulty, measured by a combination of the number of deaths andbroken items. The appropriate level of difficulty has been considered as an important element for user engagement in online games <cit.>.* Performance (performance): The performance of achieving level can represent level of motivations on achievement. 
§.§.§ Social Category Social features are another important group of indicators for player engagement <cit.>. Below we describe the list of social features we tested in this paper. * Number of social interactions (num_social): the number of all messages that a player sent through any channel, a measure of social activeness. * Response rate (response_rate): the probability of responding when the player received messages from an unknown player, a measure of social openness. * Number of friends (friends): To figure out the effects of social interactions in more detail, we define friendships. Based on whisper logs, we counted the number of distinct days on which paired communication took place. If a player had paired communication with another player on at least three different days, we considered the communication partner to be a friend. As this feature, we counted the number of friends who communicated with the player during the observation period (see the sketch after this list). * Number of non-friends (nonfriends): We considered those who had paired communication but are not friends to be non-friends. The number of non-friends was observed over the observation period of each phase. * Friends' level (friends_level): To represent the level of friends, we took the median level of the friends who communicated during the observation period. * Non-friends' level (nonfriends_level): the median level of the non-friends who communicated with the player during the observation period. * Number of max-level friends (friends_maxlevel): the number of friends who communicated with the player and had already achieved the maximum level at the moment of communication during the observation period. * Number of max-level non-friends (nonfriends_maxlevel): the number of non-friends who had paired communication with the player and had already achieved the maximum level when the communication happened. * Is a member of a family (has_family): a binary variable indicating whether the player belongs to a family, a membership-based group. We inferred it based on whether a user has sent messages through the Family channel. §.§ Player Retention Model A logistic regression model was used to determine the factors that affect player longevity. The regression model helps us investigate how various indicators contribute to player retention across different life phases within the virtual world, while allowing us to control for the effects of other variables. Hence we choose interpretable models in this paper rather than other kinds of prediction models that might achieve higher performance. Prior to the analyses, a step was taken to balance the data. Because the success rate at each phase is biased toward one side, we employed an over-sampling technique to prepare an equal number of success and fail cases for each phase. All variables were scaled to have a mean of 0 and a standard deviation of 1. In addition, since variables in regression models can turn out to be significant simply due to a large number of predictors, we performed variable selection using the Lasso <cit.>, choosing the lambda whose cross-validated mean squared error is within one standard error of the minimum.
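To make the pipeline concrete, below is a minimal sketch of the friendship definition and the modeling steps described above. It is not the original analysis code: the over-sampling is a naive resampling of the minority outcome, and LassoCV selects the MSE-minimizing lambda, whereas the analysis above uses the one-standard-error rule, which would keep a slightly sparser feature set.

import numpy as np
from sklearn.linear_model import LassoCV, LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.utils import resample

def count_friends(whispers, player, min_days=3):
    # whispers: iterable of (day, sender, receiver) paired-chat records.
    # A partner is a friend after paired chats on >= min_days distinct days.
    days = {}
    for day, a, b in whispers:
        if player in (a, b):
            other = b if a == player else a
            days.setdefault(other, set()).add(day)
    n_friends = sum(1 for s in days.values() if len(s) >= min_days)
    return n_friends, len(days) - n_friends      # friends, non-friends

def fit_retention_model(X, y, seed=0):
    # 1) over-sample the minority outcome so success/fail cases are balanced
    minority = int((y == 1).sum() < (y == 0).sum())
    n_extra = abs(int((y == 1).sum() - (y == 0).sum()))
    Xe, ye = resample(X[y == minority], y[y == minority], replace=True,
                      n_samples=n_extra, random_state=seed)
    Xb, yb = np.vstack([X, Xe]), np.concatenate([y, ye])
    # 2) scale all variables to mean 0, standard deviation 1
    Xb = StandardScaler().fit_transform(Xb)
    # 3) Lasso screening: keep features with nonzero coefficients
    keep = np.flatnonzero(LassoCV(cv=10, random_state=seed).fit(Xb, yb).coef_)
    # 4) logistic regression on the selected features
    return keep, LogisticRegression(max_iter=1000).fit(Xb[:, keep], yb)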
In the results section, we report the findings of the regression fitting after this feature selection step.§ RESULTS This research assumes that the important indicators of player retention vary throughout the different phases in Fairyland Online. To test this idea, for each phase of the game we fit the logistic regression model of the successful and unsuccessful cases (as defined in Table <ref>) on the 16 features from the three categories (i.e., temporal, achievement, and social). We compare the relative importance of each category in predicting player retention.§.§ Low- to Medium-level Patterns Among the five phases of the game level, here we focus on Phase 1 to Phase 3, which cover logs of players who have become accustomed to the game. Our goal is to understand what kinds of players are more likely to be retained and to further succeed in achieving the next levels. Each of the three phases was observed among more than a thousand individuals. Below we list only the final set of features deemed meaningful (out of the 16 features) for each of the three phases, after the Lasso variable selection step. Table <ref> presents the fitted results for Phase 1, showing the estimates and significance of the variables. The table also shows the model χ^2 value based on the likelihood ratio test against a null model. We see that at least one feature from each of the temporal, achievement, and social categories appears as significant. Among the temporal features, night_owl is positively associated with success, while weekends is negatively correlated. This means individuals who mainly played after midnight and during weekdays (rather than just on weekends) were more likely to reach the next levels. This could indicate that at an early stage, time dedication is an important marker of success. Among the achievement features, performance (i.e., the negative quantity of playtime) increases the probability of success, in that players with a speedy game style are more likely to be retained and achieve the next levels. As many studies have found, achievement is one of the main motivations for continuing to play online games <cit.>. Our analysis also confirms that at an early virtual stage, achievement helps players reach the next levels without leaving the game. Among the social features, friends is positively associated with success, while nonfriends is negatively associated. This may indicate that players with many friends yet fewer weak social ties are more likely to achieve the next levels. This trend supports findings from several studies on the importance of social interactions in games <cit.>. In contrast, the negative effect of nonfriends is interesting. It may indicate that communication with too many random users is harmful to long-term engagement. Furthermore, nonfriends_maxlevel shows a negative association with the success rate, in that players who communicate with many max-level non-friends are less likely to continue with the game. Communicating with overly advanced non-friends may be a negative experience for future engagement, because players can feel left behind <cit.>.Phase 2 players exhibited several more significant variables related to retention, as shown in Table <ref>. Among the temporal features, night_owl is positively associated with success, while morning is not. Devoting extra time to the game after midnight still appears important for achieving higher levels, while playing the game in the early morning (i.e., 6–9 am) seems an ineffective strategy for further engagement.
Among the achievement features, performance is again positively associated. We newly found money to be positively correlated, while item is negatively associated. Accumulating in-game money increases the probability of continued usage at Phase 2, because it may become difficult to quit the game after one gathers a large sum of virtual money. In addition, virtual wealth reflects one's ability to upgrade game avatars, which helps players achieve the next levels more easily. These two possible explanations can be linked to the success of achieving more levels. However, simply owning many items decreases the chance of success. Among the social features, friends is again positively associated with success, while nonfriends is not. This finding implies that having more friends and fewer weak social relationships is linked to helping players achieve higher levels. Also, has_family is newly found to be significant, with a negative estimate at this stage. In other words, players are less likely to succeed when they have joined a family. This trend also supports the importance of focusing on strong social relationships for the continued usage of online games. Lastly, we observed that friends_maxlevel and friends_level are negatively associated with success. This trend can be explained similarly to the negative association of nonfriends_maxlevel in the Phase 1 prediction.Players in Phase 3 exhibit trends similar to Phase 2 (Table <ref>). Among the temporal features, night_owl is positively correlated, yet weekends is not; this pattern is similar to what we have seen in earlier phases. Among the achievement features, performance is again important for predicting player retention. There are some new trends; while item is still negatively associated, players who gather larger numbers of rare items (i.e., rare_item) are likely to succeed. Together with the positive association of money, this finding supports the claim that owning virtual wealth is related to the success of achieving a higher level. In addition, difficulty (i.e., the number of deaths and broken items) was found to be a positive indicator of success. Once the game reaches a certain stage, an appropriate level of difficulty may help players better enjoy games, as reported in a previous study <cit.>. Next, among the social features, friends is positively correlated, while nonfriends and num_social are not. Again this finding demonstrates the importance of communicating with close friends rather than simply being socially active. Lastly, among the variables on whom users talked to, nonfriends_level was newly found to be a positive estimator. We hypothesize that players who ask for help from other players of higher level are more likely to succeed. As reported in previous works <cit.>, communicating with experts can be helpful in online games because they share knowledge, useful tactics, and strategies that are critical for proceeding to the next phases. Because this process does not require any strong relationship with high-level players, nonfriends_level is a positive indicator while friends_level may retain the opposite effect. As discussed in the results of earlier phases, having social relationships with users at the max level or higher may be detrimental to future engagement. In summary, we found two consistent trends in the regression analysis of the low- to medium-level phases (i.e., Phases 1–3).
One is that performance in achieving levels, together with playing patterns that reflect devoting more time, increases the probability of success in achieving more levels. These findings can be connected to the importance of achievement motivation for player retention. The other is that players who have more friends yet fewer weak social relationships were more likely to stay continuously engaged in Fairyland Online. Playing games with friends may have positive effects on achieving more levels. These findings are consistent with previous findings on player retention in other games <cit.>. §.§ High-level Patterns Players who reached level 40 or above out of 50 in Fairyland Online may be considered advanced users. What are the factors that lead these advanced players to successfully reach the endgame? Table <ref> displays the regression result for Phase 4 players. At this late stage, the only meaningful feature left after the Lasso feature selection is performance (i.e., -1×play time). Players who enjoy a speedy game and are quick at leveling up are more likely to reach the max level. It is interesting that an achievement-related feature alone is a critical factor of success. Once players reach the max level, a different story unfolds. In contrast to the Phase 4 players, the only meaningful feature of longevity left at this stage is from the social category, where friends is the only significant indicator of retention for players who reach the highest level. This finding suggests that having a substantial number of friends is consistently important in determining who will continue to play online games even after completing all missions. Note that this variable was also important during the earlier phases, further suggesting the importance of social interactions for player retention from the early stages to the endgame. As found in previous studies <cit.>, online games become more of a social space after the max level. To stay engaged in such online games in the long run, players must have constructed strong social relationships from early on in their virtual lives. §.§ Trajectory Over Lifetime Having examined the factors of player retention step by step, we now jointly view the trends over the entire life stages of the Fairyland Online game. The set of features examined comes from three main categories: temporal, achievement-related, and social. Which of these categories are important for predicting player retention at each phase? To answer this question, we compared the relative importance of the three categories in predicting player retention by training separately on each category's features. For testing, 5-fold cross-validation was used on the unbalanced original dataset, preserving the distribution of target labels. We then over-sampled each split to balance it; applying this sampling technique after splitting prevents the same instances from appearing in both the training and test sets of a split. We finally measured the area under the ROC curve (AUROC) of logistic regression classifiers using each set of features.Figure <ref> presents the changes in the AUROC values of the three categories over the five phases. The AUROC value is between 0 and 1, where a value of 1 means the prediction model is perfect. A prominent trend is the role of the achievement features, which show the best performance during the early to late phases of the game (i.e., Phases 1 to 4). The social category shows a trend comparable to the achievement category for Phases 1, 2, and 4.
This category then becomes the most important in Phase 5 (i.e., the max-level players), at which point the other features are no longer important. The temporal features are, for most of the phases, better than random guessing (i.e., an AUROC of 0.5), although they do not show a big gain over this baseline. We discuss the implications of these findings in the next section. § DISCUSSION & CONCLUSIONMaintaining a user base of substantial size is critical for many companies in running their services. Companies across various industries (e.g., telecommunication companies <cit.>, health app providers <cit.>, and so on) have put effort into understanding the characteristics of people who discontinue services and into predicting them in advance with data mining approaches. The game industry and researchers have also noticed the importance of the player retention problem, and many studies have hence tried to understand player motivations <cit.> and behavioral characteristics <cit.>, and to build prediction models based on the studied characteristics <cit.>. However, existing studies did not disentangle user groups and conducted analyses without considering player levels. Because the game designs of MMORPGs give players a different set of activities as their level increases <cit.>, players face different challenges as they advance, and this evolution can affect user retention. As shown in Figure <ref>, we also observed in our dataset that social interactions increase with level. Thus, to precisely understand the indicators of player retention, the effects of features should be measured separately across the different virtual life phases in online games. Another aspect that has received little attention is retaining individuals who have reached the highest level offered by the game. These expert players not only help newbies adapt to the game, but are also a major source of profit for the game industry. Therefore, retaining the max-level players is a critical problem. Motivated by these missed opportunities, this research aimed to answer two research questions: (i) what are the indicators of player retention over the different phases of players, and (ii) how does the relative importance of retention features change over the game phases. Through a series of quantitative analyses of 51,104 individuals based on in-game logs, we have made several key findings for these questions. These results are important for the following reasons. Firstly, we noted that the key indicators of longevity change with player phases. This finding implies that other studies on user behavior also need to consider the phases of gamers. Secondly, our findings have practical implications for online game developers, as they need to carefully consider the changing needs of players over the various life stages. Game designers may offer fast, achievement-oriented scenarios at the beginning, while motivating players to form strong social relationships long before they reach any advanced level. We note that these suggestions are hypothetical, because our observations indicate correlation, not causality. Future studies can conduct controlled experiments or qualitative studies to further test the causal relationships behind these feature effects. Another implication is that the game industry may apply these findings to construct churn prediction models. For example, prediction models could be constructed separately for each phase and hence better capture the signals of churning individuals.On top of the above findings, we also found significant indicators observed only in certain phases.
Playing after midnight was positively associated with success in Phases 1 to 3, while playing in the early morning was negatively associated with continued usage. There may exist certain playing patterns that can be linked to player retention. Also, obtaining rare items or larger amounts of money increased the chance of success in the low- to medium-level phases. Owning virtual wealth may be helpful for achieving more levels, or holding a large amount of wealth in the virtual world could make players feel committed to keep playing. Lastly, we found significant indicators related to whom a player talked to. For example, the level of friends was a significant indicator during the initial phases. This finding implies that a social network positively affects retention when individuals form interactions with partners of appropriate levels. Because these indicators are newly identified in this study, the prediction of player retention can be further improved with deeper investigation of these variables.This paper has several limitations. Among them is the use of a single data source. Every MMORPG has different game elements and player traits, and hence our findings cannot be directly generalized to other online games. Nonetheless, we expect Fairyland Online to be representative of a typical MMORPG in terms of its temporal trends, which show similarity to other games <cit.>. In the future we hope to replicate the study with other online game logs. Another limitation is that, although we tried extensive features across three different categories based on the related literature, there may exist missing features that are critically linked to player retention. For example, the number of churned friends was found to be an indicator of player churn in one online game <cit.>. Due to limited data, we could not employ this feature in our analysis. In future work, we hope to look into a longer time period and investigate the effects of other possible indicators, including churned friends. Lastly, we did not investigate players at their very early stages (i.e., levels 1–10). This was because the initial level-up in Fairyland Online was fairly easy and there was not much data associated with this time period. However, new joiners are of great interest to the game industry because they are critical to increasing the user base, and future studies can delve deeper into the behaviors of new joiners across different games. § ACKNOWLEDGEMENT Cha and Park were supported by the Ministry of Trade, Industry & Energy (MOTIE, Korea) under the Industrial Technology Innovation Program (No. 10073144), `Developing machine intelligence based conversation system that detects situations and responds to human emotions'. fairylandfig Fairyland Players. <http://bit.ly/2mb19UO>, 2008. [Online; accessed 19-Feb-2017]. backiel2016predicting A. Backiel, B. Baesens, and G. Claeskens. Predicting Time-To-Churn of Prepaid Mobile Telephone Customers Using Social Network Analysis. Journal of the Operational Research Society, 2016. bartle1996hearts R. Bartle. Hearts, Clubs, Diamonds, Spades: Players Who Suit MUDs. Journal of MUD research, 1(1):19, 1996. blackburn2014stfu J. Blackburn and H. Kwak. STFU NOOB!: Predicting Crowdsourced Decisions on Toxic Behavior in Online Games. In Proceedings of the 23rd International Conference on World Wide Web, pages 877–888. ACM, 2014. borbora2011churn Z. Borbora, J. Srivastava, K. Hsu, and D. Williams. Churn Prediction in MMORPGs Using Player Motivation Theories and an Ensemble Approach.
In Proceedings of the International Conference on Privacy, Security, Risk and Trust, pages 157–164. IEEE, 2011.chanel2008boredom G. Chanel, C. Rebetez, M. Bétrancourt, and T. Pun. Boredom, Engagement and Anxiety as Indicators for Adaptation to Difficulty in Games. In Proceedings of the 12th International Conference on Entertainment and Media in the Ubiquitous Era, pages 13–17. ACM, 2008.debeauvais2011if T. Debeauvais, B. Nardi, D. Schiano, N. Ducheneaut, and N. Yee. If You Build It They Might Stay: Retention Mechanisms in World of Warcraft. In Proceedings of the 6th International Conference on Foundations of Digital Games, pages 180–187. ACM, 2011.ducheneaut2006alone N. Ducheneaut, N. Yee, E. Nickell, and R. Moore. “Alone Together?” Exploring the Social Dynamics of Massively Multiplayer Online Games. In Proceedings of the 24th Conference on Human Factors in Computing Systems, pages 407–416. ACM, 2006.ducheneaut2006building N. Ducheneaut, N. Yee, E. Nickell, and R. Moore. Building an MMO With Mass Appeal: A Look at Gameplay in World of Warcraft. Games and Culture, 1(4):281–317, 2006.fields2009connective D. Fields and Y. Kafai. A Connective Ethnography of Peer Knowledge Sharing and Diffusion in a Tween Virtual World. International Journal of Computer-Supported Collaborative Learning, 4(1):47–68, 2009.kawale2009churn J. Kawale, A. Pal, and J. Srivastava. Churn Prediction in MMORPGs: A Social Influence Based Approach. In Proceedings of the Computational Science and Engineering, volume 4, pages 423–428. IEEE, 2009.kim2016proficiency J. Kim, B. C. Keegan, S. Park, and A. Oh. The Proficiency-Congruency Dilemma: Virtual Team Design and Performance in Multiplayer Online Games. In Proceedings of the 34th Conference on Human Factors in Computing Systems, pages 4351–4365. ACM, 2016.Kwak2015linguistic H. Kwak and J. Blackburn. Linguistic Analysis of Toxic Behavior in an Online Video Game, pages 209–217. Springer International Publishing, 2015.kwak2015exploring H. Kwak, J. Blackburn, and S. Han. Exploring Cyberbullying and Other Toxic Behavior in Team Competition Online Games. In Proceedings of the 33rd Conference on Human Factors in Computing Systems. ACM, 2015.mulligan2003developing J. Mulligan and B. Patrovsky. Developing Online Games: An Insider's Guide. New Riders, 2003.nardi2007learning B. Nardi, S. Ly, and J. Harris. Learning Conversations in World of Warcraft. In Proceedings of the 40th Annual Hawaii International Conference on System Sciences, pages 79–79. IEEE, 2007.park2016persistent K. Park, I. Weber, M. Cha, and C. Lee. Persistent Sharing of Fitness App Status on Twitter. In Proceedings of the 19th Conference on Computer Supported Cooperative Work & Social Computing, pages 184–194. ACM, 2016.pittman2007measurement D. Pittman and C. GauthierDickey. A Measurement Study of Virtual Populations in Massively Multiplayer Online Games. In Proceedings of the 6th SIGCOMM Workshop on Network and System Support for Games, pages 25–30. ACM, 2007.shores2014identification K. Shores, Y. He, K. Swanenburg, R. Kraut, and J. Riedl. The Identification of Deviance and Its Impact on Retention in a Multiplayer Game. In Proceedings of the 17th Conference on Computer Supported Cooperative Work & Social Computing, pages 1356–1365. ACM, 2014.steinkuehler2006everybody C. Steinkuehler and D. Williams. Where Everybody Knows Your (Screen) Name: Online Games as “ Third Places”. Journal of Computer-Mediated Communication, 11(4):885–909, 2006.tandoc2015facebook E. Tandoc, P. Ferrucci, and M. Duffy. 
Facebook Use, Envy, and Depression among College Students: Is Facebooking Depressing? Computers in Human Behavior, 43:139–146, 2015.tibshirani1996regression R. Tibshirani. Regression Shrinkage and Selection via the Lasso. Journal of the Royal Statistical Society. Series B (Methodological), pages 267–288, 1996.tyack2016appeal A. Tyack, P. Wyeth, and D. Johnson. The Appeal of MOBA Games: What Makes People Start, Stay, and Stop. In Proceedings of the Annual Symposium on Computer-Human Interaction in Play, pages 313–325. ACM, 2016.williams2008plays D. Williams, N. Yee, and S. Caplan. Who Plays, How Much, and Why? A Behavioral Player Census of a Virtual World. Journal of Computer Mediated Communication, 13(4):993–1018, 2008.yee2006motivations N. Yee. Motivations for Play in Online Games. CyberPsychology & Behavior, 9(6):772–775, 2006.
http://arxiv.org/abs/1702.08005v1
{ "authors": [ "Kunwoo Park", "Meeyoung Cha", "Haewoon Kwak", "Kuan-Ta Chen" ], "categories": [ "cs.SI", "cs.HC" ], "primary_category": "cs.SI", "published": "20170226090859", "title": "Achievement and Friends: Key Factors of Player Retention Vary Across Player Levels in Online Multiplayer Games" }
1Department of Astronomy, School of Astronomy and Space Science, Nanjing University, Nanjing 210023, China; hyf@nju.edu.cn 2Key Laboratory of Modern Astronomy and Astrophysics (Nanjing University), Ministry of Education, Nanjing 210023, China The true ground state of hadronic matter may be strange quark matter (SQM). Consequently, the observed pulsars may actually be strange quark stars rather than neutron stars. However, proving or disproving the SQM hypothesis remains a difficult problem, owing to the similarity between the macroscopic characteristics of strange quark stars and neutron stars. Here we propose a promising method to probe the existence of strange quark matter. In the framework of the SQM hypothesis, strange quark dwarfs and even strange quark planets can also stably exist. Noting that SQM planets will not be tidally disrupted even when they get very close to their host stars, owing to their extreme compactness, we argue that we could identify SQM planets by searching for very close-in planets among extrasolar planetary systems. In particular, we should keep our eyes on possible pulsar planets with orbital radii less than ∼ 5.6 × 10^10 cm and periods less than ∼ 6100 s. A thorough search among the currently detected ∼ 2950 exoplanets around normal main sequence stars has failed to identify any stable close-in objects that meet the SQM criteria, i.e. objects lying in the tidal disruption region for normal matter planets. However, the pulsar planet PSR J1719-1438B, with an orbital radius of ∼ 6 × 10^10 cm and an orbital period of 7837 s, is encouragingly found to be a good candidate. Searching for Strange Quark Matter Objects in Exoplanets Y. F. Huang1,2, and Y. B. Yu1,2 December 30, 2023 =========================================================§ INTRODUCTION Normal matter is constituted of electrons and nucleons. While there is still no evidence that an electron can be further divided, each nucleon is found to be composed of three up and down quarks. Pulsars are generally believed to be neutron stars, which are mainly made up of neutrons that agglomerate to form a highly condensed state. With a typical mass of ∼ 1.4 M_⊙ and a radius of only ∼ 10 km, the density of neutron stars can reach several times the nuclear saturation density at the center. However, the physics of matter at these extremely high densities is still quite unclear to us <cit.>. For example, hyperons and baryon resonances (Σ, Λ, Ξ, Δ), and even boson condensates (π^-, K^-), may appear; quark (u, d) deconfinement may also happen. In particular, it has long been suggested that an even more exotic state, strange quark matter (SQM), may exist inside <cit.>. Strange quark matter is constituted of almost equal numbers of u, d and s quarks, with the s quark number slightly smaller due to its relatively higher static mass. It has been conjectured that SQM may be the true ground state of hadronic matter <cit.>, since its energy per baryon could be less than that of the most stable atomic nuclei such as ^56Fe and ^62Ni.The existence of strange quark stars (shortened as “strange stars”) was consequently predicted based on the SQM hypothesis (also known as the Bodmer-Witten hypothesis) <cit.>. Strange stars could simply be bare SQM objects, or bulk SQM cores enveloped by thin nuclear crusts <cit.>.
The possible existence of nuclear crusts makes strange stars very similar to normal neutron stars for a distant observer <cit.>, which means it is very difficult to distinguish these two kinds of intrinsically distinct stars. An interesting suggestion is that strange stars can spin at extremely short periods (less than ∼ 1 ms) <cit.> due to the large shear and bulk viscosity of SQM <cit.>, while the minimum spin period (P_ spin) of normal neutron stars can hardly reach the submillisecond range <cit.>. It is thus suggested that P_ spin < 1 ms can be used as a criterion to identify a strange star <cit.>. However, not all strange stars necessarily spin at such an extreme speed. Furthermore, the lifetime over which a strange star can maintain a submillisecond spin period should be very short even if it has an initial period of P_ spin < 1 ms at birth, due to the very strong electromagnetic emission of the fast-spinning dipolar magnetic field. On the technical side, it is also difficult to detect submillisecond pulsars observationally. In fact, according to the ATNF pulsar catalogue (web site: www.atnf.csiro.au/people/pulsar/psrcat), the record for the smallest spin period of pulsars is still ∼ 1.40 ms, and only about 80 pulsars have periods less than 3 ms among all the ∼ 2560 pulsars observed so far. All these factors make this method impractical at the moment.It has also been noted that the mass-radius relations are different for these two kinds of stars. According to the simplest MIT bag model <cit.>, it is M ∝ R^3 for strange stars, but M ∝ R^-3 for neutron stars <cit.>. Unfortunately, this method is severely limited by the fact that the masses and radii of these compact stars cannot yet be measured accurately enough. The fact that strange stars and neutron stars have almost the same radius at the typical pulsar mass of 1.4 M_⊙ <cit.> adds further difficulty to the application <cit.>. Several other methods have also been suggested, based on the different cooling behaviors <cit.> or on gravitational wave emission <cit.>. But either because the difference between strange stars and neutron stars is subtle and inconclusive, or because the practice is extremely difficult currently, we still do not have a satisfactory method to discriminate between them after more than 40 years of extensive investigation <cit.>.It is interesting to note that small chunks of SQM with baryon number lower than 10^7 can stably exist according to the SQM hypothesis. Consequently, there is effectively no limitation on the minimum mass of strange stars. It means the SQM version of white dwarfs, i.e. strange dwarfs, can exist, and even strange planets may be present in the Universe <cit.>. Noting that strange planets can spiral very close to their host strange stars without being tidally disrupted owing to their extreme compactness, Geng et al. (2015) suggested that these merger systems would serve as new sources of gravitational wave bursts and could be used as an effective probe of SQM. This is a very hopeful new method. The only concern is that it would take an extremely long time for a strange planet to have a chance to merge with its host. According to the estimate of Geng et al., the event rate detected by even a next-generation gravitational wave experiment such as the Einstein Telescope would not exceed a few per year <cit.>.
Thus the goal is still far from reachable in the near future.In this study, we suggest that we could probe the existence of SQM by searching for close-in planets among extrasolar planetary systems. This method can significantly increase the opportunity for success if the SQM hypothesis is correct. § EXTREMELY SMALL TIDAL DISRUPTION RADIUS Strange planets are SQM objects of planetary mass. They can be used to test the SQM hypothesis. The basic idea relies on the gigantic difference between the tidal disruption radius of an SQM planet and that of a normal matter one.When a planet orbits its host star, different gravitational forces (from the host star) are exerted on different parts of the planet due to their slight differences in distance to the host. This is the so-called tidal effect. The tidal force tends to tear the planet apart, but it can be resisted by the self-gravity of the planet when the two objects are still far apart. When the two objects approach each other, the tidal effect becomes stronger <cit.>. There exists a critical distance, the so-called tidal disruption radius (r_td), at which the tidal force is exactly balanced by the self-gravity of the planet <cit.>. If the distance is smaller than the tidal disruption radius (r_td), the tidal force dominates and the planet is completely broken up. An analytical expression for r_td has been derived as r_td ≈ (6M/πρ)^1/3 , where M is the mass of the central host star and ρ is the density of the planet <cit.>.SQM planets are extremely compact and their densities are typically ∼ 4 × 10^14 g/cm^3. As a result, the tidal disruption radius for strange planets can be scaled as,r_td(SQM) ≈ 1.5 × 10^6(M/1.4 M_⊙)^1/3(ρ/4 × 10^14 g/cm^3)^-1/3cm.We see that the tidal disruption radius for strange planets is as small as r_td(SQM) ∼ 1.5 × 10^6 cm. Thus a strange planet will retain its integrity even when it almost reaches the surface of its central host strange star.On the contrary, the tidal disruption radius for a normal matter planet is usually much larger. For example, for typical planets with a density of 8 g/cm^3, the tidal disruption radius is ∼ 8.7 × 10^10 cm. It means a normal matter planet will typically be disrupted at a distance of ∼ 10^11 cm, and we will in no way be able to see a normal planet orbiting its host at a distance much less than this value. Even when we take the planet density as high as 30 g/cm^3, the tidal disruption radius will still be as large as ∼ 5.6 × 10^10 cm.The analyses above remind us that we could test the SQM hypothesis through exoplanet observations: if we detected a close-in exoplanet that lies in the tidal disruption region for normal matter (i.e., with an orbital radius significantly less than ∼ 5.6 × 10^10 cm), it must be a strange planet.Note that when a solid asteroid (of mass m, radius r and density ρ) gets elongated in the radial direction in the centripetal gravitational field of its host (mass M), the elongation stress inside the object can also help to resist the tidal force. This leads to a reduced tidal disruption radius. To consider this effect, we can approximate the elongated asteroid by a right circular cylinder of length 2 r. The elongation stress is maximal at the asteroid center, where it is <cit.>,s_c=∫_0^r 2 G M/d^3 ρ l dl = G M ρ r^2/d^3,where d is the distance between the asteroid and the host star.
Assuming that the strength of the material is s and letting s_c equal s, we can derive the tidal disruption radius as <cit.>r_td=(G M ρ r^2 / s)^1/3≈ 2.4 × 10^9 m_18^2/9 s_10^-1/3(ρ/8 g/cm^3)^1/9(M/1.4 M_⊙)^1/3cm,where m_18 = m / 10^18 g and s_10 = s / 10^10 dyn/cm^2. This equation is applicable if the elongation stress dominates over the self-gravity. However, for an Fe-Ni planet of the Earth's mass M_⊕ = 6.0 × 10^27 g, density ρ = 8 g/cm^3, radius r = 5.6 × 10^8 cm, and strength s = 10^10 dyn/cm^2, we find r_td = 3.6 × 10^11 cm for M = 1.4 M_⊙ from the above equation. This is significantly larger than the radius derived from Equation (1). If the density of the planet is higher (which is of more interest in our study), the tidal disruption radius will be even larger. Thus for the relatively large planets studied here, this effect is not significant and can be safely omitted. § EXAMINING THE OBSERVED EXOPLANETS Exoplanets can be detected in various ways <cit.>. Currently the most productive method is transit photometry, i.e. monitoring the periodic brightness variation of the host star induced by the transit of a planet across the stellar disk. In this respect, the KEPLER mission is undoubtedly the most successful project <cit.>. KEPLER is a space-based optical telescope with a 0.95 m aperture. It was launched in 2009 by NASA to monitor ∼ 170,000 stars over a period of four years. With a 105 square degree field of view and ∼ 10 ppm photometric accuracy, it successfully detected over 4600 planetary candidates and confirmed over a thousand exoplanets. An important advantage of the transit photometry method is that it is even possible to measure the size of the planet, so that its density can be derived <cit.>. In some special cases, even the atmosphere of the planet can be probed <cit.>.Another widely used method is radial velocity measurement, which inspects the regular radial velocity variations of the host star caused by the orbital movement of the planet. Long-term accuracies of several meters per second in the host's radial motion are needed; the method can effectively yield all orbital elements of the planets except the orbital inclination. Thirdly, for pulsar planets, timing observation is an effective method, since the orbital motion of planets affects the arrival times of the pulsar's radio pulses. In fact, the first extrasolar planet was detected orbiting PSR B1257+12 through just this method <cit.>. Finally, several other less commonly used methods, such as astrometry, gravitational microlensing, and direct imaging, have also been successfully applied and have led to the detection of a small portion of the currently known exoplanets.Due to continuous improvements in the observational techniques above, the number of observed exoplanets has expanded quickly in recent years <cit.>. Several catalogues are available for exoplanets, such as the Exoplanet Orbit Database (shortened as EOD hereafter) at exoplanets.org, the Extrasolar Planets Encyclopaedia at exoplanet.eu, the NASA Exoplanet Archive at exoplanetarchive.ipac.caltech.edu, and the KEPLER exoplanet catalogue at archive.stsci.edu/kepler. In this study, we use the EOD database to carry out the statistics. As of 2017 May 27, there were 5288 planets in the catalogue, of which 2950 are confirmed planets and 2338 are candidates. Among the confirmed planets, 322 samples are tabulated with inferred densities, and an additional 2108 samples are tabulated with both mass and radius values, so that their densities can be calculated.
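As a quick numerical cross-check of the tidal disruption radii quoted above, the following minimal Python sketch (cgs units throughout) evaluates Equations (1) and (3) and the density calculation used for the EOD samples; the constants and sample inputs are our own illustrative assumptions, not values taken from the catalogue tables.

import math

G = 6.674e-8            # gravitational constant [cm^3 g^-1 s^-2]
MSUN = 1.989e33         # solar mass [g]
M_HOST = 1.4 * MSUN     # typical host star mass used in the text

def density(mass_g, radius_cm):
    """Planet density from tabulated mass and radius, as for the EOD samples."""
    return mass_g / (4.0 / 3.0 * math.pi * radius_cm ** 3)

def r_td_selfgravity(M, rho):
    """Equation (1): disruption radius of a planet bound by self-gravity."""
    return (6.0 * M / (math.pi * rho)) ** (1.0 / 3.0)

def r_td_strength(M, rho, r, s):
    """Equation (3): disruption radius when material strength s dominates."""
    return (G * M * rho * r ** 2 / s) ** (1.0 / 3.0)

print(density(5.97e27, 6.37e8))           # ~5.5 g/cm^3 for an Earth analogue
print(r_td_selfgravity(M_HOST, 8.0))      # ~8.7e10 cm, a typical normal planet
print(r_td_selfgravity(M_HOST, 30.0))     # ~5.6e10 cm, the limiting case in the text
print(r_td_selfgravity(M_HOST, 4.0e14))   # a few times 10^6 cm, the SQM scale of Eq. (2)
print(r_td_strength(M_HOST, 8.0, 5.6e8, 1.0e10))  # ~3.6e11 cm, Earth-mass Fe-Ni planet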
The planet masses are available for 2937 planets, and the orbital radii are given for 2925 samples. With so many exoplanets in hand, we can try to search for possible SQM objects among them.Since the planet density is a key factor that determines the tidal disruption radius, in Figure 1 we first plot the density distribution of all the confirmed exoplanets with available densities (2430 objects in total). The densities of most exoplanets (about 99% of all the samples) are less than 10 g/cm^3. Only 4 exoplanets are listed as denser than 30 g/cm^3. Note that these high-density planets (with ρ > 30 g/cm^3) generally have large error bars, so their density measurements are highly uncertain. Figure 1 indicates that we can take 30 g/cm^3 as a reasonable upper limit for the density of normal hadronic planets.According to Equation (1), the tidal disruption radius is r_td ≈ 5.6 × 10^10 cm when the planet density is 30 g/cm^3 and the host star mass is 1.4 M_⊙. So, a direct strategy is to see whether there are any close-in exoplanets with orbital radii significantly less than the critical radius of 5.6 × 10^10 cm. In Figure 2, we plot the distribution of the orbital radius (a) for all the confirmed exoplanets (2925 objects in total). Typically, the orbital radii are between 0.03 and 10 AU. For exoplanets around normal main sequence stars, only 3 objects have radii less than 0.01 AU. The smallest radius is 0.006 AU (9 × 10^10 cm), but even this value is still well above the critical tidal disruption radius of 5.6 × 10^10 cm for a very dense object of ρ ∼ 30 g/cm^3. Thus no clear clues pointing to the existence of strange planets around normal main sequence stars are revealed by this plot. Since the tidal disruption radius depends on both the planet density and the host star mass, it is more reasonable to evaluate the closeness of planets by comparing their orbital radii with the corresponding tidal disruption radii. We thus define the closeness of a planet as a/r_td. For the planets with densities available (2430 objects, around main sequence stars), we have calculated their tidal disruption radii (r_td) and the corresponding closeness parameter. Fig. 3a illustrates the mass distribution vs. the closeness of these planets. It can be clearly seen that all the planets lie outside the tidal disruption region, which confirms a > r_td as a definite limitation for the survival of planets.For the remaining 520 exoplanets without a density measurement (as listed in the EOD database), we have assumed a typical value of 8 g/cm^3 and plotted their distributions in Fig. 3b. Again we see that no planets lie within the tidal disruption region.From Fig. 3, it can be seen that no clues pointing toward the existence of any SQM objects can be found in the EOD database. This is not an unexpected result. SQM planets, if they really exist, are unlikely to be found orbiting normal main sequence stars, but should rather be around compact stars (especially strange stars). Thus we should pay special attention to exoplanets around pulsars. Note that for pulsar planets the transit photometry method is not effective, and we mainly rely on the pulsar timing method to detect them. In this case, the densities of the planets are usually unavailable.In fact, at least 5 planets have been detected orbiting three pulsars <cit.>, i.e. PSR B1257+12 <cit.>, PSR J1719-1438 <cit.>, and PSR B1620-26 <cit.>. PSR B1257+12 has three planets, and each of the other two pulsars has one planetary companion.
All these planets were detected through the pulsar timing method, so no radius measurements are directly available for them. In Fig. 3b, we have also plotted the 5 pulsar planets, specially marking them with star symbols. Again we assumed a typical density of 8 g/cm^3 in the plot. While four pulsar planets are safely beyond the tidal disruption region, we notice that one planet lies in the disruption region (with a/r_td = 0.69). It is associated with PSR J1719-1438, a 5.7-millisecond pulsar, and has an orbital radius of ∼ 6.0 × 10^10 cm and an orbital period of ∼ 2.2 hours. Interestingly, this problem was already noticed by Bailes et al., who argued that this companion must be denser than 23 g/cm^3 to survive the strong tidal force of its host <cit.>. They went further to suggest that the planetary companion may actually be a carbon white dwarf. However, with a mass comparable to that of Jupiter, a white dwarf of such an ultralow mass would be exceedingly rare. A more reasonable suggestion was made by Horvath, who argued that it must be an exotic quark object <cit.>. Our current study strongly supports Horvath's suggestion, i.e. that the planet of PSR J1719-1438 is a possible SQM candidate. It is thus very encouraging that, while only 5 pulsar planets have been detected, we already have one SQM candidate among them. It hints that close-in exoplanets would be a hopeful and powerful tool to test the SQM hypothesis.§ DETECTABILITY OF CLOSE-IN PULSAR PLANETS Searching for close-in exoplanets around pulsars should be the main direction of our future efforts. Due to their extreme closeness, these planets will only exert a very small radial velocity perturbation on the central compact host, which will be difficult to detect through pulsar timing observations. Next, we give an estimate of the lower mass limit of the planets that could be detected with current observational techniques.Let us consider a planet of mass m orbiting a pulsar (M). In half of the orbital period, the pulsar has a positive radial velocity perturbation with respect to us, owing to the existence of the small companion, while in the other half of the orbit it has a negative velocity perturbation. As a result, the topocentric times of arrival (TOAs) of its clock-like pulses will systematically deviate from the normal rhythm. The accumulated TOA deviation can be as large as several milliseconds in each half orbit and can potentially be detected through long-term timing observations. In fact, assuming a circular orbit, the planet mass is connected with the semi-amplitude Δt of the corresponding TOA variations as <cit.>m sini ≈ 21.3 M_⊕ (Δt/1 ms) (P_orb/1 day)^-2/3 (M/1.4 M_⊙)^2/3,where P_orb is the planet's orbital period, i is the orbital inclination, and M_⊕ = 6.0 × 10^27 g is the Earth mass.The pulsar timing method essentially also measures the radial velocity perturbation. By accumulating the TOA residuals induced by the radial velocity variation over half an orbit, and with the microsecond precision of timing observations, it can equivalently measure the radial velocity perturbation at an unprecedented accuracy of ∼ 1 cm/s. As a contrast, traditional radial velocity measurement through optical spectroscopy can only achieve an accuracy of ∼ 1 m/s currently.
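The equivalence between timing precision and radial velocity accuracy, and the mass scale implied by Equation (4), can be checked with a short sketch; the adopted numbers are illustrative assumptions only, and the velocity conversion is only a rough order-of-magnitude estimate (a residual of amplitude Δt accumulated over half an orbit of period P corresponds to a line-of-sight displacement cΔt, hence ΔV ∼ cΔt/(P/2)).

C_LIGHT = 2.998e10      # speed of light [cm/s]

def equivalent_dv(delta_t_s, p_orb_s):
    """Rough radial-velocity accuracy implied by TOA residuals of amplitude
    delta_t accumulated over half an orbit of period p_orb."""
    return C_LIGHT * delta_t_s / (p_orb_s / 2.0)

def m_sini_earth(delta_t_ms, p_orb_day, m_host=1.4):
    """Equation (4): companion mass (in Earth masses) for a circular orbit,
    given the TOA semi-amplitude and the orbital period."""
    return 21.3 * delta_t_ms * p_orb_day ** (-2.0 / 3.0) * (m_host / 1.4) ** (2.0 / 3.0)

print(equivalent_dv(1.0e-6, 86400.0))   # ~0.7 cm/s for 1 us residuals on a 1 day orbit
print(m_sini_earth(1.0, 1.0))           # ~21 Earth masses for a 1 ms semi-amplitude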
Timing observation is thus an ideal method that could be effectively used to search for possible close-in strange planets around pulsars.In terms of the radial velocity variation (ΔV) of the host pulsar, Equation (4) can be conveniently expressed asm sini ≈ (M a/G)^1/2 ΔV ≈ 0.0034 M_⊕ (M/1.4 M_⊙)^1/2 (a/10^10 cm)^1/2 ΔV/(1 cm/s), where G is the gravitational constant. Taking 30 g/cm^3 as a secure upper limit for the density of typical normal planets, we get the critical tidal disruption radius r_td ≈ 5.6 × 10^10 cm (Section 2). We thus need to search for strange planets with orbital radii smaller than this value. In fact, all the currently detected exoplanets (except the pulsar planet PSR J1719-1438B) lie far beyond this region (Fig. 2). From Equation (5), we see that at the limiting radius (a ∼ 5.6 × 10^10 cm), all planets more massive than ∼ 0.008 M_⊕ can be detected by current pulsar timing observations. For more close-in strange planets, even less massive SQM planets can also be detected. Taking typical values of i = 45^o, ΔV = 1 cm/s, and M = 1.4 M_⊙, we plot in Fig. 4 the limiting mass of planets that could be detected. The figure gives us the encouraging information that close-in strange planets need not be very massive to be detected with current observational techniques.Lying in the tidal disruption region for normal matter, these strange planets will also have very small orbital periods. According to Kepler's law, the radius and period of the orbit are related bya^3/P_orb^2 ≈ G M/4 π^2.At the limiting radius of r_td ≈ 5.6 × 10^10 cm, the period is P_orb ≈ 6100 s. For more close-in orbits, the periods will be even smaller. In Fig. 5, the relation between P_orb and a is plotted for these close-in orbits. From this figure, we see that in addition to the criterion of a < 5.6 × 10^10 cm, a small orbital period of P_orb < 6100 s is another specific feature of SQM planets. PSR J1719-1438B has an orbital radius of ∼ 6 × 10^10 cm and an orbital period of 7837 s. Its orbital parameters are slightly above the SQM criteria, but it can still be regarded as a good candidate. § CONCLUSIONS AND DISCUSSION Observationally discriminating strange stars from neutron stars is an important but challenging problem <cit.>. A few possible methods have previously been suggested in the literature, but they are either inconclusive or currently impractical. We here propose a unique method to test the SQM hypothesis: searching for close-in exoplanets with very small orbital radii (a < 5.6 × 10^10 cm) and very small orbital periods (P_orb < 6100 s). It is based on the fact that SQM planets are extremely compact and can survive even when they are in the tidal disruption region for normal hadronic planets. We have examined all the detected exoplanets around main sequence stars and found no clues pointing toward the existence of SQM objects among them. However, the pulsar planet PSR J1719-1438B, which has an orbital radius of ∼ 6 × 10^10 cm and an orbital period of 7837 s, is found to be an interesting candidate. We stress that in the future, such efforts should focus mainly on exoplanets around pulsars, since SQM planets are most likely associated with such compact stars (which themselves should also be strange quark stars in this case). Theoretically, SQM planets can be formed in a few ways. First, at the birth of an SQM star (either from the phase transition of a massive neutron star, or from the merger of two neutron stars), plenty of small SQM nuggets should be ejected.
These SQM nuggets will “contaminate” the surrounding normal planets and convert them into SQM planets. It means that if the Bodmer-Witten hypothesis is correct, so that neutron stars are actually strange stars, then strange planets should also be quite common. Second, SQM clumps of planetary masses may be ejected from a strange quark star at its birth, because the newly formed SQM host star should be hot and highly turbulent, giving birth to high-velocity eddies <cit.>. These clumps may finally become planets around the host star due to its deep gravitational potential well. Interestingly, the SQM planets formed in this way are most likely close-in, since the ejection may not be too fierce. Third, planetary SQM objects may have formed directly at an early stage of our Universe, i.e. the so-called quark phase stage, when the mean density of the Universe was extremely high <cit.>. Some of these SQM objects may survive and be captured by compact stars (and even by main sequence stars) to form planetary systems at later stages. With an unprecedented equivalent radial velocity accuracy of ∼ 1 cm/s, the pulsar timing method could reveal close-in planets as small as ∼ 10^-2 M_⊕. We appeal to radio astronomers to pay more attention to searching for such close-in exoplanets in the future. If found, they would lead to a final solution of this long-standing and highly disputed fundamental problem.The authors thank Jin-Jun Geng and Li-Jun Gou for helpful discussions. This work was supported by the National Natural Science Foundation of China with Grant No. 11473012, by the National Basic Research Program of China with Grant No. 2014CB845800, and by the Strategic Priority Research Program of the Chinese Academy of Sciences “Multi-waveband Gravitational Wave Universe” (Grant No. XDB23040000). This research has made use of the Exoplanet Orbit Database and the Exoplanet Data Explorer at exoplanets.org. [Adriani et al.(2015)]Adriani15 Adriani, O., Barbarino, G. C., Bazilevskaya, G. A., et al. 2015, , 115, 111101 [Alcock et al.(1986)]Alcock86 Alcock, C., Farhi, E., & Olinto, A. 1986, , 310, 261 [Andersson et al.(2002)]Andersson02 Andersson, N., Jones, D. I., & Kokkotas, K. D. 2002, , 337, 1224 [Armstrong et al.(2016)]Armstrong16 Armstrong, D. J., de Mooij, E., Barstow, J., et al. 2016, Nat. Astron., 1, 0004 [Backer et al.(1993)]Backer93 Backer, D. C., Foster, R. S., & Sallmen, S. 1993, , 365, 817 [Bailes et al.(2011)]Bailes11 Bailes, M., Bates, S. D., Bhalerao, V., et al. 2011, Science, 333, 1717 [Bauswein et al.(2009)]Bauswein09 Bauswein, A., Janka, H. T., Oechslin, R., et al. 2009, , 103, 011101 [Bauswein et al.(2010)]Bauswein10 Bauswein, A., Oechslin, R., & Janka, H. T. 2010, , 81, 024012 [Baym et al.(1971)]Baym71 Baym, G., Pethick, C., & Sutherland, P. 1971, , 170, 299 [Bhattacharyya et al.(2016)]Bhattacharyya16 Bhattacharyya, S., Bombaci, I., Logoteta, D., Thampan, A. V. 2016, , 457, 3101 [Bodmer(1971)]Bodmer71 Bodmer, A. R. 1971, , 4, 1601 [Borucki(2016)]Borucki16 Borucki, W. J. 2016, Rep. Prog. Phys., 79, 036901 [Cheng et al.(1998)]Cheng98 Cheng, K. S., Dai, Z. G., & Lu, T. 1998, Int. J. Mod. Phys. D, 7, 139 [Colgate & Petschek(1981)]Colgate81 Colgate, S. A., & Petschek, A. G. 1981, , 248, 771 [Cottingham et al.(1994)]Cottingham94 Cottingham, W. N., Kalafatis, D., & Vinh Mau, R. 1994, , 73, 1328 [Coughlin et al.(2016)]Coughlin16 Coughlin, J. L., Mullally, F., Thompson, S. E., et al. 2016, Astrophys. J. Suppl. Ser., 224, 12 [de Avellar & Horvath(2010)]Avellar10 de Avellar, M. G. B., & Horvath, J. E. 2010, Int J. Mod.
Phys. D, 19, 1937[Drago et al.(2014)]Drago14 Drago, A., Lavagno, A., & Pagliara, G. 2014, , 89, 043014[Drago & Pagliara(2016)]Drago16 Drago, A., & Pagliara, G. 2016, Eur. Phys. J. A, 52, 41[Farhi & Jaffe(1984)]Farhi84 Farhi, E., & Jaffe, R. L. 1984, , 30, 2379[Friedman et al.(1989)]Friedman89 Friedman, J. L., Ipser, J. R., & Parker, L. 1989, , 62, 3015[Frieman & Olinto(1989)]Frieman89 Frieman, J. A., & Olinto, A. V. 1989, , 341, 633[Geng et al.(2015)]Geng15 Geng, J. J., Huang, Y. F., & Lu, T. 2015, , 804, 21[Glendenning(1989)]Glendenning89 Glendenning, N. K. 1989, , 63, 2629[Glendenning et al.(1995)]Glendenning95 Glendenning, N. K., Kettner, C., & Weber, F. 1995, , 74, 3519 [Gu et al.(2003)]Gu03 Gu, P. G., Lin, D. N. C., & Bodenheimer, P. H. 2003, , 588, 509[Han et al.(2014)]Han14 Han, E., Wang, S. X., Wright, J. T., et al. 2014, Pub. Astron. Soc. Pac., 126, 827[Hills(1975)]Hills75 Hills, J. G. 1975, , 254, 295[Horvath(2012)]Horvath12 Horvath, J. E. 2012, Research in Astronomy and Astrophysics, 12, 813[Itoh(1970)]Itoh70 Itoh, N. 1970, Progress of Theoretical Physics, 44, 291[Jaranowski et al.(1998)]Jaranowski98 Jaranowski, P., Królak, A., & Schutz, B. F. 1998, , 58, 063001[Jones & Andersson(2002)]Jones02 Jones, D. I., & Andersson, N. 2002, , 331, 203[Kristian et al.(1989)]Kristian89 Kristian, J., Pennypacker, C. R., Morris, D. E., et al. 1989, , 338, 234[Krivoruchenko & Martem'ianov(1991)]Krivoruchenko91 Krivoruchenko, M. I., & Martem'ianov, B. V. 1991, , 378, 628[Lattimer & Prakash(2007)]Lattimer07 Lattimer, J. M., & Prakash, M. 2007, Phys. Rep., 442, 109[Lattimer et al.(1994)]Lattimer94 Lattimer, J. M., van Riper, K. A., Prakash, M., Prakash, M. 1994, , 425, 802[Lindblom & Mendell(2000)]Lindblom00 Lindblom, L., & Mendell, G. 2000, , 61, 104003[Lorimer(2008)]Lorimer08 Lorimer, D. R. 2008, Living Reviews in Relativity, 11, 8[Madsen(1998)]Madsen98 Madsen, J. 1998, , 81, 3311[Mannarelli et al.(2015)]Mannarelli15Mannarelli, M., Pagliaroli, G., Parisi, A., Pilo, L., Tonelli, F. 2015, , 815, 81[Martin et al.(2016)]Martin16 Martin, R. G., Livio, M., & Palaniswamy, D. 2016, , 832, 122[Moraes & Miranda(2014)]Moraes14 Moraes, P. H. R. S., & Miranda, O. D. 2014, , 445, L11[Özel & Freire(2016)]Ozel16 Özel, F., & Freire, P. 2016, , 54, 401[Page & Applegate(1992)]Page92 Page, D., & Applegate, J. H. 1992, , 394, L17[Panei et al.(2000)]Panei00 Panei, J. A., Althaus, L. G., & Benvenuto, O. G. 2000, A&A, 353, 970[Perryman(2000)]Perryman00 Perryman, M. A. C. 2000, Rep. Prog. Phys., 63, 1209[Pizzochero(1991)]Pizzochero91 Pizzochero, P. M. 1991, , 66, 2425[Sawyer(1989)]Sawyer89 Sawyer, R. F. 1989, Phys. Lett. B, 233, 412[Sigurdsson et al.(2003)]Sigurdsson03 Sigurdsson, S., Richer, H. B., Hansen, B. M., et al. 2003, Science, 301, 193 [Terazawa(1979)]Terazawa79 Terazawa, H. 1979, INS-Report 336[Wang & Lu(1985)]Wang85 Wang, Q. -D., & Lu, T. 1985, Acta Astrophysica Sinica, 5, 59[Weber(2005)]Weber05 Weber, F. 2005, Progress in Particle and Nuclear Physics, 54, 193[Witten(1984)]Witten84 Witten, E. 1984, , 30, 272[Wolszczan(2012)]Wolszczan12 Wolszczan, A. 2012, New Astronomy Reviews, 56, 2[Wolszczan & Frail(1992)]Wolszczan92 Wolszczan, A., & Frail, D. A. 1992, , 355, 145[Xu & Wu(2003)]Xu03 Xu, R. X., & Wu, F. 2003, Chin. Phys. Lett., 20, 806[Xu et al.(2001)]Xu01 Xu, R. X., Zhang, B., & Qiao, G. J. 2001, Astroparticle Phys., 15, 101
http://arxiv.org/abs/1702.07978v4
{ "authors": [ "Y. F. Huang", "Y. B. Yu" ], "categories": [ "astro-ph.HE" ], "primary_category": "astro-ph.HE", "published": "20170226025432", "title": "Searching for strange quark matter objects in exoplanets" }
EUROPEAN ORGANIZATION FOR NUCLEAR RESEARCH (CERN) CERN-EP-2017-026 LHCb-PAPER-2016-056 December 30, 2023Observation of the decay B^0_s→η_cϕ and evidence for B^0_s→η_cπ^+π^-The LHCb collaboration[Authors are listed at the end of this paper.]A study of B^0_s→η_cϕ and B^0_s→η_cπ^+π^- decays is performed using pp collision data corresponding to an integrated luminosity of 3.0 fb^-1, collected with the LHCb detector in Run 1 of the LHC. The observation of the decay B^0_s→η_cϕ is reported, where the η_c meson is reconstructed in the pp̅, K^+K^-π^+π^-, π^+π^-π^+π^- and K^+K^-K^+K^- decay modes and the ϕ(1020) in the K^+ K^- decay mode. The decay B^0_s→ J/ψϕ is used as a normalisation channel. Evidence is also reported for the decay B^0_s→η_cπ^+π^-, where the η_c meson is reconstructed in the pp̅ decay mode, using the decay B^0_s→ J/ψπ^+π^- as a normalisation channel. The measured branching fractions are ℬ (B^0_s→η_cϕ) = (5.01 ± 0.53 ± 0.27 ± 0.63 ) × 10^-4 ,ℬ (B^0_s→η_cπ^+ π^-) = (1.76 ± 0.59 ± 0.12 ± 0.29 ) × 10^-4 , where in each case the first uncertainty is statistical, the second systematic and the third due to the limited knowledge of the external branching fractions. Published in JHEP 07 (2017) 021. © CERN on behalf of the LHCb collaboration, licence CC-BY-4.0 (http://creativecommons.org/licenses/by/4.0/).§ INTRODUCTIONWhen a B^0_s meson decays through the b̅→c̅ c s̅ process, interference between the direct decay amplitude and the amplitude after B^0_s-B̅^0_s oscillation gives rise to a CP-violating phase, ϕ_s. This phase is well predicted within the Standard Model (SM) <cit.> and is sensitive to possible contributions from physics beyond the SM <cit.>. The phase is best measured using the “golden” channel[The simplified notation ϕ and J/ψ is used to refer to the ϕ(1020) and the J/ψ(1S) mesons throughout this article.] B^0_s→ J/ψϕ <cit.>, and the precision of this measurement is expected to be dominated by its statistical uncertainty until the end of LHC running.In addition to B^0_s→ J/ψϕ, other b̅→c̅ c s̅ modes have also been used to constrain ϕ_s <cit.>, <cit.>, including ψ(2S)ϕ <cit.>.In this paper, the first study of B^0_s→η_cϕ and B^0_s→η_cπ^+π^- decays is presented.[The use of charge-conjugate modes is implied throughout this article.] These decays also proceed dominantly through a b̅→c̅ c s̅ tree diagram, as shown in Fig. <ref>. Unlike in B^0_s→ J/ψϕ decays, the η_cϕ final state is purely CP-even, so that no angular analysis is required to measure the mixing phase ϕ_s.However, the size of the data sample recorded by the LHCb experiment in LHC Run 1 is not sufficient to perform time-dependent analyses of B^0_s→η_cϕ and B^0_s→η_cπ^+π^- decays.Instead, the first measurement of their branching fractions is performed. No prediction is available for either ℬ(B^0_s→η_cϕ) or ℬ(B^0_s→η_cπ^+π^-).Assuming ℬ(B^0_s→η_cϕ)/ℬ(B^0_s→ J/ψϕ) = ℬ(B^0_s→η_cπ^+π^-)/ℬ(B^0_s→ J/ψπ^+π^-) = ℬ(B^0→η_c K^0)/ℬ(B^0→ J/ψ K^0) allows ℬ(B^0_s→η_cϕ) and ℬ(B^0_s→η_cπ^+π^-) to be estimated. From the known values of ℬ(B^0→η_c K^0), ℬ(B^0→ J/ψ K^0), ℬ(B^0_s→ J/ψϕ) and ℬ(B^0_s→ J/ψπ^+π^-) <cit.>, one findsℬ(B^0_s→η_cϕ) = 𝒪(10^-3) , ℬ(B^0_s→η_cπ^+π^-) = 𝒪(10^-4). The measurements presented in this paper are performed using a dataset corresponding to 3 fb^-1 of integrated luminosity collected by the LHCb experiment in pp collisions during 2011 and 2012, at centre-of-mass energies of 7 TeV and 8 TeV, respectively. The paper is organised as follows: Section <ref> describes the LHCb detector and the procedure used to generate simulated events;an overview of the strategy for the measurements of ℬ(B^0_s→η_cϕ) and ℬ(B^0_s→η_cπ^+π^-) is given in Sec. <ref>; the selection of candidate signal decays is described in Sec. <ref>; the methods to determine the reconstruction and selection efficiencies are discussed in Sec. <ref>. Section <ref> describes the fit models.The results and associated systematic uncertainties are discussed in Secs.
The measurements presented in this paper are performed using a dataset corresponding to 3 fb^-1 of integrated luminosity collected by the LHCb experiment in pp collisions during 2011 and 2012 at centre-of-mass energies of 7 TeV and 8 TeV, respectively. The paper is organised as follows: Section <ref> describes the LHCb detector and the procedure used to generate simulated events; an overview of the strategy for the measurements of ℬ(B^0_s→η_cϕ) and ℬ(B^0_s→η_cπ^+π^-) is given in Sec. <ref>; the selection of candidate signal decays is described in Sec. <ref>; the methods to determine the reconstruction and selection efficiencies are discussed in Sec. <ref>. Section <ref> describes the fit models. The results and associated systematic uncertainties are discussed in Secs. <ref> and <ref>. Finally, conclusions are presented in Sec. <ref>.

§ DETECTOR AND SIMULATION

The LHCb detector <cit.> is a single-arm forward spectrometer covering the pseudorapidity range 2<η<5, designed for the study of particles containing b or c quarks. The detector includes a high-precision tracking system consisting of a silicon-strip vertex detector surrounding the pp interaction region, a large-area silicon-strip detector located upstream of a dipole magnet with a bending power of about 4 Tm, and three stations of silicon-strip detectors and straw drift tubes placed downstream of the magnet. The tracking system provides a measurement of momentum, p, of charged particles with a relative uncertainty that varies from 0.5% at low momentum to 1.0% at 200 GeV/c. The minimum distance of a track to a primary vertex (PV), the impact parameter (IP), is measured with a resolution of (15+29/p_T) μm, where p_T is the component of the momentum transverse to the beam, in GeV/c. Different types of charged hadrons are distinguished using information from two ring-imaging Cherenkov detectors. Photons, electrons and hadrons are identified by a calorimeter system consisting of scintillating-pad and preshower detectors, an electromagnetic calorimeter and a hadronic calorimeter. Muons are identified by a system composed of alternating layers of iron and multiwire proportional chambers. The online event selection is performed by a trigger <cit.>, which consists of a hardware stage, based on information from the calorimeter and muon systems, followed by a software stage, which applies a full event reconstruction.

Samples of simulated events are used to determine the effects of the detector geometry, trigger, and selection criteria on the invariant-mass distributions of interest for this paper. In the simulation, pp collisions are generated using Pythia <cit.> with a specific LHCb configuration <cit.>. The decay of the B^0_s meson is described by EvtGen <cit.>, which generates final-state radiation using Photos <cit.>. The interaction of the generated particles with the detector, and its response, are implemented using the Geant4 toolkit <cit.> as described in Ref. <cit.>. Data-driven corrections are applied to the simulation to account for the small level of mismodelling of the particle identification (PID) performance <cit.>. In the simulation the reconstructed momentum of every track is smeared by a small amount in order to better match the mass resolution of the data.

§ ANALYSIS STRATEGY

In the analysis of B^0_s→η_cϕ decays, the ϕ meson is reconstructed in the K^+K^- final state and the η_c meson is reconstructed in the pp̅, K^+K^-π^+π^-, π^+π^-π^+π^- and K^+K^-K^+K^- final states. For clarity, the three four-body final states are referred to as 4h throughout the paper. In determining the branching fraction, the decay B^0_s→J/ψϕ is used as a normalisation channel, where the J/ψ meson is reconstructed in the same decay modes as the η_c meson. A similar strategy is adopted for the measurement of the branching fraction of B^0_s→η_cπ^+π^- decays. However, due to the higher expected level of combinatorial background compared to B^0_s→η_cϕ decays, the η_c and J/ψ mesons are reconstructed only in the pp̅ final state in the measurement of ℬ(B^0_s→η_cπ^+π^-).

In both analyses, a two-stage fit procedure is performed. In the first stage, unbinned extended maximum likelihood (UML) fits are performed to separate signal candidates from background contributions. For the B^0_s→η_c(→pp̅)π^+π^- decay the fit is made to the pp̅π^+π^- mass distribution, while for the decays B^0_s→η_c(→pp̅)ϕ(→K^+K^-) and B^0_s→η_c(→4h)ϕ(→K^+K^-) it is made to the two-dimensional pp̅K^+K^- versus K^+K^- or 4hK^+K^- versus K^+K^- mass distributions, respectively.
The likelihood function is

L(𝐍, 𝐚) = (e^{-∑_j N_j}/n!) ∏_{l=1}^{n} ∑_j N_j P_j(m_l; 𝐚),

where j stands for the event species, N_j is the corresponding yield and 𝐍 is the vector of yields N_j, 𝐚 is the vector of fitted parameters other than yields, n is the total number of candidates in the sample, and P_j(m_l; 𝐚) is the probability density function (PDF) used to parametrise the set of invariant-mass distributions considered. The RooFit package <cit.> is used to construct the negative log-likelihood function (NLL), which is minimised using Minuit <cit.>. Using information from these fits, signal weights for each candidate, ω_l, are obtained using the sPlot technique <cit.>.

In the second stage, for B^0_s→pp̅π^+π^- candidates a weighted UML fit is made to the pp̅ invariant-mass spectrum, and weighted UML fits of the pp̅ and the 4h invariant-mass spectra are done for B^0_s→pp̅ϕ and B^0_s→4hϕ candidates, respectively, to disentangle η_c and J/ψ candidates from nonresonant (NR) and remaining background contributions, as described in Sec. <ref>. For the weighted fits, the NLL function is given by

-ln L(𝐍, 𝐚) = ζ ∑_j N_j - ζ ∑_l ω_l ln( ∑_j N_j P_j(m_l; 𝐚) ) + ln(n!),

where ζ = ∑_l ω_l / ∑_l ω_l^2 ensures proper uncertainty estimates from the weighted likelihood fit <cit.>. For the observed numbers of η_c and J/ψ candidates in final state f, N_η_c,f and N_J/ψ,f, the measured branching fraction is

ℬ(B^0_s→η_c X) = N_η_c,f/N_J/ψ,f × ℬ(B^0_s→J/ψ X) × ℬ(J/ψ→f)/ℬ(η_c→f) × ε(J/ψ)_f/ε(η_c)_f,

where X refers to either the ϕ meson or the π^+π^- pair. The branching fractions ℬ(B^0_s→J/ψϕ), ℬ(B^0_s→J/ψπ^+π^-), ℬ(J/ψ→f) and ℬ(η_c→f) are taken from Ref. <cit.>, and the efficiency correction factors, ε, are obtained from simulation. In order to maximise the sensitivity to ℬ(B^0_s→η_cϕ), a simultaneous fit to the pp̅ and 4h invariant-mass spectra is performed.
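As an illustration of how the ζ factor enters, the following sketch builds a weighted NLL of the form above for a one-dimensional, two-component toy model; the shapes, yields and weights are invented for illustration and are not the analysis code, and the constant ln(n!) term is dropped since it does not depend on the fit parameters.

```python
import numpy as np
from scipy.stats import norm, expon

# Toy zeta-corrected weighted NLL: Gaussian "signal" plus exponential
# "background", fitted to weighted events (weights mimic sPlot output).
rng = np.random.default_rng(1)
m = np.concatenate([rng.normal(3.0, 0.01, 500), 2.9 + rng.exponential(0.2, 500)])
w = rng.uniform(0.5, 1.0, m.size)     # stand-in for sPlot signal weights

zeta = w.sum() / (w**2).sum()         # ensures proper uncertainty estimates

def nll(n_sig, n_bkg, mu, sigma, slope):
    pdf = (n_sig * norm.pdf(m, mu, sigma)
           + n_bkg * expon.pdf(m - 2.9, scale=slope))
    return zeta * (n_sig + n_bkg) - zeta * np.sum(w * np.log(pdf))

print(nll(500.0, 500.0, 3.0, 0.01, 0.2))
```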
§ EVENT SELECTION

A common strategy for the event selection, comprising several stages, is adopted for all final states. First, online requirements are applied at the trigger level, followed by an initial offline selection in which relatively loose criteria are applied. Boosted decision trees (BDTs) <cit.>, implemented using the TMVA software package <cit.>, are then used to further suppress the combinatorial background arising from random combinations of tracks originating from any PV. Finally, the requirements on the output of the BDTs and on the PID variables are simultaneously optimised for each final state, to maximise the statistical significance of the signal yields.

At the hardware trigger stage, events are required to have a muon with high p_T or a hadron with high transverse energy in the calorimeters. The software trigger requires a two-, three- or four-track secondary vertex (SV) with a significant displacement from any PV. At least one charged particle must have a large transverse momentum and be inconsistent with originating from a PV. A multivariate algorithm <cit.> is used for the identification of secondary vertices consistent with the decay of a b hadron into charged hadrons. In addition, for the 4h final states, an algorithm is used to identify inclusive ϕ→K^+K^- production at a secondary vertex, without requiring a decay consistent with a b hadron.

In the initial stage of the offline selection, candidates for B^0_s→pp̅h^+h^- and B^0_s→4hK^+K^- decays are required to have four (six) good-quality, high-p_T tracks consistent with coming from a vertex that is displaced from any PV in the event. Loose PID criteria are applied, requiring the tracks to be consistent with the types of hadrons corresponding to the respective final states. In addition, the B^0_s candidates, formed by the combination of the final-state candidates, are required to originate from a PV by requiring a small angle between the B^0_s candidate momentum vector and the vector joining this PV and the B^0_s decay vertex, and a small χ^2_IP, which is defined as the difference in the vertex-fit χ^2 of the considered PV reconstructed with and without the B^0_s candidate. When forming the η_c candidates for B^0_s→pp̅π^+π^- and B^0_s→pp̅K^+K^- decays, the pp̅ mass resolution is improved by performing a kinematic fit <cit.> in which the B^0_s candidate is constrained to originate from its associated PV (that with the smallest value of χ^2_IP for the B^0_s), and its reconstructed invariant mass is constrained to be equal to the known value of the B^0_s mass <cit.>. No significant improvement of the 4h mass resolution is observed for B^0_s→4hK^+K^- decays.

In order to reduce the combinatorial background, a first BDT, based on kinematic and topological properties of the reconstructed tracks and candidates, is applied directly at the initial stage of the offline selection of candidate B^0_s→4hK^+K^- decays. It is trained with events from dedicated simulation samples as signal and data from the reconstructed high-mass sidebands of the B^0_s candidates as background.

In the second step of the selection, the offline BDTs are applied. They are trained using the same strategy as that used for the training of the first BDT. The maximum distance of closest approach between final-state particles, the transverse momentum, and the χ^2_IP of each reconstructed track, as well as the vertex-fit χ^2 per degree of freedom, the χ^2_IP, and the pointing angle of the B^0_s candidates are used as input to the BDT classifiers used to select candidate B^0_s→pp̅π^+π^- and B^0_s→pp̅K^+K^- decays. For the pp̅K^+K^- final state, the direction angle, the flight-distance significance and the χ^2_IP of the reconstructed η_c candidate are also used as input to the BDT, while the p_T of the B^0_s candidate is used for the pp̅π^+π^- final state. The difference in the choice of input variables for the pp̅K^+K^- and the pp̅π^+π^- final states is due to the different PID requirements applied to pions and kaons in the first stage of the offline selection. The optimised requirements on the BDT output and PID variables for B^0_s→pp̅π^+π^- (B^0_s→pp̅K^+K^-) decays retain ∼45% (40%) of the signal and reject more than 99% (99%) of the combinatorial background, inside the mass-fit ranges defined in Sec. <ref>.

Dedicated BDT classifiers are trained to select candidate B^0_s→4hK^+K^- decays using the following set of input variables: the p_T and the IP with respect to the SV of all reconstructed tracks; the vertex-fit χ^2 of the η_c and ϕ candidates; and the vertex-fit χ^2, the p_T, the flight-distance significance with respect to the PV of the B^0_s candidate, and the angle between the momentum and the vector joining the primary to the secondary vertex of the B^0_s candidate. The optimised requirements on the BDT output and PID variables, for each of the 4h modes, retain about 50% of the signal and reject more than 99% of the combinatorial background inside the mass-fit ranges defined in Sec. <ref>.

From simulation, after all requirements for B^0_s→4hK^+K^- decays, a significant contamination is expected from B^0_s→D_s^∓3h^± decays, where the D_s^∓ decays to ϕπ^∓ and 3h is any combination of three charged kaons and pions. This background contribution has distributions similar to the signal in the 4hK^+K^- and K^+K^- invariant-mass spectra, while its distribution in the 4h invariant-mass spectrum is not expected to exhibit any peaking structure. In order to reduce this background contamination, the absolute difference between the known value of the D_s^∓ mass <cit.> and the reconstructed invariant mass of the system formed by the combination of the ϕ candidate and any signal candidate track consistent with a pion hypothesis is required to be greater than 17 MeV/c^2. This requirement is optimised using the significance of signal candidates with respect to background contributions. This significance is stable for cut values in the range [9, 25] MeV/c^2, with a maximum at 17 MeV/c^2, which removes about 90% of B^0_s→D_s^∓3h^± decays, with no significant signal loss.
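Schematically, the veto can be expressed as below; the four-vector convention and helper names are ours, not the LHCb selection code, and the masses are approximate known values.

```python
import numpy as np

# Schematic of the Ds veto described above; p4 convention is (E, px, py, pz)
# in MeV. M_DS and M_PI are approximate known masses in MeV/c^2.
M_DS, M_PI = 1968.3, 139.57

def passes_ds_veto(phi_p4, pion_candidate_p3s, cut=17.0):
    """Reject the candidate if any signal track, taken under the pion mass
    hypothesis, combines with the phi to within `cut` MeV/c^2 of m(Ds)."""
    for p3 in pion_candidate_p3s:
        e_pi = np.sqrt(M_PI**2 + np.dot(p3, p3))
        tot = phi_p4 + np.concatenate(([e_pi], p3))
        m2 = tot[0]**2 - np.dot(tot[1:], tot[1:])
        if abs(np.sqrt(max(m2, 0.0)) - M_DS) < cut:
            return False   # vetoed: consistent with Ds -> phi pi
    return True
```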
§ EFFICIENCY CORRECTION

The efficiency correction factors appearing in Eq. (<ref>) are obtained from fully simulated events. Since the signal and normalisation channels are selected with the same requirements and have the same final-state particles with very similar kinematic distributions, the ratio between the efficiency correction factors for B^0_s→η_c X and B^0_s→J/ψ X decays is expected to be close to unity. The efficiency correction factors include the geometrical acceptance of the LHCb detector, the reconstruction efficiency, and the efficiency of the offline selection criteria, including the trigger and PID requirements. The efficiencies of the PID requirements are obtained as a function of particle momentum and number of charged tracks in the event, using dedicated data-driven calibration samples of pions, kaons, and protons <cit.>. The overall efficiency is taken as the product of the geometrical acceptance of the LHCb detector, the reconstruction efficiency and the efficiency of the offline selection criteria. In addition, corrections are applied to account for the different lifetime values used in simulation with respect to the known values for the decay channels considered. The effective lifetime for B^0_s decays to the η_cϕ (η_cπ^+π^-) final state, being purely CP-even (CP-odd), is obtained from the known value of the decay width of the light (heavy) mass eigenstate <cit.>. The effective lifetime of B^0_s→J/ψϕ (B^0_s→J/ψπ^+π^-) decays is taken from Ref. <cit.>. The lifetime correction is obtained after reweighting the signal and normalisation simulation samples. The final efficiency correction factors, given in Table <ref>, are found to be compatible with unity, as expected.

§ FIT MODELS

In this section the fit models used for the measurement of the branching fractions are described: first the model used for B^0_s→η_cπ^+π^- decays in Sec. <ref>, then the model used for B^0_s→η_cϕ decays in Sec. <ref>.

§.§ Model for B^0_s→η_cπ^+π^- decays

Candidates are fitted in two stages. First, an extended UML fit to the pp̅π^+π^- invariant-mass spectrum is performed in the range 5150–5540 MeV/c^2, to discriminate B^0_s→pp̅π^+π^- events from combinatorial background, B^0→pp̅π^+π^- decays, and B^0→pp̅Kπ decays, where the kaon is misidentified as a pion. The pp̅π^+π^- mass distributions of B^0_s→pp̅π^+π^- and B^0→pp̅π^+π^- candidates are described by Hypatia functions <cit.>. Both Hypatia functions share common core resolution and tail parameters. The latter are fixed to values obtained from simulation. The distribution of the misidentified B^0→pp̅Kπ background is described by a Crystal Ball function <cit.>, with mode, power-law tail, and core resolution parameters fixed to values obtained from simulation. The combinatorial background is modelled using an exponential function.
The mode and the common core resolution parameters of the Hypatia functions and the slope of the exponential function, as well as all the yields, are allowed to vary in the fit to data. Using the information from the fit to the pp̅π^+π^- spectrum, signal weights are then computed and the background components are subtracted using the sPlot technique <cit.>. Correlations between the pp̅π^+π^- and pp̅ invariant-mass spectra, for both signal and backgrounds, are found to be negligible.

Second, a UML fit to the weighted pp̅ invariant-mass distribution is performed in the mass range 2900–3200 MeV/c^2. In this region, three event categories are expected to populate the pp̅ spectrum: the η_c and J/ψ resonances, as well as a possible contribution from nonresonant B^0_s→(pp̅)_NR π^+π^- decays. The pp̅ mass distribution of η_c candidates is described by the convolution of the square of the modulus of a complex relativistic Breit–Wigner function (RBW) with constant width and a function describing resolution effects. The expression of the RBW function is taken as

R_res(m; m_res, Γ_res) ∝ 1/(m^2_res - m^2 - i m_res Γ_res),

where m_res and Γ_res are the pole mass and the natural width, respectively, of the resonance. From simulation, in the mass range considered, the pp̅ invariant-mass resolution is found to be a few MeV/c^2, while Γ_η_c = 31.8 ± 0.8 MeV <cit.>. Thus, the pp̅ distribution of η_c candidates is expected to be dominated by the RBW, with only small effects on the total lineshape from the resolution. On the other hand, due to the small natural width of the J/ψ resonance <cit.>, the corresponding lineshape is assumed to be described to a very good approximation by the resolution function only. For the η_c and J/ψ lineshapes, Hypatia functions are used to parametrise the resolution, with tail parameters that are fixed to values obtained from simulation. A single core resolution parameter, σ_res^cc̅, shared between these two functions, is free to vary in the fit to data. The η_c pole mass and the mode of the Hypatia function describing the J/ψ lineshape, which can be approximated by the pole mass of the J/ψ resonance, are also free to vary, while the η_c natural width is constrained to its known value <cit.>. The possible contribution from B^0_s→(pp̅)_NR π^+π^- decays is parametrised by a constant.

The angular distributions of P- and S-waves are characterised by linear combinations of odd- and even-order Legendre polynomials, respectively. In the case of a uniform acceptance, after integration over the helicity angles, the interference between the two waves vanishes. For a non-uniform acceptance, after integration, only residual effects from the interference between J/ψ(→pp̅) and η_c(→pp̅) amplitudes can arise in the pp̅ invariant-mass spectra. Due to the limited size of the current data sample, these effects are assumed to be negligible. Also, given the sample size and the small expected contribution of the NR pp̅ component, interference between the η_c(→pp̅) and (pp̅)_NR amplitudes is neglected.

In order to fully exploit the correlation between the yields of η_c and J/ψ candidates, the former is parametrised in the fit, rearranging Eq. (<ref>), as

N_η_c = N_J/ψ × ℬ(B^0_s→η_cπ^+π^-)/ℬ(B^0_s→J/ψπ^+π^-) × ℬ(η_c→pp̅)/ℬ(J/ψ→pp̅) × ε(η_c)_pp̅/ε(J/ψ)_pp̅,

where ℬ(B^0_s→η_cπ^+π^-) and N_J/ψ are free parameters. The yield of the NR pp̅ component is also free to vary.
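To illustrate the η_c model above, the sketch below evaluates |R(m)|^2 over the fit range and smears it numerically. Replacing the Hypatia resolution by a Gaussian is our simplification, and the pole mass used is an illustrative world-average value (in the paper it is a free fit parameter).

```python
import numpy as np
from scipy.signal import fftconvolve

# eta_c lineshape sketch: |RBW|^2 with constant width, smeared with a
# Gaussian stand-in for the Hypatia resolution function.
M_ETAC, GAMMA_ETAC = 2983.4, 31.8   # MeV/c^2 and MeV; illustrative values
SIGMA_RES = 8.0                     # MeV/c^2, toy resolution

m = np.linspace(2900.0, 3200.0, 3001)
dm = m[1] - m[0]

rbw_sq = np.abs(1.0 / (M_ETAC**2 - m**2 - 1j * M_ETAC * GAMMA_ETAC)) ** 2

kernel = np.exp(-0.5 * ((m - m[m.size // 2]) / SIGMA_RES) ** 2)
lineshape = fftconvolve(rbw_sq, kernel / kernel.sum(), mode="same")
lineshape /= lineshape.sum() * dm   # unit-normalised density over the fit range
```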
§.§ Model for B^0_s→η_cϕ decays

The procedure and the fit model used to measure ℬ(B^0_s→η_cϕ) are based on those described in Sec. <ref>. However, several additional features are needed to describe the data, as detailed below.

The K^+K^- invariant mass is added as a second dimension in the first-step fit, which here consists of a two-dimensional (2D) fit to the pp̅K^+K^- or 4hK^+K^- and K^+K^- invariant-mass spectra. This allows the contributions from ϕ→K^+K^- decays and nonresonant K^+K^- pairs to be separated. Thus, the first step of the fitting procedure consists of four independent two-dimensional UML fits to the pp̅K^+K^- versus K^+K^- and 4hK^+K^- versus K^+K^- invariant-mass spectra in the ranges 5200–5500 MeV/c^2 and 990–1050 MeV/c^2, respectively.[In order to better constrain the combinatorial background shape, the upper limit of the pp̅K^+K^- invariant-mass range is extended to 5550 MeV/c^2.] Similar 2D fit models are used for each 4h mode. The 4hK^+K^- distributions of B^0_s→4hϕ signal and B^0→4hϕ background contributions, as well as those of B^0_s→4hK^+K^- and B^0→4hK^+K^- backgrounds, are described by Hypatia functions. The 4hK^+K^- distribution of the combinatorial background is parametrised using two exponential functions, one for when the K^+K^- pair arises from a random combination of two prompt kaons, and another for when the K^+K^- pair originates from the decay of a prompt ϕ meson. The K^+K^- distribution of each contribution including a ϕ in the final state is described by the square of the modulus of a RBW with mass-dependent width, convolved with a Gaussian function accounting for resolution effects. The K^+K^- distributions of the contributions including a nonresonant K^+K^- pair are parametrised by linear functions. The expression of the RBW with mass-dependent width describing the ϕ resonance is the analogue of Eq. (<ref>), with the mass-dependent width given by

Γ(m) = Γ_ϕ (q/q_ϕ)^3 (m_ϕ/m) X^2(qr),

where m_ϕ = 1019.461 ± 0.019 MeV/c^2 and Γ_ϕ = 4.266 ± 0.031 MeV <cit.>, and q is the magnitude of the momentum of one of the ϕ decay products, evaluated in the resonance rest frame, such that

q = (1/2)√(m^2 - 4m^2_K^±),

with m_K^± = 493.677 ± 0.016 MeV/c^2 <cit.>. The symbol q_ϕ denotes the value of q when m = m_ϕ. The X(qr) function is the Blatt–Weisskopf barrier factor <cit.> with a barrier radius of r. The value of the parameter r is fixed at 3 (GeV/c)^-1. Defining the quantity z = qr, the Blatt–Weisskopf barrier function for a spin-1 resonance is given by

X(z) = √((1 + z_ϕ^2)/(1 + z^2)),

where z_ϕ represents the value of z when m = m_ϕ.
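The mass-dependent width can be transcribed directly from these expressions; the sketch below does so in MeV, where the only bookkeeping point is the unit conversion of r from (GeV/c)^-1 to (MeV/c)^-1.

```python
import numpy as np

# Mass-dependent width of the phi with a spin-1 Blatt-Weisskopf factor,
# coded from the expressions above; r = 3 (GeV/c)^-1 = 3e-3 (MeV/c)^-1.
M_PHI, GAMMA_PHI, M_K = 1019.461, 4.266, 493.677   # MeV/c^2, MeV, MeV/c^2
R_BARRIER = 3.0e-3                                  # (MeV/c)^-1

def q_of(m):
    """Daughter-kaon momentum in the phi rest frame (valid for m > 2 M_K)."""
    return 0.5 * np.sqrt(m**2 - 4.0 * M_K**2)

def gamma_of(m):
    q, q_phi = q_of(m), q_of(M_PHI)
    z2, z2_phi = (q * R_BARRIER) ** 2, (q_phi * R_BARRIER) ** 2
    x2 = (1.0 + z2_phi) / (1.0 + z2)                # X^2(qr)
    return GAMMA_PHI * (q / q_phi) ** 3 * (M_PHI / m) * x2

def rbw_sq(m):
    """|RBW|^2 with mass-dependent width, the analogue of Eq. (<ref>)."""
    return np.abs(1.0 / (M_PHI**2 - m**2 - 1j * M_PHI * gamma_of(m))) ** 2

print(rbw_sq(np.linspace(990.0, 1050.0, 5)))
```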
The same 2D fit model is used for the pp̅ mode, with an additional component accounting for the presence of misidentified B^0→pp̅Kπ background events. The pp̅K^+K^- and K^+K^- distributions of B^0→pp̅Kπ candidates are described by a Crystal Ball function and a linear function, respectively.

Using the sets of signal weights computed from the 2D fits, the pp̅ and 4h spectra are obtained after subtraction of the combinatorial background and of the background candidates from B^0 decays and from decays with nonresonant K^+K^- pairs. Correlations between the invariant-mass spectra used in the 2D fits and the pp̅ or 4h spectrum are found to be negligible. A simultaneous UML fit is then performed to the weighted pp̅ and 4h invariant-mass distributions, with identical mass ranges of 2820–3170 MeV/c^2.

Different models are used to describe the pp̅ and 4h spectra. The pp̅ invariant-mass spectrum is modelled similarly to the description in Sec. <ref>. However, as shown in Sec. <ref>, the fit to the pp̅ spectrum for B^0_s→pp̅π^+π^- decays yields a contribution of NR pp̅ decays compatible with zero. Thus, here, the contribution of such decays is fixed to zero and only considered as a source of systematic uncertainty, as described in Sec. <ref>.

For the 4h modes, in addition to B^0_s→η_cϕ and B^0_s→J/ψϕ decays, other contributions are expected in the mass range considered: B^0_s→4hϕ decays, where the 4h system is in a nonresonant state with a total angular momentum equal to zero, and decays that proceed via intermediate resonant states decaying in turn into two or three particles, for instance B^0_s→PP′ϕ decays, where P and P′ could be any resonance such as K^*(892), ρ(770), ϕ(1020), ω(782), f_2(1270), f′_2(1525) and a_2(1320). Similarly to B^0_s→D_s^∓3h^± decays, all these decays are expected to have smooth distributions in the 4h invariant-mass spectra. Therefore, lacking information from previous measurements, all these contributions are merged into one category, denoted (4h)_bkg. The 4h nonresonant contribution is denoted (4h)_NR.

The B^0_s being a pseudoscalar particle, interference between B^0_s→η_c(→4h)ϕ and B^0_s→(4h)_NRϕ amplitudes for each 4h final state is accounted for in the model. On the other hand, given the large number of amplitudes contributing to the (4h)_bkg event category, the net effect of all interference terms is assumed to cancel. Similarly to the pp̅ fit model, terms describing residual effects of the interference between the J/ψ and the other fit components are neglected. The total amplitude for each of the 4h modes, integrated over the helicity angles, is then given by

|A(m_f; c^f_k, 𝐚)|^2 = ∑_k |c^f_k R_k(m_f; 𝐚)|^2 + 2 ℜ( c^f_η_c R_η_c(m_f; 𝐚) c^f∗_NR R^∗_NR(m_f; 𝐚) ),

where R_k(m_f; 𝐚) is the lineshape of the component k, 𝐚 represents the lineshape parameters, c^f_k are complex numbers such that c^f_k = α^f_k e^{iφ^f_k}, where α^f_k and φ^f_k are the magnitude and the strong phase of amplitude k, and m_f is one of the 4h invariant masses. The η_c and the J/ψ resonances are described similarly to the pp̅ mode, and the (4h)_NR and (4h)_bkg components are described using exponential functions. Finally, taking into account the detector resolution, the total function, ℱ_tot, used to describe the invariant-mass spectra m_f is given by

ℱ_tot(m_f; c^f_k, 𝐚, 𝐚′) = |A(m_f; c^f_k, 𝐚)|^2 ⊗ ℛ(𝐚′(m_f))
= ξ^f_η_c ℱ_η_c(m_f)/∫_m_f ℱ_η_c(m_f) dm_f + ξ^f_J/ψ ℱ_J/ψ(m_f)/∫_m_f ℱ_J/ψ(m_f) dm_f + ξ^f_NR ℱ_NR(m_f)/∫_m_f ℱ_NR(m_f) dm_f + ξ^f_bkg ℱ_bkg(m_f)/∫_m_f ℱ_bkg(m_f) dm_f + 2√(ξ^f_η_c ξ^f_NR) ℱ_I(m_f)/∫_m_f √(ℱ_η_c(m_f) ℱ_NR(m_f)) dm_f,

with ξ^f_k = (α^f_k)^2 and where the expressions for ℱ_k(m_f) are

ℱ_η_c(m_f) = |R_η_c(m_f; 𝐚)|^2 ⊗ ℛ(𝐚′(m_f)),
ℱ_J/ψ(m_f) = ℛ(𝐚′(m_f)),
ℱ_NR(m_f) = e^{κ_NR m_f} ⊗ ℛ(𝐚′(m_f)),
ℱ_bkg(m_f) = e^{κ_bkg m_f} ⊗ ℛ(𝐚′(m_f)),
ℱ_I(m_f) = ( e^{κ_NR m_f/2} ℜ[R_η_c(m_f; 𝐚) e^{iδφ}] ) ⊗ ℛ(𝐚′(m_f)),

where δφ is the difference between the strong phases of the B^0_s→(4h)_NRϕ and B^0_s→η_c(→4h)ϕ amplitudes. The integrals in Eq. (<ref>) are calculated over the mass range in which the fit is performed. Only the η_c and J/ψ components are used in the expression for ℱ_tot(m_pp̅). With each component density entering ℱ_tot normalised to unit integral over the fit range, the fit fractions FF_k measured for each component, as well as the interference fit fraction FF_I between the η_c and the NR amplitudes for the 4h modes, are calculated as

FF^f_k = ξ^f_k / ∫_m_f ℱ_tot(m_f) dm_f,
FF^f_I = 2√(ξ^f_η_c ξ^f_NR) ∫_m_f ℱ_I(m_f) dm_f / ( ∫_m_f √(ℱ_η_c(m_f) ℱ_NR(m_f)) dm_f ∫_m_f ℱ_tot(m_f) dm_f ).

The resolution, ℛ(𝐚′(m_f)), is described by a Hypatia function, with parameters 𝐚′(m_f) that depend on the final state and the invariant-mass region. They are estimated using dedicated simulation samples in two mass regions: a high-mass region around the J/ψ resonance, and a low-mass region around the η_c resonance. As in the model for B^0_s→pp̅π^+π^- decays, the branching fraction ℬ(B^0_s→η_cϕ) is directly determined in the fit. In this configuration, the squared magnitudes of the η_c amplitudes, ξ^f_η_c, are parametrised as

ξ^f_η_c = ξ^f_J/ψ × ℬ(B^0_s→η_cϕ)/ℬ(B^0_s→J/ψϕ) × ℬ(η_c→f)/ℬ(J/ψ→f) × ε(η_c)_f/ε(J/ψ)_f.
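A toy numerical sketch of the interference term and of a fit fraction follows. The NR slope, the strong phase and the ξ values are invented for illustration, the resolution convolution is omitted, and the fit fraction is computed with unit-normalised component densities, as in the expression for ℱ_tot above.

```python
import numpy as np

# Toy eta_c--NR interference and fit fraction; all numbers illustrative.
m = np.linspace(2820.0, 3170.0, 4000)
dm = m[1] - m[0]

A_etac = 1.0 / (2983.4**2 - m**2 - 1j * 2983.4 * 31.8)  # toy eta_c RBW
A_nr = np.exp(0.5 * (-2.0e-3) * m)                      # e^(kappa m / 2)
dphi, xi_etac, xi_nr = 0.8, 1.0, 0.3                    # toy phase and intensities

F_etac, F_nr = np.abs(A_etac) ** 2, A_nr**2
F_int = np.real(A_etac * np.exp(1j * dphi)) * A_nr      # cf. F_I above
I_etac, I_nr = F_etac.sum() * dm, F_nr.sum() * dm
I_int = np.sqrt(F_etac * F_nr).sum() * dm

F_tot = (xi_etac * F_etac / I_etac + xi_nr * F_nr / I_nr
         + 2.0 * np.sqrt(xi_etac * xi_nr) * F_int / I_int)

FF_etac = xi_etac / (F_tot.sum() * dm)   # fit fraction of the eta_c component
print(f"eta_c fit fraction ~ {FF_etac:.2f}")
```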
In the simultaneous fit to the pp̅ and 4h invariant-mass spectra, several parameters are allowed to take different values depending on the final state: the intensities ξ^f_k (free to vary); the slopes κ_bkg and κ_NR of the (4h)_bkg and (4h)_NR exponentials, respectively (free to vary); the relative strong phase between the (4h)_NR and η_c amplitudes (free to vary); and the low- and high-mass resolution parameters (fixed). The η_c pole mass, the mode of the Hypatia function describing the J/ψ, and the branching fraction ℬ(B^0_s→η_cϕ) are common parameters across all final states and are free to vary in the fit. The η_c width is fixed to the world-average value taken from Ref. <cit.>. For each mode, ξ^f_J/ψ and φ^f_J/ψ are fixed as reference to 1 and 0, respectively.

§ RESULTS

The yields of the various decay modes determined by the UML fit to the pp̅π^+π^- invariant-mass distribution, and from the 2D fits to the pp̅K^+K^- (4hK^+K^-) versus K^+K^- invariant-mass planes, are summarised in Table <ref>. The mass distributions and the fit projections are shown in Appendix <ref>. The pp̅π^+π^- and 2D fit models are validated using large samples of pseudoexperiments, from which no significant bias is observed.

The pp̅ invariant-mass distribution for B^0_s→pp̅π^+π^- candidates, and the projection of the fit, are shown in Fig. <ref>. The values of the η_c and J/ψ shape parameters as well as the yields are given in Table <ref>. The branching fraction for the B^0_s→η_cπ^+π^- decay mode is found to be

ℬ(B^0_s→η_cπ^+π^-) = (1.76 ± 0.59 ± 0.12 ± 0.29) × 10^-4,

where the two first uncertainties are statistical and systematic, respectively, and the third uncertainty is due to the limited knowledge of the external branching fractions. The systematic uncertainties on the branching fraction are discussed in Sec. <ref>. The significance of the presence of B^0_s→η_cπ^+π^- decays in the pp̅ invariant-mass spectrum is estimated, as √(-2Δln L), from the difference between the log-likelihood (ln L) values for N_η_c = 0 and for the value of N_η_c that maximises ln L. For the estimation of the significance, N_η_c is not parametrised as a function of ℬ(B^0_s→η_cπ^+π^-), but is a free parameter in the fit. As shown in Fig. <ref>, the significance of the η_c component in the fit to the pp̅ invariant-mass distribution is 5.0 standard deviations (σ) with statistical uncertainties only, and 4.6σ when including systematic uncertainties. The latter is obtained by adding Gaussian constraints to the likelihood function. This result is the first evidence for B^0_s→η_cπ^+π^- decays.
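The significance estimate amounts to a likelihood-ratio comparison between the null and best-fit hypotheses. In the sketch below the two NLL values are placeholders, chosen only so that the arithmetic reproduces the quoted 5.0σ.

```python
import numpy as np

# Sketch of the sqrt(-2 Delta lnL) significance estimate used above.
# The NLL values are hypothetical placeholders, not the fitted values.
nll_best = 1234.5   # NLL minimum with N_etac free
nll_null = 1247.0   # NLL with N_etac fixed to zero

significance = np.sqrt(2.0 * (nll_null - nll_best))
print(f"significance ~ {significance:.1f} sigma")   # sqrt(2 * 12.5) = 5.0
```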
The pp̅ and 4h invariant-mass distributions for B^0_s→pp̅ϕ and B^0_s→4hϕ candidates, and the projections of the simultaneous fit, are shown in Fig. <ref>. The values of the shape parameters, of the magnitudes and of the relative strong phases are given in Table <ref>. The statistical correlation matrix of the simultaneous fit is given in Appendix <ref>. The fit fractions are given in Table <ref>. The measured branching fraction for the B^0_s→η_cϕ decay mode is

ℬ(B^0_s→η_cϕ) = (5.01 ± 0.53 ± 0.27 ± 0.63) × 10^-4,

where the two first uncertainties are statistical and systematic, respectively, and the third uncertainty is due to the limited knowledge of the external branching fractions. This measurement corresponds to the first observation of B^0_s→η_cϕ decays. As a cross-check, individual fits to the pp̅ and to each of the 4h invariant-mass spectra give compatible values of ℬ(B^0_s→η_cϕ) within statistical uncertainties. The precision of the ℬ(B^0_s→η_cϕ) measurement obtained using each of the 4h modes is limited compared to the pp̅ mode. This is expected due to the presence of additional components below the η_c and J/ψ resonances in the 4h invariant-mass spectra, and due to the interference between B^0_s→η_c(→4h)ϕ and B^0_s→(4h)_NRϕ amplitudes. The measurement of ℬ(B^0_s→η_cϕ) from the simultaneous fit is largely dominated by the pp̅ mode.

§ SYSTEMATIC UNCERTAINTIES

As the expressions for ℬ(B^0_s→η_cπ^+π^-) and ℬ(B^0_s→η_cϕ) are based on ratios of observed quantities, only sources of systematic uncertainty that induce different biases in the numbers of observed η_c and J/ψ candidates are considered. The dominant source of systematic uncertainty is the limited knowledge of the external branching fractions. These uncertainties are estimated by adding Gaussian constraints on the external branching fractions in the fits, with widths corresponding to their known uncertainties <cit.>. A summary of the systematic uncertainties can be found in Table <ref>.

To assign systematic uncertainties due to the fixing of PDF parameters, the fits are repeated varying all of them simultaneously. The resolution parameters, estimated from simulation, are varied according to normal distributions, taking into account the correlations between the parameters and with variances related to the size of the simulated samples. The external parameters are varied within a normal distribution with mean and width fixed to their known values and uncertainties <cit.>. This procedure is repeated 1000 times, and for each iteration a new value of the branching fraction is obtained. The systematic uncertainties on the branching fraction are taken from the variance of the corresponding distributions.
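The parameter-variation procedure can be sketched as follows; `fit_bf` is a hypothetical stand-in for the full fit, and the parameter values, covariance and sensitivity are invented for illustration.

```python
import numpy as np

# Sketch of the 1000-iteration parameter-variation systematic: resample
# the fixed shape parameters from a normal distribution with their
# covariance, refit, and take the spread of the refitted result.
rng = np.random.default_rng(7)

mu_fixed = np.array([2983.4, 8.0])      # toy fixed parameters (mode, resolution)
cov_fixed = np.diag([0.5, 0.4]) ** 2    # toy covariance from simulation

def fit_bf(params):                      # hypothetical stand-in for the fit
    return 5.0e-4 * (1.0 + 0.02 * (params[1] - 8.0))

bf_values = [fit_bf(rng.multivariate_normal(mu_fixed, cov_fixed))
             for _ in range(1000)]
print(f"syst. from fixed parameters: {np.std(bf_values):.2e}")
```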
The systematic uncertainty due to the fixing of the values of the efficiencies is estimated by adding Gaussian constraints to the likelihood functions, with widths taken from the uncertainties quoted in Table <ref>.

The presence of intrinsic biases in the fit models is studied using parametric simulation. For this study, 1000 pseudoexperiments are generated and fitted using the nominal PDFs, where the generated parameter values correspond to those obtained in the fits to data. The biases on the branching fractions are then calculated as the difference between the generated values and the mean of the distribution of the fitted branching-fraction values.

To assign a systematic uncertainty from the model used to describe the detector resolution, the fits are repeated at each step replacing the Hypatia functions by bifurcated Crystal Ball functions, the parameters of which are obtained from simulation. The difference from the nominal branching-fraction result is assigned as a systematic uncertainty.

The Blatt–Weisskopf parameter r of the ϕ is arbitrarily set to 3 (GeV/c)^-1. To assign a systematic uncertainty due to the fixed value of this parameter, the fits are repeated for different values taken in the range 1.5–5.0 (GeV/c)^-1. The maximum differences from the nominal branching-fraction result are assigned as systematic uncertainties.

To assign a systematic uncertainty due to the assumption of a uniform acceptance, the simultaneous fit is repeated after correcting the 4h invariant-mass distributions for acceptance effects. A histogram describing the acceptance effects in each of the 4h invariant-mass spectra is constructed from the ratio of the normalised 4h invariant-mass distributions taken from simulated samples of B^0_s→4hϕ phase-space decays, obtained either directly from the generator or after processing through the full simulation chain. The simultaneous fit is repeated after applying a weight to each event, taken from the central value of its bin in the 4h invariant-mass distribution. The difference from the nominal branching-fraction result is assigned as a systematic uncertainty. No significant dependence on the binning choice is observed.

The systematic uncertainty due to neglecting the presence of a nonresonant pp̅ contribution in the pp̅ spectrum for B^0_s→pp̅ϕ candidates is estimated by repeating the simultaneous fit with an additional component described by an exponential function, where the slope and the yield are allowed to vary. The difference from the nominal branching-fraction result is assigned as a systematic uncertainty.

§ CONCLUSIONS

This paper reports the observation of B^0_s→η_cϕ decays and the first evidence for B^0_s→η_cπ^+π^- decays. The branching fractions are measured to be

ℬ(B^0_s→η_cϕ) = (5.01 ± 0.53 ± 0.27 ± 0.63) × 10^-4,
ℬ(B^0_s→η_cπ^+π^-) = (1.76 ± 0.59 ± 0.12 ± 0.29) × 10^-4,

where in each case the two first uncertainties are statistical and systematic, respectively, and the third uncertainties are due to the limited knowledge of the external branching fractions. The significance of the B^0_s→η_cπ^+π^- decay mode, including systematic uncertainties, is 4.6σ. The results for ℬ(B^0_s→η_cπ^+π^-) and ℬ(B^0_s→η_cϕ) are in agreement with the expectations based on Eqs. (<ref>), (<ref>) and (<ref>).

The data sample recorded by the LHCb experiment in Run 1 of the LHC is not sufficiently large to allow a measurement of the CP-violating phase ϕ_s from a time-dependent analysis of B^0_s→η_cϕ or B^0_s→η_cπ^+π^- decays. However, in the future, with significant improvement of the hadronic trigger efficiencies <cit.>, these decay modes may become of interest to add sensitivity to the measurement of ϕ_s.

§ ACKNOWLEDGEMENTS

We express our gratitude to our colleagues in the CERN accelerator departments for the excellent performance of the LHC. We thank the technical and administrative staff at the LHCb institutes. We acknowledge support from CERN and from the national agencies: CAPES, CNPq, FAPERJ and FINEP (Brazil); MOST and NSFC (China); CNRS/IN2P3 (France); BMBF, DFG and MPG (Germany); INFN (Italy); FOM and NWO (The Netherlands); MNiSW and NCN (Poland); MEN/IFA (Romania); MinES and FASO (Russia); MinECo (Spain); SNSF and SER (Switzerland); NASU (Ukraine); STFC (United Kingdom); NSF (USA). We acknowledge the computing resources that are provided by CERN, IN2P3 (France), KIT and DESY (Germany), INFN (Italy), SURF (The Netherlands), PIC (Spain), GridPP (United Kingdom), RRCKI and Yandex LLC (Russia), CSCS (Switzerland), IFIN-HH (Romania), CBPF (Brazil), PL-GRID (Poland) and OSC (USA). We are indebted to the communities behind the multiple open source software packages on which we depend.
Individual groups or members have received support from AvH Foundation (Germany), EPLANET, Marie Skłodowska-Curie Actions and ERC (European Union), Conseil Général de Haute-Savoie, Labex ENIGMASS and OCEVU, Région Auvergne (France), RFBR and Yandex LLC (Russia), GVA, XuntaGal and GENCAT (Spain), Herchel Smith Fund, The Royal Society, Royal Commission for the Exhibition of 1851 and the Leverhulme Trust (United Kingdom).

Appendix

§ FIT PROJECTIONS

The pp̅π^+π^- invariant-mass distribution and the fit projection are shown in Fig. <ref>. The four pp̅K^+K^- (4hK^+K^-) and K^+K^- invariant-mass distributions and the corresponding two-dimensional fit projections are shown in Figs. <ref> to <ref>.

§ CORRELATION MATRIX

The statistical correlation matrix for the simultaneous fit to the pp̅ and 4h invariant-mass distributions for B^0_s→pp̅ϕ and B^0_s→4hϕ candidates is given in Table <ref>.

LHCb collaboration

R. Aaij^40, B. Adeva^39, M. Adinolfi^48, Z. Ajaltouni^5, S. Akar^59, J. Albrecht^10, F. Alessio^40, M. Alexander^53, S. Ali^43, G. Alkhazov^31, P. Alvarez Cartelle^55, A.A. Alves Jr^59, S. Amato^2, S. Amerio^23, Y. Amhis^7, L. An^3, L. Anderlini^18, G. Andreassi^41, M. Andreotti^17,g, J.E. Andrews^60, R.B. Appleby^56, F. Archilli^43, P. d'Argent^12, J. Arnau Romeu^6, A. Artamonov^37, M. Artuso^61, E. Aslanides^6, G. Auriemma^26, M. Baalouch^5, I. Babuschkin^56, S. Bachmann^12, J.J. Back^50, A. Badalov^38, C. Baesso^62, S. Baker^55, V. Balagura^7,c, W. Baldini^17, R.J. Barlow^56, C. Barschel^40, S. Barsuk^7, W. Barter^56, F. Baryshnikov^32, M. Baszczyk^27, V. Batozskaya^29, B. Batsukh^61, V. Battista^41, A. Bay^41, L. Beaucourt^4, J. Beddow^53, F. Bedeschi^24, I. Bediaga^1, L.J. Bel^43, V. Bellee^41, N. Belloli^21,i, K. Belous^37, I. Belyaev^32, E. Ben-Haim^8, G. Bencivenni^19, S. Benson^43, A. Berezhnoy^33, R. Bernet^42, A. Bertolin^23, C. Betancourt^42, F. Betti^15, M.-O. Bettler^40, M. van Beuzekom^43, Ia. Bezshyiko^42, S. Bifani^47, P. Billoir^8, T. Bird^56, A. Birnkraut^10, A. Bitadze^56, A. Bizzeti^18,u, T. Blake^50, F. Blanc^41, J. Blouw^11,†, S. Blusk^61, V. Bocci^26, T. Boettcher^58, A. Bondar^36,w, N. Bondar^31,40, W. Bonivento^16, I. Bordyuzhin^32, A. Borgheresi^21,i, S. Borghi^56, M. Borisyak^35, M. Borsato^39, F. Bossu^7, M. Boubdir^9, T.J.V. Bowcock^54, E. Bowen^42, C. Bozzi^17,40, S. Braun^12, M. Britsch^12, T. Britton^61, J. Brodzicka^56, E. Buchanan^48, C. Burr^56, A. Bursche^2, J. Buytaert^40, S. Cadeddu^16, R. Calabrese^17,g, M. Calvi^21,i, M. Calvo Gomez^38,m, A. Camboni^38, P. Campana^19, D.H. Campora Perez^40, L. Capriotti^56, A. Carbone^15,e, G. Carboni^25,j, R. Cardinale^20,h, A. Cardini^16, P. Carniti^21,i, L. Carson^52, K. Carvalho Akiba^2, G. Casse^54, L. Cassina^21,i, L. Castillo Garcia^41, M. Cattaneo^40, G. Cavallero^20, R. Cenci^24,t, D. Chamont^7, M. Charles^8, Ph. Charpentier^40, G. Chatzikonstantinidis^47, M. Chefdeville^4, S. Chen^56, S.-F. Cheung^57, V. Chobanova^39, M. Chrzaszcz^42,27, X. Cid Vidal^39, G. Ciezarek^43, P.E.L. Clarke^52, M. Clemencic^40, H.V. Cliff^49, J. Closier^40, V. Coco^59, J. Cogan^6, E. Cogneras^5, V. Cogoni^16,40,f, L. Cojocariu^30, P. Collins^40, A. Comerma-Montells^12, A. Contu^40, A. Cook^48, G. Coombs^40, S. Coquereau^38, G. Corti^40, M. Corvo^17,g, C.M. Costa Sobral^50, B. Couturier^40, G.A. Cowan^52, D.C. Craik^52, A. Crocombe^50, M. Cruz Torres^62, S. Cunliffe^55, R. Currie^55, C. D'Ambrosio^40, F. Da Cunha Marinho^2, E. Dall'Occo^43, J. Dalseno^48, P.N.Y. David^43, A. Davis^3, K. De Bruyn^6, S. De Capua^56, M.
De Cian^12, J.M. De Miranda^1, L. De Paula^2, M. De Serio^14,d, P. De Simone^19, C.T. Dean^53, D. Decamp^4, M. Deckenhoff^10, L. Del Buono^8, M. Demmer^10, A. Dendek^28, D. Derkach^35, O. Deschamps^5, F. Dettori^40, B. Dey^22, A. Di Canto^40, H. Dijkstra^40, F. Dordei^40, M. Dorigo^41, A. Dosil Suárez^39, A. Dovbnya^45, K. Dreimanis^54, L. Dufour^43, G. Dujany^56, K. Dungs^40, P. Durante^40, R. Dzhelyadin^37, A. Dziurda^40, A. Dzyuba^31, N. Déléage^4, S. Easo^51, M. Ebert^52, U. Egede^55, V. Egorychev^32, S. Eidelman^36,w, S. Eisenhardt^52, U. Eitschberger^10, R. Ekelhof^10, L. Eklund^53, S. Ely^61, S. Esen^12, H.M. Evans^49, T. Evans^57, A. Falabella^15, N. Farley^47, S. Farry^54, R. Fay^54, D. Fazzini^21,i, D. Ferguson^52, A. Fernandez Prieto^39, F. Ferrari^15,40, F. Ferreira Rodrigues^2, M. Ferro-Luzzi^40, S. Filippov^34, R.A. Fini^14, M. Fiore^17,g, M. Fiorini^17,g, M. Firlej^28, C. Fitzpatrick^41, T. Fiutowski^28, F. Fleuret^7,b, K. Fohl^40, M. Fontana^16,40, F. Fontanelli^20,h, D.C. Forshaw^61, R. Forty^40, V. Franco Lima^54, M. Frank^40, C. Frei^40, J. Fu^22,q, W. Funk^40, E. Furfaro^25,j, C. Färber^40, A. Gallas Torreira^39, D. Galli^15,e, S. Gallorini^23, S. Gambetta^52, M. Gandelman^2, P. Gandini^57, Y. Gao^3, L.M. Garcia Martin^69, J. García Pardiñas^39, J. Garra Tico^49, L. Garrido^38, P.J. Garsed^49, D. Gascon^38, C. Gaspar^40, L. Gavardi^10, G. Gazzoni^5, D. Gerick^12, E. Gersabeck^12, M. Gersabeck^56, T. Gershon^50, Ph. Ghez^4, S. Gianì^41, V. Gibson^49, O.G. Girard^41, L. Giubega^30, K. Gizdov^52, V.V. Gligorov^8, D. Golubkov^32, A. Golutvin^55,40, A. Gomes^1,a, I.V. Gorelov^33, C. Gotti^21,i, R. Graciani Diaz^38, L.A. Granado Cardoso^40, E. Graugés^38, E. Graverini^42, G. Graziani^18, A. Grecu^30, P. Griffith^16, L. Grillo^21,40,i, B.R. Gruberg Cazon^57, O. Grünberg^67, E. Gushchin^34, Yu. Guz^37, T. Gys^40, C. Göbel^62, T. Hadavizadeh^57, C. Hadjivasiliou^5, G. Haefeli^41, C. Haen^40, S.C. Haines^49, B. Hamilton^60, X. Han^12, S. Hansmann-Menzemer^12, N. Harnew^57, S.T. Harnew^48, J. Harrison^56, M. Hatch^40, J. He^63, T. Head^41, A. Heister^9, K. Hennessy^54, P. Henrard^5, L. Henry^8, E. van Herwijnen^40, M. Heß^67, A. Hicheur^2, D. Hill^57, C. Hombach^56, H. Hopchev^41, W. Hulsbergen^43, T. Humair^55, M. Hushchyn^35, D. Hutchcroft^54, M. Idzik^28, P. Ilten^58, R. Jacobsson^40, A. Jaeger^12, J. Jalocha^57, E. Jans^43, A. Jawahery^60, F. Jiang^3, M. John^57, D. Johnson^40, C.R. Jones^49, C. Joram^40, B. Jost^40, N. Jurik^57, S. Kandybei^45, M. Karacson^40, J.M. Kariuki^48, S. Karodia^53, M. Kecke^12, M. Kelsey^61, M. Kenzie^49, T. Ketel^44, E. Khairullin^35, B. Khanji^12, C. Khurewathanakul^41, T. Kirn^9, S. Klaver^56, K. Klimaszewski^29, S. Koliiev^46, M. Kolpin^12, I. Komarov^41, R.F. Koopman^44, P. Koppenburg^43, A. Kosmyntseva^32, A. Kozachuk^33, M. Kozeiha^5, L. Kravchuk^34, K. Kreplin^12, M. Kreps^50, P. Krokovny^36,w, F. Kruse^10, W. Krzemien^29, W. Kucewicz^27,l, M. Kucharczyk^27, V. Kudryavtsev^36,w, A.K. Kuonen^41, K. Kurek^29, T. Kvaratskheliya^32,40, D. Lacarrere^40, G. Lafferty^56, A. Lai^16, G. Lanfranchi^19, C. Langenbruch^9, T. Latham^50, C. Lazzeroni^47, R. Le Gac^6, J. van Leerdam^43, A. Leflat^33,40, J. Lefrançois^7, R. Lefèvre^5, F. Lemaitre^40, E. Lemos Cid^39, O. Leroy^6, T. Lesiak^27, B. Leverington^12, T. Li^3, Y. Li^7, T. Likhomanenko^35,68, R. Lindner^40, C. Linn^40, F. Lionetto^42, X. Liu^3, D. Loh^50, I. Longstaff^53, J.H. Lopes^2, D. Lucchesi^23,o, M. Lucio Martinez^39, H. Luo^52, A. Lupato^23, E. Luppi^17,g, O. Lupton^40, A. Lusiani^24, X. 
Lyu^63, F. Machefert^7, F. Maciuc^30, O. Maev^31, K. Maguire^56, S. Malde^57, A. Malinin^68, T. Maltsev^36, G. Manca^16,f, G. Mancinelli^6, P. Manning^61, J. Maratas^5,v, J.F. Marchand^4, U. Marconi^15, C. Marin Benito^38, M. Marinangeli^41, P. Marino^24,t, J. Marks^12, G. Martellotti^26, M. Martin^6, M. Martinelli^41, D. Martinez Santos^39, F. Martinez Vidal^69, D. Martins Tostes^2, L.M. Massacrier^7, A. Massafferri^1, R. Matev^40, A. Mathad^50, Z. Mathe^40, C. Matteuzzi^21, A. Mauri^42, E. Maurice^7,b, B. Maurin^41, A. Mazurov^47, M. McCann^55,40, A. McNab^56, R. McNulty^13, B. Meadows^59, F. Meier^10, M. Meissner^12, D. Melnychuk^29, M. Merk^43, A. Merli^22,q, E. Michielin^23, D.A. Milanes^66, M.-N. Minard^4, D.S. Mitzel^12, A. Mogini^8, J. Molina Rodriguez^1, I.A. Monroy^66, S. Monteil^5, M. Morandin^23, P. Morawski^28, A. Mordà^6, M.J. Morello^24,t, O. Morgunova^68, J. Moron^28, A.B. Morris^52, R. Mountain^61, F. Muheim^52, M. Mulder^43, M. Mussini^15, D. Müller^56, J. Müller^10, K. Müller^42, V. Müller^10, P. Naik^48, T. Nakada^41, R. Nandakumar^51, A. Nandi^57, I. Nasteva^2, M. Needham^52, N. Neri^22, S. Neubert^12, N. Neufeld^40, M. Neuner^12, T.D. Nguyen^41, C. Nguyen-Mau^41,n, S. Nieswand^9, R. Niet^10, N. Nikitin^33, T. Nikodem^12, A. Nogay^68, A. Novoselov^37, D.P. O'Hanlon^50, A. Oblakowska-Mucha^28, V. Obraztsov^37, S. Ogilvy^19, R. Oldeman^16,f, C.J.G. Onderwater^70, J.M. Otalora Goicochea^2, A. Otto^40, P. Owen^42, A. Oyanguren^69, P.R. Pais^41, A. Palano^14,d, M. Palutan^19, A. Papanestis^51, M. Pappagallo^14,d, L.L. Pappalardo^17,g, W. Parker^60, C. Parkes^56, G. Passaleva^18, A. Pastore^14,d, G.D. Patel^54, M. Patel^55, C. Patrignani^15,e, A. Pearce^40, A. Pellegrino^43, G. Penso^26, M. Pepe Altarelli^40, S. Perazzini^40, P. Perret^5, L. Pescatore^41, K. Petridis^48, A. Petrolini^20,h, A. Petrov^68, M. Petruzzo^22,q, E. Picatoste Olloqui^38, B. Pietrzyk^4, M. Pikies^27, D. Pinci^26, A. Pistone^20, A. Piucci^12, V. Placinta^30, S. Playfer^52, M. Plo Casasus^39, T. Poikela^40, F. Polci^8, A. Poluektov^50,36, I. Polyakov^61, E. Polycarpo^2, G.J. Pomery^48, A. Popov^37, D. Popov^11,40, B. Popovici^30, S. Poslavskii^37, C. Potterat^2, E. Price^48, J.D. Price^54, J. Prisciandaro^39,40, A. Pritchard^54, C. Prouve^48, V. Pugatch^46, A. Puig Navarro^42, G. Punzi^24,p, W. Qian^50, R. Quagliani^7,48, B. Rachwal^27, J.H. Rademacker^48, M. Rama^24, M. Ramos Pernas^39, M.S. Rangel^2, I. Raniuk^45,†, F. Ratnikov^35, G. Raven^44, F. Redi^55, S. Reichert^10, A.C. dos Reis^1, C. Remon Alepuz^69, V. Renaudin^7, S. Ricciardi^51, S. Richards^48, M. Rihl^40, K. Rinnert^54, V. Rives Molina^38, P. Robbe^7,40, A.B. Rodrigues^1, E. Rodrigues^59, J.A. Rodriguez Lopez^66, P. Rodriguez Perez^56,†, A. Rogozhnikov^35, S. Roiser^40, A. Rollings^57, V. Romanovskiy^37, A. Romero Vidal^39, J.W. Ronayne^13, M. Rotondo^19, M.S. Rudolph^61, T. Ruf^40, P. Ruiz Valls^69, J.J. Saborido Silva^39, E. Sadykhov^32, N. Sagidova^31, B. Saitta^16,f, V. Salustino Guimaraes^1, C. Sanchez Mayordomo^69, B. Sanmartin Sedes^39, R. Santacesaria^26, C. Santamarina Rios^39, M. Santimaria^19, E. Santovetti^25,j, A. Sarti^19,k, C. Satriano^26,s, A. Satta^25, D.M. Saunders^48, D. Savrina^32,33, S. Schael^9, M. Schellenberg^10, M. Schiller^53, H. Schindler^40, M. Schlupp^10, M. Schmelling^11, T. Schmelzer^10, B. Schmidt^40, O. Schneider^41, A. Schopper^40, K. Schubert^10, M. Schubiger^41, M.-H. Schune^7, R. Schwemmer^40, B. Sciascia^19, A. Sciubba^26,k, A. Semennikov^32, A. Sergi^47, N. Serra^42, J. Serrano^6, L. Sestini^23, P. 
Seyfert^21, M. Shapkin^37, I. Shapoval^45, Y. Shcheglov^31, T. Shears^54, L. Shekhtman^36,w, V. Shevchenko^68, B.G. Siddi^17,40, R. Silva Coutinho^42, L. Silva de Oliveira^2, G. Simi^23,o, S. Simone^14,d, M. Sirendi^49, N. Skidmore^48, T. Skwarnicki^61, E. Smith^55, I.T. Smith^52, J. Smith^49, M. Smith^55, H. Snoek^43, l. Soares Lavra^1, M.D. Sokoloff^59, F.J.P. Soler^53, B. Souza De Paula^2, B. Spaan^10, P. Spradlin^53, S. Sridharan^40, F. Stagni^40, M. Stahl^12, S. Stahl^40, P. Stefko^41, S. Stefkova^55, O. Steinkamp^42, S. Stemmle^12, O. Stenyakin^37, H. Stevens^10, S. Stevenson^57, S. Stoica^30, S. Stone^61, B. Storaci^42, S. Stracka^24,p, M. Straticiuc^30, U. Straumann^42, L. Sun^64, W. Sutcliffe^55, K. Swientek^28, V. Syropoulos^44, M. Szczekowski^29, T. Szumlak^28, S. T'Jampens^4, A. Tayduganov^6, T. Tekampe^10, G. Tellarini^17,g, F. Teubert^40, E. Thomas^40, J. van Tilburg^43, M.J. Tilley^55, V. Tisserand^4, M. Tobin^41, S. Tolk^49, L. Tomassetti^17,g, D. Tonelli^40, S. Topp-Joergensen^57, F. Toriello^61, E. Tournefier^4, S. Tourneur^41, K. Trabelsi^41, M. Traill^53, M.T. Tran^41, M. Tresch^42, A. Trisovic^40, A. Tsaregorodtsev^6, P. Tsopelas^43, A. Tully^49, N. Tuning^43, A. Ukleja^29, A. Ustyuzhanin^35, U. Uwer^12, C. Vacca^16,f, V. Vagnoni^15,40, A. Valassi^40, S. Valat^40, G. Valenti^15, R. Vazquez Gomez^19, P. Vazquez Regueiro^39, S. Vecchi^17, M. van Veghel^43, J.J. Velthuis^48, M. Veltri^18,r, G. Veneziano^57, A. Venkateswaran^61, M. Vernet^5, M. Vesterinen^12, J.V. Viana Barbosa^40, B. Viaud^7, D.  Vieira^63, M. Vieites Diaz^39, H. Viemann^67, X. Vilasis-Cardona^38,m, M. Vitti^49, V. Volkov^33, A. Vollhardt^42, B. Voneki^40, A. Vorobyev^31, V. Vorobyev^36,w, C. Voß^9, J.A. de Vries^43, C. Vázquez Sierra^39, R. Waldi^67, C. Wallace^50, R. Wallace^13, J. Walsh^24, J. Wang^61, D.R. Ward^49, H.M. Wark^54, N.K. Watson^47, D. Websdale^55, A. Weiden^42, M. Whitehead^40, J. Wicht^50, G. Wilkinson^57,40, M. Wilkinson^61, M. Williams^40, M.P. Williams^47, M. Williams^58, T. Williams^47, F.F. Wilson^51, J. Wimberley^60, J. Wishahi^10, W. Wislicki^29, M. Witek^27, G. Wormser^7, S.A. Wotton^49, K. Wraight^53, K. Wyllie^40, Y. Xie^65, Z. Xing^61, Z. Xu^4, Z. Yang^3, Y. Yao^61, H. Yin^65, J. Yu^65, X. Yuan^36,w, O. Yushchenko^37, K.A. Zarebski^47, M. Zavertyaev^11,c, L. Zhang^3, Y. Zhang^7, A. Zhelezov^12, Y. Zheng^63, X. Zhu^3, V. Zhukov^33, S. Zucchelli^15.^1Centro Brasileiro de Pesquisas Físicas (CBPF), Rio de Janeiro, Brazil^2Universidade Federal do Rio de Janeiro (UFRJ), Rio de Janeiro, Brazil^3Center for High Energy Physics, Tsinghua University, Beijing, China^4LAPP, Université Savoie Mont-Blanc, CNRS/IN2P3, Annecy-Le-Vieux, France^5Clermont Université, Université Blaise Pascal, CNRS/IN2P3, LPC, Clermont-Ferrand, France^6CPPM, Aix-Marseille Université, CNRS/IN2P3, Marseille, France^7LAL, Université Paris-Sud, CNRS/IN2P3, Orsay, France^8LPNHE, Université Pierre et Marie Curie, Université Paris Diderot, CNRS/IN2P3, Paris, France^9I. 
Physikalisches Institut, RWTH Aachen University, Aachen, Germany^10Fakultät Physik, Technische Universität Dortmund, Dortmund, Germany^11Max-Planck-Institut für Kernphysik (MPIK), Heidelberg, Germany^12Physikalisches Institut, Ruprecht-Karls-Universität Heidelberg, Heidelberg, Germany^13School of Physics, University College Dublin, Dublin, Ireland^14Sezione INFN di Bari, Bari, Italy^15Sezione INFN di Bologna, Bologna, Italy^16Sezione INFN di Cagliari, Cagliari, Italy^17Sezione INFN di Ferrara, Ferrara, Italy^18Sezione INFN di Firenze, Firenze, Italy^19Laboratori Nazionali dell'INFN di Frascati, Frascati, Italy^20Sezione INFN di Genova, Genova, Italy^21Sezione INFN di Milano Bicocca, Milano, Italy^22Sezione INFN di Milano, Milano, Italy^23Sezione INFN di Padova, Padova, Italy^24Sezione INFN di Pisa, Pisa, Italy^25Sezione INFN di Roma Tor Vergata, Roma, Italy^26Sezione INFN di Roma La Sapienza, Roma, Italy^27Henryk Niewodniczanski Institute of Nuclear PhysicsPolish Academy of Sciences, Kraków, Poland^28AGH - University of Science and Technology, Faculty of Physics and Applied Computer Science, Kraków, Poland^29National Center for Nuclear Research (NCBJ), Warsaw, Poland^30Horia Hulubei National Institute of Physics and Nuclear Engineering, Bucharest-Magurele, Romania^31Petersburg Nuclear Physics Institute (PNPI), Gatchina, Russia^32Institute of Theoretical and Experimental Physics (ITEP), Moscow, Russia^33Institute of Nuclear Physics, Moscow State University (SINP MSU), Moscow, Russia^34Institute for Nuclear Research of the Russian Academy of Sciences (INR RAN), Moscow, Russia^35Yandex School of Data Analysis, Moscow, Russia^36Budker Institute of Nuclear Physics (SB RAS), Novosibirsk, Russia^37Institute for High Energy Physics (IHEP), Protvino, Russia^38ICCUB, Universitat de Barcelona, Barcelona, Spain^39Universidad de Santiago de Compostela, Santiago de Compostela, Spain^40European Organization for Nuclear Research (CERN), Geneva, Switzerland^41Institute of Physics, Ecole PolytechniqueFédérale de Lausanne (EPFL), Lausanne, Switzerland^42Physik-Institut, Universität Zürich, Zürich, Switzerland^43Nikhef National Institute for Subatomic Physics, Amsterdam, The Netherlands^44Nikhef National Institute for Subatomic Physics and VU University Amsterdam, Amsterdam, The Netherlands^45NSC Kharkiv Institute of Physics and Technology (NSC KIPT), Kharkiv, Ukraine^46Institute for Nuclear Research of the National Academy of Sciences (KINR), Kyiv, Ukraine^47University of Birmingham, Birmingham, United Kingdom^48H.H. 
Wills Physics Laboratory, University of Bristol, Bristol, United Kingdom^49Cavendish Laboratory, University of Cambridge, Cambridge, United Kingdom^50Department of Physics, University of Warwick, Coventry, United Kingdom^51STFC Rutherford Appleton Laboratory, Didcot, United Kingdom^52School of Physics and Astronomy, University of Edinburgh, Edinburgh, United Kingdom^53School of Physics and Astronomy, University of Glasgow, Glasgow, United Kingdom^54Oliver Lodge Laboratory, University of Liverpool, Liverpool, United Kingdom^55Imperial College London, London, United Kingdom^56School of Physics and Astronomy, University of Manchester, Manchester, United Kingdom^57Department of Physics, University of Oxford, Oxford, United Kingdom^58Massachusetts Institute of Technology, Cambridge, MA, United States^59University of Cincinnati, Cincinnati, OH, United States^60University of Maryland, College Park, MD, United States^61Syracuse University, Syracuse, NY, United States^62Pontifícia Universidade Católica do Rio de Janeiro (PUC-Rio), Rio de Janeiro, Brazil, associated to ^2^63University of Chinese Academy of Sciences, Beijing, China, associated to ^3^64School of Physics and Technology, Wuhan University, Wuhan, China, associated to ^3^65Institute of Particle Physics, Central China Normal University, Wuhan, Hubei, China, associated to ^3^66Departamento de Fisica , Universidad Nacional de Colombia, Bogota, Colombia, associated to ^8^67Institut für Physik, Universität Rostock, Rostock, Germany, associated to ^12^68National Research Centre Kurchatov Institute, Moscow, Russia, associated to ^32^69Instituto de Fisica Corpuscular, Centro Mixto Universidad de Valencia - CSIC, Valencia, Spain, associated to ^38^70Van Swinderen Institute, University of Groningen, Groningen, The Netherlands, associated to ^43^aUniversidade Federal do Triângulo Mineiro (UFTM), Uberaba-MG, Brazil^bLaboratoire Leprince-Ringuet, Palaiseau, France^cP.N. Lebedev Physical Institute, Russian Academy of Science (LPI RAS), Moscow, Russia^dUniversità di Bari, Bari, Italy^eUniversità di Bologna, Bologna, Italy^fUniversità di Cagliari, Cagliari, Italy^gUniversità di Ferrara, Ferrara, Italy^hUniversità di Genova, Genova, Italy^iUniversità di Milano Bicocca, Milano, Italy^jUniversità di Roma Tor Vergata, Roma, Italy^kUniversità di Roma La Sapienza, Roma, Italy^lAGH - University of Science and Technology, Faculty of Computer Science, Electronics and Telecommunications, Kraków, Poland^mLIFAELS, La Salle, Universitat Ramon Llull, Barcelona, Spain^nHanoi University of Science, Hanoi, Viet Nam^oUniversità di Padova, Padova, Italy^pUniversità di Pisa, Pisa, Italy^qUniversità degli Studi di Milano, Milano, Italy^rUniversità di Urbino, Urbino, Italy^sUniversità della Basilicata, Potenza, Italy^tScuola Normale Superiore, Pisa, Italy^uUniversità di Modena e Reggio Emilia, Modena, Italy^vIligan Institute of Technology (IIT), Iligan, Philippines^wNovosibirsk State University, Novosibirsk, Russia^†Deceased